Confidential

Initiative Registry — Tilden Mine

Purpose: Living tracker of all AI/digital initiatives identified during the Tilden site sprint.

Workflow:
- Day 1-2: Identify (status: identified)
- Day 2-3: Validate in workshops (status: validated or rejected)
- Day 4: Size — $ estimates, confidence, feasibility (status: sized)
- Day 5: Prioritize — matrix placement, sequence (status: prioritized)

Carry-forward from previous sites: Review the project catalog (phases/03-consolidation/project-catalog.md) for hypotheses to validate here. Use the validation questions listed per project.

Important: Tilden is a mine, not a steel mill. Many corporate projects (PRJ-01..08) were defined from steelmaking evidence. Some translate directly, some need reframing, and some don't apply. Mining-specific hypotheses are seeded below.

Last updated: 2026-04-16 (appendix preparation — no content changes, field evidence preserved as collected)


Hypotheses to Validate from Previous Sites

Adapted for mining context. Not all steel-site projects apply at Tilden.

Projects That Translate to Mining

| Project | Horizon | Steel Sites Status | Mining Translation | Key Question for Tilden |
| --- | --- | --- | --- | --- |
| PRJ-01 Ops-Maint Integration | H1 | CLV ★★, MDT ★★ — strongest signal everywhere | Ops-Maint data disconnect in mining — same tribal knowledge, same paper processes, same information silos? | Does mine ops talk to maintenance? Same Tabware pain? Or different CMMS? |
| PRJ-03 PdM Platform | H1→H2 | CLV ★, MDT partial | Heavy mobile equipment PdM — haul trucks, shovels, drills + fixed plant PdM — AG mills, kilns, conveyors | What condition monitoring exists? Telematics on trucks? Vibration on mills? |
| PRJ-06 Maint Workflow (copilot + procurement) | H1 | CLV ★, MDT ★ | Same voice capture, parts lookup, $500 threshold pain? | Wi-Fi in the pit? Tablets in trucks? Cell coverage? |
| Knowledge Capture / Virtual SME | | MDT ★★ (leadership #1) | Mining expertise capture — concentrator ops, blast design, pellet plant, ore body knowledge | Who are the tribal knowledge holders? Retirement risk? |

Projects That Need Reframing

| Project | Horizon | Steel Sites Status | Mining Reframe | Key Question for Tilden |
| --- | --- | --- | --- | --- |
| PRJ-02 Scheduling / S&IOP | H3 | CLV ★, MDT (seed) | Mine plan optimization — pit sequencing, production scheduling, stockpile management | How is the mine plan managed? What software? How far ahead? |
| PRJ-04 Quality & Yield | H2→H3 | CLV partial, MDT ★★ | Pellet quality prediction — from ore blend through concentrator to pellet plant specs | How are pellet specs tracked? Any quality model? Recovery rate tracking? |
| PRJ-07 Logistics | H2 | CLV partial, MDT ★ | Mine-to-dock logistics — LS&I rail, stockpile → dock → ship, seasonal planning | How is the shipping season planned? Rail scheduling manual? Stockpile management? |

Projects That Don't Apply at Tilden

| Project | Why Not |
| --- | --- |
| PRJ-05 Cobble & Process Risk | No Hot Strip Mill — this is a mine |
| PRJ-08 Caster Chemistry | No caster — this is a mine |

Master Summary Table

| ID | Initiative | Horizon | Project | Status | Value ($/yr) | Confidence | Complexity | Priority |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TLD-01 | Mining Ops–Maintenance Data Integration | H1 | PRJ-01 | identified ★★ | $2-5M | High | Med | |
| TLD-02 | Heavy Mobile Equipment PdM (Trucks & Shovels) | H1→H2 | PRJ-03 | identified | $3-8M | High | Med | |
| TLD-03 | Fixed Plant PdM (AG Mills, Kilns, Conveyors) | H1→H2 | PRJ-03 | identified | | | | |
| TLD-04 | Haul Truck Fleet Dispatching Optimization | H2 | new | identified ★★★ | $2-6M | Med-High | Med | ★ Ryan Top 3 |
| TLD-05 | Drill & Blast Pattern Optimization | H2 | new | identified ★★★ | $1-3M | Med-High | Med | |
| TLD-06 | Ore Grade Control & Blend Optimization | H2 | TLD-P17 (deferred) | identified ★★ | $3-8M | Med | Strategic | |
| TLD-07 | Concentrator AG Mill Throughput Optimization | H2 | new | identified ★★ | $2-5M | Med | Med | |
| TLD-08 | Flotation Recovery Optimization | H1→H2 | TLD-P01 (POV) | validated ★★★ | $4-10M | Med-High | Med | ★ POV scope |
| TLD-09 | Pellet Quality Prediction | H2 | PRJ-04 (reframed) | identified | | | | |
| TLD-10 | Kiln & Grate Temperature Optimization | H2 | new | seed | | | | |
| TLD-11 | Concentrator Energy Optimization | H2 | new | identified | | | | |
| TLD-12 | Maintenance Workflow Digitization (Copilot) | H1 | PRJ-06 | identified | $0.5-2M | High | Quick Win | |
| TLD-13 | Procurement Automation (Parts & Consumables) | H1 | PRJ-06 | identified | $1-3M | High | Quick Win | |
| TLD-14 | Mining Knowledge Capture / Virtual SME | H1→H2 | new (aligns Virtual SME) | identified | $0.5-2M | High | Med | |
| TLD-15 | Mine Plan & Production Scheduling | H3 | PRJ-02 (reframed) | identified ★★ | $3-8M | Med | Strategic | |
| TLD-16 | Vessel/Shipping Schedule & Rail Coordination | H1→H2 | PRJ-07 (reframed) | validated ★★★ | $2-6M | High | Med | ★ Top 3 |
| TLD-17 | Haul Road & Pit Slope Monitoring | H2 | new | seed | | | | |
| TLD-18 | Environmental Compliance Analytics (Selenium/Water) | H2 | new | seed | | | | |
| TLD-19 | Tire Management & Prediction (incl. Michelin comparison) | H1 | PRJ-03 | validated ★★★ | $1-3M | High | Quick Win | ★ Stepping stone |
| TLD-20 | Safety Analytics (Proximity/Fatigue/Collision) | H2 | new | identified | $0.5-2M | Med | Med | |
| TLD-21 | Concentrator Feed-Forward Control | H2 | TLD-P17 (deferred) | validated ★★★ | $2.5-5M | High | Med | ★ Phase 2 |
| TLD-22 | Filter Performance Monitoring (42 filters) | H1 | new | identified ★★ | $1-3M | High | Quick Win | |
| TLD-23 | Reagent Suite Optimization (Dispersant Standardization) | H1 | TLD-P01 (POV) | validated ★★★ | $2.5-5M | High | Med | ★ POV scope — Track 1 |
| TLD-24 | Workplace & Equipment Inspection Digitization | H1 | PRJ-06 | identified ★★ | $0.3-1M | High | Quick Win | |
| TLD-25 | Mine Production Reporting Automation | H1 | new | identified | $0.2-0.5M | High | Quick Win | |
| TLD-26 | Operator Performance & Payload Analytics | H1 | new | identified | $1-3M | Med-High | Quick Win | |
| TLD-27 | Environmental Compliance Knowledge System | H1→H2 | new | identified | $0.3-1M | Med | Med | |
| TLD-28 | Utilities/Energy Consumption Forecasting | H2 | new | identified | $0.5-2M | Med | Med | |
| TLD-29 | HR/Workforce Overtime Forecasting | H1 | new | identified | $0.2-0.5M | Med | Quick Win | |
| TLD-30 | Parts Warehouse Digitization (Barcode/Scanner) | H1 | PRJ-06 | identified | $0.2-0.5M | High | Quick Win | |
| TLD-31 | Stockpile Ore Distribution Modeling | H2 | TLD-P17 (deferred) | identified ★★ | $2-5M | Med-High | Med | |
| TLD-32 | Concentrator Operator Decision Support (Live CRP) | H1 | TLD-P01 (POV) | validated ★★★ | $1-3M | High | Med | ★ POV scope |
| TLD-33 | HPGR Feed Rate Root Cause Analysis | H1 | new | identified | $1-3M | Med | Quick Win | |
| TLD-34 | Pellet Calcium Control Automation | H1 | new | identified | $0.5-2M | High | Quick Win | |
| TLD-35 | ELLIPS Inventory Master Data Cleanup | H1 | PRJ-06 | identified ★★ | $0.5-2M | High | Quick Win | |
| TLD-36 | Maintenance Parts & Budget Forecasting | H1→H2 | new | identified | $0.5-2M | Med-High | Med | |
| TLD-37 | Railroad Asset Maintenance Analytics | H2 | PRJ-03 | identified | $0.3-1M | Low-Med | Med | |
| TLD-38 | HPGR Knowledge Base + PdM Pilot | H1 | PRJ-06 + PRJ-03 | validated ★★★ | $0.5-2M | High | Quick Win | ★ Stepping stone |
| TLD-39 | Major Repair Schedule Optimization | H1→H2 | new | identified ★★ | $1-3M | Med-High | Med | |
| TLD-40 | Maintenance Resource & Workforce Scheduling | H1→H2 | new | identified | $0.5-2M | Med | Med | |
| TLD-41 | Deferred Maintenance Risk Quantification | H2 | new | identified ★★ | $1-5M | Med | Med | |
| TLD-42 | Cross-Asset Failure Pattern Search | H1 | PRJ-01 | identified ★★ | $0.5-2M | High | Quick Win | |
| TLD-43 | Maintenance Training Content Generation | H1→H2 | Virtual SME | identified | $0.3-1M | Med | Med | |
| TLD-44 | Employee Onboarding Automation | H1 | new | identified | $0.1-0.5M | Med | Quick Win | |
| TLD-45 | Modular Dispatch ↔ ELLIPS Automated Integration | H1 | PRJ-01 | identified ★★ | $1-3M | High | Med | |
| TLD-46 | Duty-Cycle Based Maintenance (Tons vs Hours) | H2 | new | identified ★★ | $2-5M | Med-High | Strategic | |
| TLD-47 | Fleet Capital Replacement & Lifecycle Planning | H2→H3 | new | identified ★★ | $3-8M | Med | Strategic | |
| TLD-48 | OEM Parts Catalog & PM Procedure Auto-Import | H1 | PRJ-06 | identified ★★ | $0.5-2M | High | Quick Win | |
| TLD-49 | Dispatch Status Auto-Correction | H1 | PRJ-01 | identified ★★ | $0.3-1M | High | Quick Win | |
| TLD-50 | Real-Time Mine Plan Deviation Alerting | H1→H2 | new | identified ★★ | $1-3M | Med-High | Med | |
| TLD-51 | Shift Handover & Ops Knowledge Base | H1 | PRJ-01 + Virtual SME | identified ★★ | $0.5-2M | High | Quick Win | |
| TLD-52 | Labor/BLA Contract Knowledge Assistant | H1 | Virtual SME | identified ★★ | $0.2-0.5M | High | Quick Win | |
| TLD-53 | Drill Consumable Predictive Ordering | H1 | new | identified | $0.2-0.5M | Med | Quick Win | |
| TLD-54 | Beaker Test Vision & Standardization | H1 | TLD-P01 (POV) | validated ★★ | $0.5-2M | High | Quick Win | ★ POV scope |

Status key: seed = from pre-visit research | identified = emerged from field conversations | validated = confirmed in workshop | sized = $ attached | prioritized = on matrix | rejected = not viable


Initiative Detail Cards

TLD-01: Mining Ops–Maintenance Data Integration

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-01 — Ops-Maintenance Data Integration |
| Status | identified ★★★ — Day 2 Plant Maintenance + Day 2 Mine Maintenance |
| Source | Cross-site pattern (CLV ★★, MDT ★★) + Day 2 Plant Maintenance: Andrew Mullen cross-site validation + Day 2 Mine Maintenance: Modular ↔ ELLIPS disconnect confirmed |
| Field champion | George Harmon (reliability eng) + Gary (area maintenance) + Pete Austin (mine maint section mgr) |

Problem statement:

At every CLF site visited so far, the #1 pain is the disconnect between operations and maintenance — data lives in silos, work orders don't close the loop, and tribal knowledge substitutes for system data. Mining operations add a mobile equipment dimension (truck/shovel maintenance coordinated with pit operations) that may make this worse.

Proposed solution: Unified view of mining operations + maintenance data, connecting fleet management systems with CMMS. Real-time equipment status, planned vs. unplanned downtime tracking, close-the-loop on work orders.

Day 2 evidence (Plant Maintenance):

- ★★ Andrew Mullen (corporate AI program manager) explicitly validates cross-site pattern: "Doesn't matter what CMMS it is... regardless if it's Teams at Middletown, if it's Tabware at Burns Harbor, if it's ELLIPS here, it's not getting done because it's just too cumbersome."
- ★★ Parallel systems confirmed — George Harmon lists: ELLIPS (work orders), DCS (continuous monitoring — temp, current, vibration), drawing database (60,000 prints), relay system (delay tracking), Business Objects, Power BI. "None of them talking to each other."
- ★ Operations delay reporting inconsistent — "Operators will write it in a book or type it in. But it's hit and miss."
- ★ DCS signals visible after failures — "There were signs of this three months ago. Something had changed. It's just equipment... there's so much data that usually we're looking at it after the fact."
- ★ Ops delay categories too coarse — Operations logs the "mill" as the delay cause but maintenance needs to know which specific component. Reconciliation between abstraction levels is complex.

Current state: ELLIPS (CMMS) + DCS + drawing database (60K prints) + relay system + Business Objects + Power BI + Oracle. All separate. Operations writes delays inconsistently. Maintenance can't easily search across similar equipment for patterns. DCS shows signs months before failure but nobody's watching proactively.
Target state: Integrated ops-maintenance dashboard, automated work order flow, data-driven scheduling
Value estimate: $2-5M/yr (aligned with steel site estimates; mining equipment costs are higher)
Confidence: High (pattern validated at 3/3 sites — now corporate-validated by Andrew Mullen)
Data readiness: Partial — ELLIPS has good history, DCS data exists, integration is the challenge
Systems: ELLIPS (CMMS), Foxboro IA DCS, drawing database, relay system, Business Objects, Power BI, Oracle
Complexity: Medium
Dependencies: Data platform (Microsoft Fabric likely)

Comparison with other sites:

- Cleveland: 70/30 reactive/planned, zero close-the-loop. Tabware + SQL reporting + tribal knowledge.
- Middletown: Same pattern but better managed by competent individuals (AK Steel culture).
- Tilden: 70/30 PM/reactive (better baseline), ELLIPS well-maintained, but same silo problem.
- ★ Andrew Mullen's statement is the definitive corporate validation of the cross-site pattern — 3 sites, 3 CMMS, same problem.

Day 2 evidence (Mine Maintenance):

- ★★ Modular Dispatch ↔ ELLIPS completely disconnected — Dispatch system (Modular) has good data on equipment operating hours, keys on/off, routes, loaded/unloaded status, fuel fill events. ELLIPS needs this for PM scheduling. "Talking with Molly, there isn't really a way to put that into ELLIPS right now. The information's there. They just don't know how to get it into ELLIPS."
- ★★ Equipment hours = manual handwriting transcription — Operators write hours on paper inspection sheets. Chase/George manually enter into ELLIPS. "About 4 hours entering those hours and taking them out, putting in another spreadsheet." Every Monday morning just to build next week's schedule.
- ★★ ELLIPS makes flawed predictions — If a machine is down 28 days, ELLIPS calculates lower average usage and pushes PMs out, not realizing the machine will run 22 hrs/day when it returns. "It's not smart enough to realize that it was broke down for two days." (See the sketch after this list.)
- ★ Operations ↔ maintenance = blast quality dependency — Blast quality directly affects shovel/equipment wear. Bad blast = bad dig = more equipment damage. "Maintenance work is directly dependent on the quality of the operations."
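
A minimal sketch of the availability-adjusted usage-rate logic the team is describing; the dates, meter values, and input shape are hypothetical stand-ins, not actual Modular or ELLIPS fields:

```python
from datetime import date

def usage_rate(meter_readings, downtime_days):
    """Estimate daily usage from cumulative meter history, excluding known
    downtime. meter_readings: (date, cumulative_hours) pairs, oldest first.
    Both inputs are hypothetical stand-ins for a Modular export."""
    (d0, h0), (d1, h1) = meter_readings[0], meter_readings[-1]
    calendar_days = (d1 - d0).days
    operating_days = max(calendar_days - downtime_days, 1)
    naive = (h1 - h0) / calendar_days       # what ELLIPS effectively computes
    adjusted = (h1 - h0) / operating_days   # rate while the unit was available
    return naive, adjusted

# Invented example: 616 engine hours over 56 calendar days, 28 of them down.
naive, adjusted = usage_rate(
    [(date(2026, 2, 1), 10_000), (date(2026, 3, 29), 10_616)],
    downtime_days=28,
)
print(f"naive {naive:.0f} h/day vs availability-adjusted {adjusted:.0f} h/day")
# The naive 11 h/day pushes PMs out; the adjusted 22 h/day schedules them sooner.
```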

Open questions:
- [x] What CMMS is used at Tilden? → ELLIPS (confirmed Day 1 — third different CMMS across CLF)
- [x] How does mobile equipment maintenance integrate with fixed plant maintenance? → Separate teams, separate ELLIPS use. Mine maint = Pete Austin's group. Plant maint = Gary/George. Same ELLIPS but different workflows.
- [x] Is there a fleet management system? What data does it capture? → Modular dispatch — GPS, cycle times, keys on/off, loaded/unloaded, fuel events. Rich data but disconnected from ELLIPS.
- [ ] Does the mine ops team see maintenance data? Does maintenance see production targets?


TLD-02: Heavy Mobile Equipment PdM (Trucks & Shovels)

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | PRJ-03 — Predictive Maintenance Platform |
| Status | identified ★★ — Day 1 + Day 2 Mine Maintenance |
| Source | Mining industry standard + cross-site PdM pattern + Day 2 Mine Maint: telemetry, duty cycle, wheel motor lifecycle data |
| Field champion | Pete Austin (section mgr, 30 yrs) + Chase Lincoln (reliability eng) + John Lokowitz (pit electrical) |

Problem statement:

Haul trucks and shovels are the highest-value mobile assets in the mine. At 170-190 ton capacity, each truck represents millions in capital. Tires cost $50-80K each. Unplanned breakdowns halt ore flow to the concentrator. Industry leaders (Rio Tinto, BHP, Vale) have achieved 30%+ reductions in unplanned downtime through PdM on mobile fleets.

Proposed solution: Telematics-based predictive maintenance for haul truck engines, drivetrains, tires, and hydraulic systems. Vibration/temperature/oil analysis for shovels and drills.

Day 1 evidence:

Site leader: "Generally, our maintenance strategy in the mining area is pretty calendar-based... I would say we probably over-maintain here, just because it's not the bottleneck, we spend a lot. But they run well, and they know engines last twenty thousand dollars. So we're changing engine every twenty thousand." 4 shovels, ~15 × 320-ton haul trucks, 1 primary crusher confirmed. Calendar-based maintenance = conservative approach, likely over-spending to prevent failures.

Day 1 evidence (Lunch session + Shop visit):

- Onboard computers on all equipment — fault codes, warnings, diagnostic trouble codes (DTCs). "All of these machines put out so much information. What are we focused on? What can we process?"
- ★ Different OEM systems — Cat, Komatsu, multiple diagnostic platforms. "Half a dozen or more different types of system." Each OEM has its own proprietary fault code system.
- ★ Equipment history in ELLIPS — work orders go back, long history, well-documented.
- ★ PM vs reactive at ~70/30 — maintenance team reported. Better than steel sites.
- Shop confirmed: Wheel motors = $300,000 each. Engine modules are multi-million dollar assemblies. Tires = $70,000 each, chains = $100,000/set. Oil sampling program for wheel motors already in place.
- Fleet composition: 12-14 equipment classes. 320-ton haul trucks + 150-ton trucks. 4 electric rope shovels (Komatsu P&H). Fleet runs 24/7.
- Camera system on equipment — exists but "doesn't give you much information."
- Fire suppression systems — onboard fire systems.
- Shovel overloading — operators overload by ~15%, damaging equipment (especially engines). Need payload feedback.

Current state: Calendar-based PM. ~70/30 PM/reactive. Onboard computers + fault codes exist but fragmented across OEM systems. Oil sampling on wheel motors. Equipment history in ELLIPS. No unified telematics platform.
Target state: Remaining useful life (RUL) predictions for key components. Unified fault code ingestion across OEMs. Optimized maintenance scheduling.
Value estimate: $3-8M/yr (truck availability improvement + tire life extension + reduced catastrophic failures + reduced overloading damage)
Confidence: High (proven at scale in mining industry globally, data exists — just fragmented)
Data readiness: Partial — onboard data exists per-OEM, ELLIPS history exists, integration is the challenge
Systems: Modular (dispatch/GPS), Cat/Komatsu onboard systems, ELLIPS (CMMS), oil sampling program
Complexity: Medium (data integration from multiple OEM platforms is the main challenge)
Dependencies: TLD-01 (data integration), OEM data access agreements

Comparison with other sites:

Steel mills focus PdM on fixed assets (cranes, bag houses, scrubbers). Mining PdM targets mobile equipment — fundamentally different. This is net-new for our engagement but well-proven in the global mining industry. Rio Tinto runs 400+ autonomous haul trucks; Vale reduced conveyor downtime 30% with PdM. Key Tilden advantage: ELLIPS work order history is well-maintained, and PM rate is already 70% — better starting point than steel sites.

Day 2 evidence (Mine Maintenance) ★★:

- ★★ Machine telemetry = untapped gold — Equipment onboard computers have fuel burn, duty cycle, component stress data. Currently extracted manually via laptop every 3-4 months during engine work. Pete: "Either we didn't buy the subscription or it's not going to work the way we wanted to — it's untapped data right now."
- ★★ Duty cycle inequality — Not every operating hour is equal. Priority #1 shovel sees trucks constantly, bottom priority sees a couple per hour. Pit bottom vs top rim = vastly different truck loads. "The truck fleet has been managed as a conveyor belt." All treated equal even though loads vary dramatically.
- ★★ Fuel burn = engine lifecycle metric — Engine ($1M) should last 1.4M gallons of fuel, not just X hours. "Sometimes you're burning 30 gallons an hour, sometimes 60." Operating hours alone miss the actual duty-cycle stress on the engine. Fuel data is in the onboard computer but not extracted regularly. (See the sketch after this list.)
- ★★ Tons vs hours paradigm for shovels — Shovel ropes currently on tonnage-based intervals. Major components (motors, structures) on hours. Shovels running same hours but vastly different tons → identical maintenance schedules despite different wear profiles.
- ★ Wheel motor on 4th rebuild = uncharted territory — $300K each, manufacturer said 22K hours, CLF pushed to 35K. Rich rebuild history data from vendor reports (parts per rebuild). "We pick this right, we don't own, but AI can help us."
- ★ Shovel availability high 80s, truck availability ~85% — 6,000-6,500 operating hours/year per shovel. 70% utilization.
- ★ Shovels $27-30M each, 20-25 year asset, 150K hours lifecycle — "$1M/year" per shovel in operating costs.
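
A toy illustration of fuel burned as the engine-life clock. The 1.4M-gallon engine life and the 30-60 gal/h duty-cycle swing come from the notes above; the example meter values are invented:

```python
ENGINE_LIFE_GALLONS = 1_400_000   # ~1.4M gallons per engine life (Pete's figure)

def life_consumed(gallons_burned, hours_run, gph_low=30.0, gph_high=60.0):
    """Fraction of engine life consumed: fuel-based, vs the band an
    hours-only view implies given a 30-60 gal/h duty-cycle swing."""
    fuel_based = gallons_burned / ENGINE_LIFE_GALLONS
    hours_band = (hours_run * gph_low / ENGINE_LIFE_GALLONS,
                  hours_run * gph_high / ENGINE_LIFE_GALLONS)
    return fuel_based, hours_band

# Invented example: 700K gallons burned across 18K operating hours.
fuel_based, (lo, hi) = life_consumed(700_000, 18_000)
print(f"fuel-based: {fuel_based:.0%} of life; hours-only band: {lo:.0%}-{hi:.0%}")
# Fuel says 50%; hours alone could claim anywhere from 39% to 77%.
```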

Open questions:
- [x] What telematics are on the haul trucks? → Onboard computers per OEM, GPS via Modular dispatch, camera system (limited). Data extracted manually via laptop every 3-4 months.
- [x] Are shovels instrumented? → 4 Komatsu P&H electric rope shovels, onboard diagnostics. Rope wear tracked by tonnage.
- [x] What is the current truck availability rate? → ~85%. Shovel availability high 80s.
- [ ] How many unplanned truck breakdowns per month?
- [x] What is the annual tire spend? → 108 tires/yr × $70K = ~$7.5M/yr. Plus chains $100K/set.


TLD-03: Fixed Plant PdM (AG Mills, Kilns, Conveyors)

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | PRJ-03 — Predictive Maintenance Platform |
| Status | identified ★ — Day 1 first contact + Day 2 Plant Maintenance |
| Source | Cross-site PdM pattern + concentrator/pellet plant criticality + Day 2 maintenance team evidence |
| Field champion | George Harmon (reliability engineering) |

Problem statement:

The concentrator has 12 AG mills and 24 pebble mills running continuously. The pellet plant has rotary kilns firing at 2,200°F. Conveyors move material at every stage. These are fixed assets similar to steel mill equipment — but in a harsher environment (ore dust, heavy loads, thermal cycling). Unplanned mill or kiln downtime directly stops pellet production.

Proposed solution: Vibration monitoring, bearing temperature, motor current analysis for mills. Refractory and shell condition monitoring for kilns. Belt condition and alignment monitoring for conveyors.

Day 1 evidence:

- Site leader on lube systems: "One things that we have a lot of problem with... lube systems and the running lines. The supply pumps usually fail. They overheat. Some of it might be contamination, some might be condition of the system... tolerances are out of specs."
- On 42 filters: "There's not real good instrumentation on each of the 42... things can pop up and you don't even recognize it for two or three days, and you might not even recognize there's a problem until three or four problems pile up on each other."
- Condition monitoring: "Concentrating and pelletizing, there's a lot more condition-based — thermography, vibration, oil analysis."
- SKF provides vibration analysis (hired analysts on-site).
- Hardwired sensors only — manual collection at intervals. No continuous field sensing.

Day 2 evidence (Plant Maintenance):

- ★★ George Harmon spends hours/week on failure analysis — "Trying to do failure analysis. Identifying. Seeing what's causing equipment delays. Hours a week looking for prints."
- ★★ DCS signs visible 3 months before failure — "When you go back and look... yeah, there were signs of this three months ago. Something had changed. Current increased, then we had a leaking seal, then vibration work orders, and eventually the part failed." (See the sketch after this list.)
- ★★ Breadcrumb trail across systems — "Preceding equipment failure... there's indicators in our ELLIPS history. There's also indicators in the DCS — temperature, vibration, current. But it takes a long time to piece that information together after the fact, let alone before."
- ★ Bull gear risk example — George describes a worn bull gear creating cascading risk to pinions, ancillary damage. Deferred maintenance creates compounding failure risk.
- ★ George's ideal: "Come in in the morning, ask my phone how's the plant running?" — describes real-time plant status, constraint identification, work order status, prioritization. = Maintenance copilot vision, independently articulated.
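
One hedged sketch of what "watching proactively" could look like: a trailing-baseline shift detector on a daily DCS signal such as motor current. The window sizes, threshold, and synthetic data are illustrative only:

```python
import random
import statistics

def drift_alerts(series, baseline_n=90, window_n=7, z_limit=3.0):
    """Flag sustained shifts in a daily signal against a trailing ~90-day
    baseline, a minimal stand-in for catching 'the signs were there three
    months ago' before the failure instead of after."""
    alerts = []
    for i in range(baseline_n + window_n, len(series) + 1):
        base = series[i - baseline_n - window_n : i - window_n]
        recent = series[i - window_n : i]
        mu, sd = statistics.mean(base), statistics.pstdev(base)
        if sd and abs(statistics.mean(recent) - mu) / sd >= z_limit:
            alerts.append(i - 1)   # day index where the shift is visible
    return alerts

# Synthetic motor-current trace: stable, then a 5% step change on day 120.
random.seed(0)
amps = [100 + random.gauss(0, 1) for _ in range(120)] + \
       [105 + random.gauss(0, 1) for _ in range(60)]
print("first alert on day:", drift_alerts(amps)[0])   # fires shortly after 120
```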

Current state: Condition-based maintenance in concentrator/pelletizing (thermography, vibration via SKF, oil analysis). But gaps: lube systems fail frequently, 42 filters have poor instrumentation, no continuous field sensing. Failure analysis requires piecing together breadcrumb trail across ELLIPS + DCS + drawings — currently takes hours, always after the fact.
Target state: Predictive models for AG mill liner wear, kiln refractory condition, conveyor belt life. Real-time monitoring on filters and lube systems. Automated cross-system failure pattern detection before equipment fails.
Value estimate: $2-5M/yr (mill availability + kiln campaign life + conveyor reliability)
Confidence: Medium-High (gaps clearly identified, directly tied to bottleneck, DCS breadcrumb trail confirmed)
Data readiness: Partial — SKF vibration data exists, DCS data exists, ELLIPS history well-maintained. But filter/lube instrumentation weak.
Systems: DCS (Foxboro IA), SKF vibration, ELLIPS (CMMS), Pi historian (1.3B entries), drawing database (60K prints)
Complexity: Medium
Dependencies: TLD-01 (data integration), TLD-42 (cross-asset failure search)

Comparison with other sites:

- Cleveland: bag house, scrubbers, cranes. Middletown: similar fixed assets.
- The concentrator mills and pellet kilns are the direct parallel — high-value, continuous-duty, harsh-environment assets.
- Lube system failures are a specific Tilden pain that may not have a parallel.
- Day 2 evidence: George Harmon's DCS breadcrumb trail description is the clearest articulation of the PdM opportunity at any site — signs visible 3 months out but nobody's watching.

Open questions:
- [x] What vibration monitoring exists? → SKF vibration analysts (hired), hardwired sensors, manual collection at intervals
- [ ] How is kiln refractory condition tracked?
- [ ] What is the current AG mill liner replacement schedule? Predictive or time-based?
- [ ] What conveyor belt failure rate?
- [ ] What would it take to instrument the 42 filters?


TLD-04: Haul Truck Fleet Dispatching Optimization

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — Mining-specific |
| Status | identified ★★ — Day 1 (Lunch session) |
| Source | Day 1 lunch discussion — Molly (dispatch admin), mining team, engineering |
| Field champion | Molly (dispatch administrator) + Tyler Craig (mining engineer) |

Problem statement:

With 320-ton and 150-ton trucks running 24/7, optimal truck-shovel assignment and route planning directly impacts tons moved per hour. Dispatch is currently managed through the Modular system with GPS tracking, but the dispatcher role is the human bottleneck — shovel priorities come from engineering via Excel, dispatchers must manually configure the system, and non-regular dispatchers lose proficiency. Queue time at shovels, idle time at crushers, and haul route selection create compounding inefficiencies.

Proposed solution: AI-driven fleet dispatching — dynamic truck-shovel assignment based on shovel readiness, truck location, crusher capacity, and road conditions. AI-assisted dispatcher decision support for non-expert dispatchers. Automated ingestion of engineering shovel priorities (eliminate manual Excel → Modular data entry). Real-time optimization recommendations.
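
To make the optimization layer concrete, here is a minimal greedy assignment sketch. The scoring rule, field names, and every number are invented placeholders rather than Modular's actual data model:

```python
def assign_trucks(trucks, shovels):
    """Greedy truck -> shovel assignment: send each truck where expected
    travel plus queue wait is lowest, discounted by the engineering
    priority weight. A real dispatcher aid would re-solve continuously
    as GPS and status data update."""
    queues = {s: v["queue"] for s, v in shovels.items()}
    assignment = {}
    for truck, eta in trucks.items():
        def score(s):
            wait = queues[s] * shovels[s]["load_min"]   # trucks ahead of us
            return (eta[s] + wait) / shovels[s]["priority"]
        best = min(shovels, key=score)
        assignment[truck] = best
        queues[best] += 1                               # we now hold a slot
    return assignment

# Hypothetical snapshot: S32 is the engineering priority; ETAs in minutes.
shovels = {"S32": {"queue": 2, "load_min": 4, "priority": 3.0},
           "S35": {"queue": 0, "load_min": 4, "priority": 1.0}}
trucks = {"T101": {"S32": 6, "S35": 9},
          "T102": {"S32": 5, "S35": 5}}
print(assign_trucks(trucks, shovels))   # {'T101': 'S32', 'T102': 'S35'}
```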

Day 1 evidence (Lunch session):

- Modular dispatch system confirmed — GPS tracking, cycle times, shovel priorities, equipment status. "All of these equipment running off high precision GPS in the dispatch system for tracking."
- ★ Dispatcher = critical human bottleneck — "What kind of resources are available to them? When they have to sit in the chair and somebody is on vacation... they can call that person if they have to make a decision."
- ★ Training gap — "A couple weeks to get somebody through a shift. To optimize? You're probably talking your flow." Months to become proficient.
- ★ Intermittent dispatchers — "People coming in from the fields, not in there all the time... they haven't been in there for months, trying to get that muscle memory back."
- ★ Engineering priorities → dispatch — Shovel priorities come from engineering via Excel. "Engineering shovel priorities, what material we need to move faster than others, which ones need the most tracks in any given moment."
- ★ Manual errors — "That's another step where we can go. That's not that cost... errors because when people ask to take information from one document to another system, inevitably you get little mistakes. And these cost money."
- ★ Ore blending in real-time — Dispatcher must manage blend percentages: "We need 20% of all [ore types] to go in an hour into the crusher."
- Molly Bradley — dispatch administrator, manages network/interfaces. Key contact.

Current state: Modular GPS dispatch system. Dispatcher manually configures shovel priorities from engineering Excel. Non-expert dispatchers take months to become proficient. Real-time blend targets manually managed.
Target state: AI-assisted dispatch optimization. Automated ingestion of engineering priorities. Decision support for non-expert dispatchers. Real-time blend optimization recommendations.
Value estimate: $2-6M/yr (5-15% improvement in fleet productivity, reduced dispatch errors, improved ore blend consistency)
Confidence: Med-High (Modular system and data exist, dispatcher pain clearly articulated, multiple speakers validated)
Data readiness: Good — Modular collects GPS, cycle times, equipment status. Engineering priorities exist in Excel.
Systems: Modular (dispatch/GPS), engineering Excel files, crusher DCS
Complexity: Medium (data infrastructure exists, optimization layer is the addition)
Dependencies: Modular data access, engineering cooperation

Comparison with other sites:

Closest analogy is PRJ-07 (logistics optimization) at Middletown — coil movement optimization (40 trips/day, 2hr/day manual planning). Same class of problem — human dispatcher + manual data + optimization opportunity. Palmer named logistics as his #1 priority. If Palmer's logistics interest applies to mining, this is a strong candidate.

Day 2 evidence (Mine Operations):

- ★★ Priority deviation = recurring problem. Brad Koski (ops mgr): "Recently we had a supervisor that did not read the map packet correctly... he moved the wrong piece of equipment. We needed shovel 32 down mining... instead they moved a drill." Plan priority miscommunication caused wrong equipment movement.
- ★★ Real-time priority flag requested by Andrew Mullen: "If we could have something that's saying real time, hey, you're getting way off plan. This is what I recommend to get back on plan." See TLD-50.
- ★ Dispatch supervisor + field supervisor = two people per shift with hands in the plan. Communication gaps between them contribute to misalignment.
- ★ Dispatch doesn't push map to field equipment — "They don't have access to the map in the units out there. It's the supervisor and the dispatcher that have access."

Open questions:
- [x] How are trucks currently dispatched? → Modular system with GPS, dispatcher manually configures shovel priorities from engineering Excel
- [x] Is there GPS on all trucks? → Yes, high-precision GPS via Modular dispatch
- [x] How many trucks and shovels in the active fleet? → 12-14 truck classes (320-ton + 150-ton), 4 shovels
- [ ] What is the current fleet utilization rate?
- [ ] What is the average queue time at shovels?
- [ ] What is the current ore blend compliance rate?


TLD-05: Drill & Blast Pattern Optimization ★★★

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — Mining-specific |
| Status | identified ★★★ — Day 1 + Day 2 Mine Operations (massive validation) |
| Source | Mining industry practice + ore body complexity + Day 1 + Day 2 Mine Ops: Jeff Domann (pit supervisor, 28 yrs) + Dan Kernan + Kevin (mine eng) |
| Field champion | Jeff Domann (pit supervisor, 28 yrs — blast crew + yard crew) + Tyler Craig (mine engineering) |

Problem statement:

Drill and blast is the first step in the value chain. Pattern design (hole spacing, depth, explosive type, timing) determines fragmentation quality, which cascades through crushing and grinding efficiency. Over-blasting wastes explosive and creates fines; under-blasting creates oversize that jams crushers. With rotary drills boring 16" holes at 50' depth, the data exists to optimize.

Proposed solution: Data-driven blast design using drill performance data (rate of penetration, torque, vibration, pull-down pressure, rotary speed) as a proxy for rock hardness, combined with geological model data to optimize blast patterns for target fragmentation. Closed-loop integration: drill data → hardness index → explosive density per hole → Dyno's automated truck delivery.
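
A toy sketch of the hardness-index-to-loading bridge. The index formula, breakpoints, and hole values are invented and would need calibration against Vulcan rock types and Dyno's real input format:

```python
def energy_index(pulldown_kn, rotary_rpm, rop_m_per_h):
    """Specific-energy-style hardness proxy per hole: more pull-down force
    and rotary speed for less penetration implies harder rock. A real index
    would be calibrated against assays and fragmentation results."""
    return pulldown_kn * rotary_rpm / max(rop_m_per_h, 0.1)

def load_multiplier(index, soft=500.0, hard=2500.0, lo=0.7, hi=1.3):
    """Map the index onto an explosive-loading multiplier versus today's
    blanket load (1.0): soft holes lighter, hard holes heavier. The
    breakpoints are made up, pending field calibration."""
    t = min(max((index - soft) / (hard - soft), 0.0), 1.0)
    return lo + t * (hi - lo)

# Hypothetical holes: (pull-down kN, rotary rpm, rate of penetration m/h).
for hole, (pd, rpm, rop) in {"H100": (250, 80, 30), "H101": (350, 70, 12)}.items():
    idx = energy_index(pd, rpm, rop)
    print(f"{hole}: index={idx:,.0f} -> load x{load_multiplier(idx):.2f}")
# H100 (soft, fast drilling) gets ~0.75x; H101 (hard, slow) gets ~1.16x.
```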

Day 1 evidence:

"We just bought a drill with that capability... bolt-on packages that allow you to remote operate or to operate themselves. That's probably the easiest implementation of that technology." Every drill hole is sampled for ore quality at ~10ft spacing. Data exists from production drills. This is early-stage — one autonomous-capable drill, still learning.

Day 2 evidence (Mine Operations) ★★★:

- ★★★ Complete drill-to-blast pipeline described by Jeff Domann: Drill captures per-hole data (pull-down pressure, rotary speed, cycle time, high-precision GPS location). Explosive contractor (Dyno, adjacent property) has trucks capable of knowing which hole they're at and auto-loading correct density. "They have that capability on their trucks now." The technology exists on both ends — the missing piece is the data bridge between drill and explosive truck.
- ★★★ Blanket loading = current state: "Right now we basically blanket load the patterns. We load them to the same height all of them. Because that's the way we've always done it." Over-blasting soft holes, under-blasting hard holes. Direct savings on soft holes + better fragmentation on hard holes.
- ★★ Jeff's vision fully articulated: "If we could bring that data in, our explosive manufacturer has capabilities on their trucks to know what hole they're pulled up next to, say it's hole 100. They would know the hardness of the material in that hole and would automatically know how much to put in." Takes the human blaster out of the equation.
- ★★ Energy index concept already conceived: "We've been looking at adjusting our loads, getting energy indexes off the drills and then adjusting our load in the holes per hole based off of that energy index. We've never been able to bring that full circle just because there's so much data."
- ★ ~15,000 drill holes/year — massive multiplier. "If we can influence a half a percent or less, the multiplier is huge because trucks are going to take 200,000 loads this year."
- ★ Sampling confirmed: Every other hole sampled, sent to lab, assayed, fed into Vulcan model. Both exploration drilling (2" cores, 1,000 ft) and production drill sampling.

Current state: Blanket-loaded patterns (same explosive density every hole). Drill data captured (pull-down pressure, rotary speed, GPS, cycle time) but not processed into blast design. ~15,000 holes/yr. Dyno explosive contractor adjacent to site with auto-density truck capability. Both ends of the data pipeline exist — bridge is missing.
Target state: Per-hole explosive density optimization. Drill hardness index → Dyno software → automated truck loading. Soft holes loaded lighter (savings), hard holes loaded heavier (better fragmentation). Human blaster removed from density decisions.
Value estimate: $1-3M/yr (explosive savings on soft holes + grinding energy reduction from better fragmentation on hard holes + reduced rework from under-blasted zones)
Confidence: Med-High (★★★ — both technologies exist, contractor has the capability, drill data exists, team has already conceptualized the solution. Gap is purely data integration.)
Data readiness: High — drill data exists per hole (pull-down pressure, rotary speed, GPS), Dyno has software and truck capability. Need: data export format from drills + Dyno's input format.
Systems: Drill onboard computers (high-precision GPS, pull-down, rotary), Modular dispatch, Dyno explosive delivery software, Vulcan mine model
Complexity: Medium (data bridge + contractor coordination, not a build-from-scratch problem)
Dependencies: Dyno's willingness to integrate, drill data export capability, Vulcan model rock type data

Comparison with other sites:

No parallel at steel mills. Purely mining-specific. Could apply at other CLF mining operations (Hibbing Taconite, United Taconite, Northshore). ★ This is the best-articulated mine-specific initiative from Day 2 — Jeff Domann is a clear champion with 28 years of domain knowledge and has already been thinking about this solution.

Open questions:
- [x] What data do the rotary drills capture? → Pull-down pressure, rotary speed, cycle time per hole, high-precision GPS. Air and water possibly tracked too.
- [x] How are blast patterns currently designed? → Blanket loaded — same density every hole. No per-hole optimization.
- [ ] What fragmentation monitoring exists? Photo analysis?
- [x] How variable is the ore body hardness across the pit? → High variability — iron formation vs. intrusive (waste rock). Vulcan model tracks rock types.
- [ ] What is Dyno's input data format for their auto-density trucks?
- [ ] What is the current explosive spend per year?


TLD-06: Ore Grade Control & Blend Optimization

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — Mining-specific (Terry Fedor flagged as Tier 2 priority) |
| Status | identified ★★ — Day 1 first contact |
| Source | Terry Fedor (VSP2), site profile research, Day 1 transcript |
| Field champion | TBD — mine geology + process engineering (site leader flagged lab researcher as key) |

Problem statement:

Tilden is a hematite operation using selective flocculation and amine flotation (historically the first mine to produce pellets from both hematite and magnetite ore, 1990-2009; magnetite reserves now exhausted). The ore grade varies (25-45% Fe) across the hematite body. Pellet chemistry must be customized per customer BF specs (flux ratios of limestone and dolomite). Optimizing the blend from different pit zones for consistent pellet quality while maximizing recovery rate is a complex multi-variable problem.

Proposed solution: Grade control model integrating geological model, drill data, assay results, and stockpile composition to optimize daily ore feed blend for target pellet chemistry. Real-time blend adjustment based on concentrator feed analysis.
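
As a shape-of-the-solution sketch, the daily blend decision can be posed as a small linear program (scipy here, but any LP solver works). Zone grades, tonnages, haul costs, and targets are all invented:

```python
from scipy.optimize import linprog

# Hypothetical pit zones: available tons/shift, %Fe, and $/ton haul cost.
zones = {"A": (4000, 36.0, 1.8), "B": (6000, 31.0, 1.2), "C": (5000, 27.0, 1.0)}
demand, fe_target = 10_000, 31.5    # crusher feed tons and blended %Fe target

names = list(zones)
avail = [zones[z][0] for z in names]
fe = [zones[z][1] for z in names]
haul = [zones[z][2] for z in names]

res = linprog(
    c=haul,                                 # minimize haul cost, subject to:
    A_eq=[[1.0] * len(names),               #   total tons = demand
          [g - fe_target for g in fe]],     #   blended grade = target %Fe
    b_eq=[demand, 0.0],
    bounds=[(0, a) for a in avail],
)
print({z: int(round(t)) for z, t in zip(names, res.x)}, f"cost ${res.fun:,.0f}")
```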

Day 1 evidence:

Site leader: "Our ore body — non-homogeneous. It's a lot of variability in types of ore. The ore quality has a huge impact on how the concentrator runs." "We sample every single drill hole... to understand what's there to help us predict how the concentrator is going to react." "The plant and the reagent suite was designed in 1974, based on the ore quality we were seeing when we started mining... as this ore body gets deeper, the composition is changing... our reagents don't react the same way as they did in 1974." "We don't know which lever to pull at some times." 3D block models confirmed. Every production drill hole sampled (~10 ft spacing). Historical drill data goes back "quite a ways." Confirmed: ore body non-homogeneous, composition changing with depth, 1974 reagent suite no longer optimal for current ore.

Current state: Every drill hole sampled. 3D geological models exist. But ore quality data doesn't effectively predict concentrator response. Reactive adjustment with days-long feedback loops.
Target state: Optimized ore blend from pit to pellet, reduced variability in pellet specs, improved recovery rate. Feed-forward link from drill data → concentrator adjustments.
Value estimate: $3-8M/yr (pellet quality consistency + recovery rate improvement + reduced off-spec production)
Confidence: Medium-High (strong signal from multiple statements, data exists on both sides of the gap)
Data readiness: Partial — drill hole data exists, geological model exists, concentrator response data TBD
Systems: Mine planning software (3D models confirmed), geological model, assay lab, concentrator control
Complexity: Strategic (multi-system integration, requires geological + process engineering + logistics coordination)
Dependencies: Geological model quality, assay data frequency, concentrator instrumentation

Comparison with other sites:

No direct parallel at steel mills. The closest analogy is caster chemistry optimization (PRJ-08) — tuning input chemistry for target product specs. But ore blending is more complex because the raw material varies naturally. Palmer scalability: If the approach works at Tilden, the methodology (not the specific model) could transfer to Hibbing, United Taconite, and Northshore.

★ KEY INSIGHT: TLD-06 is deeply intertwined with TLD-21 (Feed-Forward Control) and TLD-23 (Reagent Optimization). Together they form the Concentrator Optimization Bundle — the #1 opportunity at Tilden.

Open questions:
- [x] How is ore blending managed today? → Drill hole sampling at ~10ft spacing, 3D block models, reactive adjustment
- [ ] What assay frequency? Real-time analyzers or batch lab samples?
- [ ] How many distinct ore zones are active in the pit?
- [ ] What is the current off-spec pellet rate?
- [ ] How are customer-specific flux ratios determined?
- [ ] How far back does the drill hole assay data go?


TLD-07: Concentrator AG Mill Throughput Optimization

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — Mining-specific |
| Status | identified ★★ — Day 1 (concentrator confirmed as bottleneck) |
| Source | Mining industry practice, BHP Escondida example, Day 1 transcript |
| Field champion | Dan McGrath (process engineer, metallurgical background) |

Problem statement:

The 12 autogenous mills are the throughput bottleneck of the concentrator. AG mills grind ore using the ore itself as grinding media — performance depends on ore hardness, feed size, water ratio, and mill speed. Small improvements in throughput across 12 mills compound to significant production gains.

Proposed solution: ML-based mill control optimization using DCS data (power draw, feed rate, water addition, discharge density) to maximize throughput while maintaining target grind size.
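
One minimal pattern for such a layer: fit a response model offline from DCS history, then search setpoints subject to a grind-size constraint. The response surface and limits below are invented for illustration:

```python
def predicted(feed_tph, water_pct):
    """Stand-in for a model fit on DCS history (power draw, feed rate,
    water addition, discharge density). The shape and every coefficient
    are invented for illustration."""
    throughput = feed_tph * (1 - 0.00002 * (feed_tph - 550) ** 2)
    grind_p80_um = 90 + 0.05 * (feed_tph - 500) - 1.5 * (water_pct - 70)
    return throughput, grind_p80_um

P80_LIMIT_UM = 95   # hypothetical grind-size spec the circuit must hold

best = max(
    ((tph, feed, water)
     for feed in range(450, 651, 5)         # candidate feed setpoints, tph
     for water in range(60, 81)             # candidate water addition, %
     for tph, p80 in [predicted(feed, water)]
     if p80 <= P80_LIMIT_UM),
    default=None,
)
print("best predicted: %.0f tph at feed=%d tph, water=%d%%" % best)
```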

Day 1 evidence:

Site leader: "Concentrating your primary bottleneck? Yep." 12 grinding lines confirmed. Concentrator is 1974-vintage equipment. "Pretty decent control system" exists. Iron recovery "almost like 70" — below the ~75% design benchmark for hematite flotation, with realistic upside to 80%. (The 90%+ figure at other CLF operations reflects magnetite magnetic separation, a different process.)

Day 1 evidence (Visit 1 — concentrator walkthrough):

- G2 fuzzy logic control system confirmed — "SGS is kind of our engineering support. G2, which is a more like fuzzy logic type control system for grinding."
- ★ DCS from late 1990s — "Central control operators" managing the concentrator.
- ★ Pebble mill capacity is a binding constraint — "The amount of pebbles that we generate, the secondary capacity... can cause some pull back."
- ★ Grind-size cascade — When secondary capacity is constrained, "instead of making an 85 grain, you run a 99... just reference that your waste that energy." Overgrinding when constrained wastes energy AND causes downstream filter problems.
- ★ Screen maintenance — converted from double-deck to triple-deck screens. Screen loss = 8 hours downtime. Loop pump loss = 45 min to 2 hours.
- ★ Pebble crushers — 2 × MP 800 crushers, 450-500 tons/hr each. One lead, one backup.
- ★ Ore variability is THE challenge — "the biggest challenge here is just the variability of the ore, it's always changing, pretty much constant variability."

Day 2 evidence (Process & DCS Engineering):

- ★★ Foxboro IA DCS confirmed — "Our main DCS is a Foxboro IA system. We also have 100-year-old PLCs in other areas."
- ★★ G2 fuzzy logic detail — G2 reads DCS data directly, has decision trees and fuzzy logic sets. "Intended to model our best operators' decision making process." Third party (SGS) developed and tweaks it. G2 upgrade requested ~1.5 years ago, still in approval. "G2 is writing DCS setpoints — it's usually fuzzy logic."
- ★★ Operator variability quantified — "We could probably sit down and rank who we think the best control operator is." Best operators vs worst creates inconsistent performance. G2 intended to normalize this. "Not everyone looks at everything the same way."
- ★★ Six-bearing recirculation = specific failure mode — 75% success rate on intervention decisions. 25% of the time, they lose tons. "Some operators will step in protect the feed, some operators shut the crusher, some cut water." Inconsistent operator responses.
- ★★ HPGR impact on feed rates — HPGR installed April 2023, increased productivity. Nov 2025 feed rates dropped, struggled 7-8 months. "Strong indications primary pressure setting" but smoking gun not found. "I can't say I found the smoking gun for it."
- ★ Three main production drivers — "Feed rates, feed rates weigh recovery, and operating time. The three main drivers in this mill."
- ★ OCS on pellet side — Parallel expert system on pellet plant side: "OCS on the pellet plant side, which gives some of that higher-level control."

Current state: 12 AG mills + 24 pebble mills + G2 fuzzy logic controller (SGS). DCS = Foxboro IA (late 1990s) + legacy PLCs. G2 reads DCS directly, uses fuzzy logic sets to model best operator decision-making. G2 upgrade stuck in approval ~1.5 years. Secondary capacity (pebble mills) is the binding internal constraint. Over-grinding wastes energy and causes filter problems. Operator decision quality varies significantly — 75% success rate on six-bearing events, 25% loss of tons. HPGR installed April 2023, feed rate mystery since Nov 2025.
Target state: ML optimization layered on top of G2 control, adaptive to ore variability. Better pebble mill capacity management.
Value estimate: $2-5M/yr (2-5% throughput improvement across 12 mills + energy savings from reduced overgrinding)
Confidence: Medium-High (G2 control system exists as foundation for optimization layer, DCS data confirmed)
Data readiness: Partial — DCS data exists, G2 control system generates data. Question is historical depth and granularity.
Systems: DCS (Foxboro IA, late 1990s), G2 fuzzy logic (SGS), pebble crushers, screen monitoring
Complexity: Medium
Dependencies: DCS data access, SGS/G2 data access, process engineering partnership (Dan McGrath)

Comparison with other sites:

No direct parallel at steel mills. BHP saved 3 GWh of energy at Escondida concentrator using AI optimization. Same class of problem. Could apply to Hibbing/United Taconite/Northshore grinding circuits. Tilden advantage: G2 fuzzy logic already exists as an advanced control layer — ML optimization would be an augmentation, not a greenfield deployment.

Open questions:
- [x] Are AG mills DCS-controlled? → Yes, DCS from late 1990s + G2 fuzzy logic (SGS) for grinding control
- [ ] What is the current throughput variability?
- [ ] Is there a process historian capturing mill data? How far back?
- [ ] What grinding energy per ton?
- [x] How does ore hardness variation affect mill performance? → "Constant variability" confirmed. Pebble generation varies, constraining secondary capacity.


TLD-08: Flotation Recovery Optimization (→ POV SCOPE)

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | TLD-P01 — Concentrator Desliming & Recovery Optimization (POV) |
| Status | validated ★★★ — Day 1 + Day 2 + Apr 7: confirmed as downstream benefit of desliming POV |
| Source | Day 1 transcript, Day 2 DCS session, Apr 7 IE×Tilden scoping call |
| Field champion | Keith Holmgren (process linkage), Dan McGrath (metallurgical background) |

Problem statement:

The hematite flotation circuit has 300 x 500-cubic-foot cells using amine flotation with corn starch flocculation. Recovery rate determines how much iron is captured vs. lost to tailings. Two flotation plants: Tilden 1 (sections 2-6) and Tilden 2 (sections 7-12). The flotation circuit is more automated than desliming — a silica sample every 7 minutes drives a feedback loop that adjusts amine (collector) addition. But flotation is the customer of desliming: if desliming doesn't throw away enough fine silica, flotation gets overwhelmed with material, amine costs spike, and recovery drops. Conversely, if desliming is too aggressive, iron is lost to tails before flotation. The optimum balance between desliming and flotation is the core optimization target.

Apr 7 context: Keith explained the desliming-flotation coupling in detail. The amount of amine needed in flotation depends directly on how much silica was removed in the D-slime step. There is "always this kind of balancing act we're trying to do to drive maximum recovery." Weight recovery range: 36-42% metallurgical weight recovery. Average iron recovery ~70%. A bad day is 60%. A great day loses "only" 20% of the iron — and most losses are inefficiency, not locked minerals. Ryan: 0.5% weight recovery = 100,000 tons of concentrate = tens of millions of dollars.
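
A quick order-of-magnitude check on those numbers. Only the 100,000-tons-per-half-percent ratio comes from the call; the $/ton figure is an assumed placeholder:

```python
# Ryan (Apr 7): 0.5% weight recovery ~= 100,000 tons of concentrate per year.
TONS_PER_HALF_PCT = 100_000
PRICE_PER_TON = 100   # $/ton -- assumed order of magnitude, not quoted on the call

for gain_pct in (0.5, 1.0, 2.0):
    tons = TONS_PER_HALF_PCT * gain_pct / 0.5
    value_m = tons * PRICE_PER_TON / 1e6
    print(f"+{gain_pct}% weight recovery ~ {tons:>9,.0f} t ~ ${value_m:.0f}M/yr")
# Even the half-point case clears $10M/yr at $100/t, consistent with
# "way more than a million dollars... tens of millions."
```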

Key quotes:

"A half a percent of weight recovery is a hundred thousand tons of concentrate. So it's way more than a million dollars. You're tens of millions of dollars." (Ryan, Apr 7) "A horseshit day is 60%. That's when we're in addiction." (Keith, Apr 7) "A great day for us, we're throwing away 20% of the iron we grind. That's world class for us." (Keith, Apr 7) "The vast majority of the iron we lose isn't because it's locked with a quartz particle. It's just inefficient losses." (Keith, Apr 7)

Proposed solution: Improve flotation recovery as a downstream outcome of desliming optimization (TLD-23). Better desliming control → more stable flotation feed → reduced amine consumption → improved recovery. The POV measures flotation recovery improvement as the primary Y metric.

Day 1 evidence:

Site leader on recovery: "In terms of iron recovery, probably mostly greater than 90%... my recovery on the boat, almost like 70 different for us."

Day 2 evidence (Process & DCS Engineering):

- ★★ Flotation controlled by ONE silica reading every 14 minutes — Courier machine reads silica. Automated amine feedback loop.
- ★★ 6 lines per unit, no per-line data — "There's six lines on each one. You don't know if it's just one line causing the problem."
- ★★ No online chemistry data — "The only online chemistry we kind of have is that Courier machine reading silica, and then we have some online particle size analysis."
- ★ Tail samples every 6 hours — problem may be over by the time results arrive.

Apr 7 evidence:

- ★★★ Value quantified by Ryan: 0.5% weight recovery = 100K tons = tens of millions of dollars. "The tons are going to outweigh any of those saving dollar savings."
- ★★ Keith: most iron losses are inefficiency, not liberation — "The vast majority of the iron we lose isn't locked with a quartz particle. It's just inefficient losses." This means process optimization can actually capture the losses (vs. grinding finer, which is counterproductive — ultra-fines <5μm act like fluid and bypass flotation).
- ★★ Over-grinding makes it worse — "30 to 35% of our particles are smaller than 5 micron. So a lot of our iron losses are analogous to the bypass fraction in a cyclone circuit." Finer is NOT better.

Current state: Hematite flotation recovery ~70% vs. ~75% design benchmark. Weight recovery 36-42%. Most losses are inefficiency. ONE silica reading every 14 min controls flotation. 6 lines per unit, no per-line isolation.
Target state: Improved and stabilized flotation recovery through better desliming control upstream. Reduced amine consumption.
Value estimate: $4-10M/yr ★ — Each 1% recovery improvement = ~77K additional tons at $100+/ton. Ryan: 0.5% weight recovery = tens of millions. Highest single-value opportunity at Tilden.
Confidence: High — site leadership committed to POV, mechanism well-understood (desliming → flotation coupling)
Data readiness: Partial — Courier silica data in Pi, D-slime grab samples (Kevin's spreadsheet), historical tail samples "probably very detailed"
Systems: DCS, Courier silica analyzer, flotation cell instrumentation, tail sampling
Complexity: Medium
Dependencies: TLD-23 (desliming optimization is the mechanism for flotation improvement)

Comparison with other sites:

No parallel at steel mills. Flotation is unique to mining/mineral processing. But the pattern — using upstream data to predict and optimize downstream process response — is the same thesis we validated at Cleveland (ops→maintenance) and Middletown (quality data → finishing lines).

Open questions:
- [x] How are reagent doses currently determined? → Amine addition automated via Courier silica feedback loop (7-min cycle). Dispersant (PAA) is manual — met techs key in numbers. (Keith, Apr 7)
- [ ] What instrumentation is on the flotation cells?
- [x] What is the current iron recovery rate? → ~70% average, 60% bad day, 80% not a ceiling (confirmed Day 1, refined Apr 7)
- [x] How much does recovery vary? → Weight recovery 36-42%, "horseshit day" = 60% iron recovery (Keith, Apr 7)
- [x] What is the target recovery rate? → 0.5% weight recovery improvement = massive value. 75% design benchmark, 80%+ achievable. (Ryan, Apr 7)


TLD-09: Pellet Quality Prediction

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | PRJ-04 (reframed for mining) |
| Status | identified — Day 1 (balling drum instability, high-flux energy) |
| Source | Cross-site quality theme + pellet plant criticality |
| Field champion | TBD (pellet plant / quality) |

Problem statement:

Pellet quality (hardness, chemistry, size distribution) determines whether pellets meet customer BF specifications. The pelletizing process (balling drums → grate → kiln at 2,200°F → cooling) has multiple control points. Off-spec pellets must be recycled or scrapped. With customized flux ratios per customer, quality control is a multi-variable optimization.

Proposed solution: Predictive quality model linking concentrate chemistry, flux addition, balling drum parameters, grate/kiln temperatures, and cooling rates to final pellet properties. Real-time adjustment recommendations.
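
A minimal sketch of the model shape: a linear fit from process variables to compressive strength. Every number is invented toy data; a production model would use far more features, nonlinear methods, and real plant history:

```python
import numpy as np

# Toy rows: [concentrate moisture %, green-ball size (in), preheat F, kiln F]
# mapped to compressive strength. All values invented for illustration.
X = np.array([[9.5, 0.44, 1600, 2200],
              [10.2, 0.47, 1580, 2180],
              [8.9, 0.42, 1620, 2210],
              [10.8, 0.50, 1550, 2150],
              [9.1, 0.43, 1610, 2205]])
y = np.array([520.0, 480.0, 545.0, 430.0, 535.0])

Xb = np.hstack([X, np.ones((len(X), 1))])        # append an intercept column
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)    # ordinary least squares

batch = np.array([9.8, 0.45, 1590, 2190, 1.0])   # new batch + intercept term
print(f"predicted compressive strength: {batch @ coef:.0f}")
```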

Day 1 evidence (Visit 1 — pelletizing walkthrough):

- 14 balling drums, 7 balling lines — each line has seed screens for size control, reject recirculation back to drum.
- ★ Green ball target size — "half by three eighths" inch. Critical for kiln air flow.
- ★ Fines are catastrophic — "Fines are really bad for us... we're blowing air through a vent, getting bed full of flames that produces channels for air to flow." Fines block airflow channels.
- ★ Moisture explosion risk — "Any moisture left in the bed is going to explode" when hitting 2,200°F preheat zone. "It happens from time to time... not pretty."
- ★ Operators monitor pressure as proxy — "The operators know that at any given position of their fan dampers... I would expect this pressure. I have a lot of fines, pressure will start to become very negative."
- ★ Multi-stage induration — Updraft drying → downdraft drying → preheat → kiln (2,200°F) → primary cooling → secondary cooling. Counter-current heat recovery. Complex multi-zone temperature management.
- ★ Compressive strength = key quality metric — pellets must survive BF burden column.
- ★ 2 dryers — number 2 dryer roughly double the capacity of number 1. Running both has become necessary recently due to high-clay ore.
- ★ Clay variability in ore — "In the last few years we've been... we've had to run both dryers... I remember being on that side... the clay holds a lot of moisture."

Day 1 evidence (Visit 2 — pellet plant deep walkthrough):

- ★★ Balling operator skill = THE human variable — "You're talking the range of finger-painting five-year-old to Bob Ross to Picasso." Massive skill variation across operators. "It's just not always easy to get people up there."
- ★ Recirculating loads = primary diagnostic — "First indication if it was bad — start seeing our recirculating loads go up." Leading indicator of balling/induration problems.
- ★ Upstream changes cascade invisibly — "Moisture or average particle size does behave differently in balling. The operators not going to see that sort of thing."
- ★ Dryer temperature is manual — "Operator always controls the temperature." DCS calculates moisture targets but temp is human.
- ★ Central control room — "Matt has control and start and stop almost everything, 95% of the equipment."
- ★ Process balance is fragile — "If you start losing in one area, you're going to lose it going forward."
- ★ Furnace residence ~220 minutes, ~12-15 min per zone.

Current state: 14 balling drums → 7 lines → grate-kiln process. Balling operator skill varies enormously ("five-year-old to Picasso") — this is the #1 human variable. Operators monitor pressure differentials and recirculating loads as proxy for bed quality. Upstream changes (moisture, particle size) cascade invisibly. Moisture management critical. Clay variability in recent ore causing new drying challenges. Multi-zone temperature control. Dryer temperature is manual.
Target state: Predictive quality model linking concentrate moisture, chemistry, green ball size to kiln zone temperatures and final pellet compressive strength. Real-time adjustment recommendations. Early detection of moisture risk.
Value estimate: $2-5M/yr (reduced off-spec + energy savings from optimized firing + reduced explosion incidents)
Confidence: Medium-High (process well-understood from walkthrough, multiple control points, DCS data available)
Data readiness: Partial — DCS temperature/pressure data exists, operator knowledge is tacit, lab analysis frequency TBD
Systems: Pellet plant DCS, lab analysis, kiln instrumentation, dryer controls
Complexity: Medium
Dependencies: TLD-06 (blend quality affects pellet quality), TLD-21 (feed-forward from ore data)

Comparison with other sites:

Reframing of PRJ-04 (Through-Process Quality). At steel mills this is about slab → coil quality tracing. At Tilden it's about concentrate → pellet quality prediction. Same principle, different process. The clay variability issue is new — recent ore body changes have introduced challenges the plant wasn't designed for.

Open questions:
- [x] How are kiln temperatures controlled? → DCS, counter-current heat exchange, operator monitoring of pressure differentials as proxy
- [ ] What pellet quality parameters are tracked? Compressive strength frequency?
- [ ] What is the current off-spec rate?
- [ ] What lab analysis turnaround time?
- [ ] How is moisture content measured pre-kiln? Inline or lab?


TLD-10: Kiln & Grate Temperature Optimization

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — Mining-specific |
| Status | seed |
| Source | Energy intensity of pelletizing |
| Field champion | TBD (pellet plant) |

Problem statement:

Rotary kilns fire at 2,200°F (natural gas, coal, or oil). This is the most energy-intensive step in pelletizing. Kiln temperature profile, residence time, and fuel type all affect pellet hardness and energy consumption. With 14 balling drums feeding multiple kilns, optimization across the pellet plant is complex.

Proposed solution: Kiln temperature profile optimization using ML — predicting optimal temperature curves based on green ball properties and target pellet specs, minimizing energy while maintaining quality.

Current state: TBD
Target state: Optimized fuel consumption per ton of pellets, reduced energy cost
Value estimate: $1-4M/yr (energy is a major cost in pelletizing)
Confidence: Medium
Data readiness: Likely partial (kiln temperatures logged)
Systems: Pellet plant DCS, fuel management
Complexity: Medium
Dependencies: TLD-09 (quality model informs temperature targets)
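
One way to frame the optimization once a quality model exists: pick the cheapest firing setpoint that still meets the strength spec. The sketch below uses stand-in fuel and quality curves; both would come from fitted models, and every number is invented for illustration.

```python
# Illustrative only: choose a kiln temperature setpoint that minimizes fuel use
# while keeping predicted compressive strength above spec. The fuel and quality
# functions are stand-ins; a real version would use fitted models.
from scipy.optimize import minimize_scalar

SPEC_STRENGTH = 400.0  # assumed spec, lbs per pellet

def fuel_cost(temp_f):           # stand-in: fuel use rises with firing temperature
    return 0.002 * (temp_f - 1900) ** 1.5

def predicted_strength(temp_f):  # stand-in for the TLD-09 quality model
    return 400 + 0.5 * (temp_f - 2150)

def objective(temp_f):
    shortfall = max(0.0, SPEC_STRENGTH - predicted_strength(temp_f))
    return fuel_cost(temp_f) + 1e3 * shortfall  # heavy penalty for missing spec

res = minimize_scalar(objective, bounds=(2100, 2300), method="bounded")
print(f"recommended setpoint: {res.x:.0f} F")
```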

Comparison with other sites:

Analogous to BF optimization at steel sites — high-temperature, energy-intensive, multi-variable. Middletown R&D flagged BF burden mix optimization (MDT-34) as a similar class of problem.

Open questions:
- [ ] What fuel is currently used? Natural gas, coal, oil, or mix?
- [ ] How are kiln temperatures currently controlled?
- [ ] What is the annual energy cost for the pellet plant?
- [ ] What temperature variability exists kiln-to-kiln?


TLD-11: Concentrator Energy Optimization

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — Mining-specific |
| Status | identified — Day 1 (high-flux pellets tip energy constraint) |
| Source | BHP Escondida example (saved 3 GWh + 118 GWh since FY2022) |
| Field champion | TBD |

Problem statement:

The concentrator (12 AG mills + 24 pebble mills + pumps + flotation air) is extremely energy-intensive. At BHP's Escondida mine, AI-driven energy optimization saved 118 GWh since FY2022. Grinding energy is typically 40-60% of a concentrator's total energy consumption.

Proposed solution: Energy management system optimizing grinding circuit energy based on ore hardness, throughput targets, and real-time power pricing. Load-shifting for non-critical processes during peak pricing.

Current state: TBD
Target state: Reduced energy cost per ton of concentrate
Value estimate: $1-3M/yr (depends on energy mix and pricing)
Confidence: Medium
Data readiness: To validate (power metering data, DCS data)
Systems: DCS, power management, historian
Complexity: Medium
Dependencies: TLD-07 (throughput optimization interacts with energy)
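
For scale, a hedged back-of-envelope on load-shifting. Every number below is an assumption pending the open questions on metering and pricing:

```python
# Back-of-envelope load-shift sketch with assumed numbers (none confirmed on site):
# shifting 5 MW of non-critical load from peak to off-peak for 4 h/day.
shiftable_mw = 5.0
hours_per_day = 4
peak_rate, offpeak_rate = 0.11, 0.06   # assumed $/kWh
days = 350

annual_savings = shiftable_mw * 1000 * hours_per_day * days * (peak_rate - offpeak_rate)
print(f"${annual_savings:,.0f}/yr")    # ~$350K/yr under these assumptions
```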

Comparison with other sites:

No direct parallel at steel mills. Concentrator energy optimization is a mining-specific opportunity.

Open questions:
- [ ] What is the annual energy cost for the concentrator?
- [ ] What power metering granularity exists?
- [ ] Is there variable power pricing (peak/off-peak)?


TLD-12: Maintenance Workflow Digitization (Copilot)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-06 — Maintenance Workflow Digitization |
| Status | identified ★ — Day 1 + Day 2 Plant Maintenance |
| Source | Cross-site pattern (CLV ★, MDT ★) + Day 1 + Day 2 Plant Maintenance: Adam Bingham real-world Copilot usage |
| Field champion | Adam Bingham (hybrid maint — admin + hands-on) + 30-year maintenance planner |

Problem statement:

Mining maintenance workers face the same paper/manual challenges as steel sites — plus outdoor environment, mobile equipment, and remote locations. Work orders are in ELLIPS but field documentation is poor. "People are very bad at self-documenting what they are doing." Voice/video capture would reduce friction.

Proposed solution: Mobile copilot for maintenance workers — voice capture for work order closure, video documentation of repairs, equipment history lookup via AI, troubleshooting guidance from work order history.

Day 1 evidence (Lunch session):

ELLIPS confirmed as CMMS — work orders, parts inventory, equipment history. "Material management program to ELLIPS." ★ Documentation problem confirmed — "Our people are very bad at self-documenting what they are doing. Speak to the machine and tell it what you just did to fix the problem versus make an extensive report." ★ Voice capture pitched and well-received — "Would you find it appealing for operators to just talk into a smart device? Maybe take a short video? That's something we're looking at building." ★ Cell phone availability unclear — "Company issued at this point?" — mobile device policy needs validation. ★ Training documentation scattered — "All of our documents? Been through a lot of departments. Bringing all of the decade work that has gone into making a solid training program."

Day 2 evidence (Plant Maintenance):

★★★ Adam Bingham already using Copilot in real maintenance — Used Copilot with electrical/hydraulic prints (PDF) for liner handler troubleshooting: "Attached that to a copilot chat and was able to give suggestions on troubleshooting, what voltages should be at Point A or Point B." Also used it for Spanish translation of troubleshooting steps for foreign OEM technicians. This is proof-of-concept in production. ★★ Gary validates ELLIPS is robust but cumbersome — "It's a very robust system... but it's just too cumbersome. For people to be out repairing everything and then coming back in and manually working with these systems. It's just not happening." ★ Teams app in the field discussed — Discussion about maintenance workers using Teams on mobile devices to report findings, check parts availability, create work orders from the field. ★ Gary's skepticism reframe — "I don't know what AI's going to do for our work order system... having AI tell me I need to get this done doesn't help me." Value is NOT more alerts — it's reducing time friction so work gets documented.

Current state: ELLIPS CMMS (robust system, well-maintained). Work orders exist but field documentation is thin. No voice capture. Paper-based inspections. Training documentation scattered. Adam Bingham already experimenting with Copilot for troubleshooting + print interpretation + translation.
Target state: Digital maintenance workflow. Voice-to-work-order. AI-assisted troubleshooting from work order history. Mobile field access.
Value estimate: $0.5-2M/yr
Confidence: High (pattern validated at 3/3 sites; Adam Bingham's real-world usage = strongest proof of adoption potential)
Data readiness: Partial — ELLIPS has history, field capture needs digitization
Systems: ELLIPS (CMMS), Microsoft Copilot (already available), Teams (mobile), drawing database (60K prints)
Complexity: Quick Win (if network coverage exists)
Dependencies: Mobile device policy, network coverage in pit and plant

Comparison with other sites:

Same problem, different environment. Cleveland: crumpled PMs. Middletown: 30-page DMP packets nobody reads. Tilden: "people are very bad at self-documenting." ★ Adam Bingham's real-world Copilot usage for print troubleshooting and language translation is the most advanced grassroots AI adoption we've seen at ANY site. He's the natural champion for a pilot.

Open questions:
- [ ] What mobile devices do maintenance workers carry?
- [ ] Is there Wi-Fi or cellular coverage in the pit?
- [x] What is the current work order process? → ELLIPS for work orders, paper for field documentation, poor self-documentation of repairs
- [x] What CMMS is used? → ELLIPS
- [x] Any existing AI usage? → Yes — Adam Bingham using Copilot with PDFs, prints, translation for real troubleshooting


TLD-13: Procurement Automation (Parts & Consumables)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-06 — Maintenance Workflow Digitization |
| Status | seed |
| Source | Cross-site pattern (MDT ★) |
| Field champion | TBD |

Problem statement:

Mining operations consume enormous quantities of parts and consumables — tires ($50-80K each), crusher liners, mill liners, conveyor belts, explosives, reagents. The same $500 threshold pain and procurement friction validated at steel mills likely exists here, amplified by the remote Upper Peninsula location and seasonal access constraints.

Proposed solution: AI-assisted procurement — automated reorder points for consumables, predictive demand based on production plan, vendor comparison, and approval routing.

Day 1 evidence (Lunch session + Shop visit):

ELLIPS manages parts inventory — "Material management program to ELLIPS." ★ No barcode scanning — warehouse is manual entry: "They could use scanners... it's all manual." ★ Massive consumable costs — Tires $70K each, chains $100K/set, wheel motors $300K, engine modules multi-million, $50M/yr in reagents. ★ Parts inventory cleanup potential — same pattern as MDT: large inventory with manual tracking.

Day 2 evidence (Process & DCS Engineering):

Lead time management problem — "Supplier says it's a five-week lead time. But I got a twelve-week lead time because they're never on time. That information is no good. I can't trust it." Risk aversion kicks in → 300 units of everything. ★ MinMax algorithms tried before, people complained — "In the past created algorithms to do the minmax changes. People complained because there's certain things that we'll miss." ★ AI can adapt to external factors — Discussion about AI building in real lead times vs. stated lead times, without emotional bias. "It will build it in, but without the emotional aspect."

Current state: ELLIPS for parts/inventory management. Manual warehouse processes (no barcode scanning). Massive consumable spend.
Target state: Optimized consumable inventory, automated reordering, barcode scanning in warehouse (TLD-30), reduced stockouts and excess
Value estimate: $1-3M/yr (consumable spend is massive — tires, reagents, liners, explosives)
Confidence: High (validated at MDT as self-funding starter project, ELLIPS data exists)
Data readiness: Partial — ELLIPS has inventory data, but manual warehouse processes limit data accuracy
Systems: ELLIPS (CMMS/inventory), procurement (TBD if separate)
Complexity: Quick Win
Dependencies: TLD-30 (warehouse digitization enables better data)

Comparison with other sites:

Middletown identified procurement as the self-funding starter that generates savings to fund subsequent projects. Mining consumable spend ($70K/tire, $50M/yr reagents, multi-million engine modules) far exceeds steel mill consumable spend per unit. Same self-funding cascade logic applies: procurement savings fund larger projects.

Day 2 evidence (Purchasing & Warehouse Logistics):

ELLIPS search functions "terrible" — can't find parts without exact stock code. Descriptions have commas/semicolons in wrong places breaking search. "Five minutes to two hours per box or per item that came in trying to find it in the system." ★ All POs go through ELLIPS to corporate — centralized system, same format. "Everything's centralized into one system. That makes it fairly easy to tackle." ★ No purchasing people showed up — despite session being named "purchasing." Pain validated through warehouse/maintenance lens instead. ★ Vendor-managed inventory untracked — safety supplies (glasses, etc.) managed by vendors with rough quantity targets. No receiving verification. "Just a spreadsheet at the end of the month from the vendor."

Day 2 evidence (Plant Maintenance):

★★ Parts delays = weekly — "Do you face delays because you don't have parts? Weekly." Critical confirmation of pain frequency. ★ Supplier changes cause disruption — New supplier learning curve, parts don't fit right, lead times uncertain. ★ Ukraine war disrupted supply chains — Radiation equipment isotope lead times pushed to 4 years. ★ Referenced Middletown's Oracle catastrophe — 32K part numbers reset to 15-day default lead times. AI can prevent these cascading data errors. ★ Hidden inventory problem — Spare parts stored off-books by successive managers: "How many bull gears we got? Because we can't inventory them on the books because we aren't supposed to keep 'em here, but we ordered 'em." Critical spares hidden because of inventory policy. ★ Min/max based on usage + lead time — Gary: "Adjust your mins and maxes based upon usages and lead times. Right-sizing inventory for us in maintenance is huge."

Day 2 evidence (Mine Maintenance):

★★ Inventory min/max system not smart enough — Auto-adjusts reorder quantities based on usage, but doesn't understand set sizes. "Keeping three injectors isn't enough because there's 10 in the engine. We're constantly fighting that because the system is adjusting inventories based on how many we used." ★★ Parts go inactive after 1 year — If not ordered within a year, ELLIPS deactivates the stock code. Must contact Mary Bianchi (purchasing) to reactivate. More friction on parts that are needed infrequently but critically. ★ Pricing data stale — Lead times and pricing don't update until you actually purchase. "If you bought something five years ago, it'll tell you a price, and you go put it in — it's three times the cost." ★ Burns Harbor comparison: 2,793 invoice discrepancies — Eric shared current invoice discrepancy report. Some hundreds of days old, vendors not getting paid. Tilden "lesser degree but 200 days." ★ Ordering restrictions at other sites — Eric (IE) shared Burns Harbor experience: only 2 people can create part numbers, parts can't be ordered even if you have them identified and available. "I would just order it on eBay and pay out of my own pocket." ★ Weekend/off-hours procurement gap — 24/7 operations but purchasing is business hours only. Vendor time zone differences add delay. "Twiddling your thumb for 45 minutes."
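
The injector example suggests a concrete fix: make the reorder rule aware of set sizes and of observed (not stated) lead times. A minimal sketch with illustrative numbers:

```python
# Sketch of a set-size-aware reorder quantity, motivated by the injector example:
# the stock code is consumed in engine sets of 10, so the reorder quantity must
# be rounded up to whole sets. All numbers are illustrative.
import math

def reorder_quantity(daily_usage, observed_lead_days, set_size, safety_sets=1):
    """Order enough to cover usage over the *observed* lead time (not the
    supplier's stated lead time), rounded up to whole sets."""
    demand = daily_usage * observed_lead_days
    sets_needed = math.ceil(demand / set_size) + safety_sets
    return sets_needed * set_size

# Stated lead time 5 weeks, observed 12 weeks (per the Day 2 quote):
print(reorder_quantity(daily_usage=0.4, observed_lead_days=84, set_size=10))  # 50
```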

Open questions:
- [x] What procurement system is in use? → ELLIPS (inventory management), POs route through ELLIPS to corporate
- [ ] What is the total annual consumable spend? (tires, liners, reagents, explosives combined)
- [x] How are reorder points currently managed? → MinMax in ELLIPS, but algorithms tried before → people complained. System adjusts based on usage but doesn't understand set sizes (injectors example).
- [x] Parts delay frequency? → Weekly. Confirmed Day 2 Plant Maintenance.
- [ ] Same $500 PO threshold pain as steel mills?


TLD-14: Mining Knowledge Capture / Virtual SME

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | new (aligns with Virtual SME — leadership's #1 at Middletown) |
| Status | identified ★★★ — Day 1 + Day 2 Plant Maintenance + Day 2 Mine Operations |
| Source | MDT Day 5 readout (leadership #1) + Day 1 + Day 2 Plant Maintenance + Day 2 Mine Ops: supervisor toolkit, training, procedures |
| Field champion | Dan Clarendon (safety/training) + Adam Bingham (already using Copilot) + Brad Koski (ops mgr) + Lynn Casco (mine administrator) |

Problem statement:

Mining expertise is highly specialized — ore body knowledge, blast design, concentrator operation, pellet plant chemistry. The Day 1 introductions confirmed extremely long tenures (25-30+ years common), creating both a deep knowledge base and a critical retirement risk. Environmental compliance is in spreadsheets and people's heads. Dispatcher training takes months for proficiency.

Proposed solution: Structured knowledge capture from experienced operators and engineers, organized into a searchable knowledge base. Virtual SME assistant for troubleshooting, training, and operational guidance. L0/L1/L2 knowledge tiers per department.
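
As a sanity check that the retrieval half of this is not exotic, here is a toy keyword-similarity search over OCR'd manual text. TF-IDF is a stand-in for whatever knowledge-base stack is chosen, and the file names are hypothetical:

```python
# Minimal retrieval sketch over extracted manual text. Assumes the scanned
# manuals have already been OCR'd to plain text; file names are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    "hpgr_lube_system.txt": open("hpgr_lube_system.txt").read(),
    "liner_handler_hydraulics.txt": open("liner_handler_hydraulics.txt").read(),
}

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform(docs.values())

def search(query, top_k=3):
    scores = cosine_similarity(vec.transform([query]), matrix).ravel()
    return sorted(zip(docs, scores), key=lambda p: -p[1])[:top_k]

print(search("HPGR bearing over-temperature alarm"))
```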

Day 1 evidence (Lunch session + End of day):

Long tenures across team — Dan Clarendon: 25 years mining + 4 years safety/training. Brad: 21 years production. Maintenance planner: 30 years. Tyler Craig: since 2011. Todd Davis: ~20 years. ★ Training documentation scattered — "Bringing all of the decade work that has gone into making a solid training program." ★ Dispatcher knowledge = critical bottleneck — New dispatchers take weeks to get through a shift, months to optimize. "People coming in from the fields, haven't been in there for months, trying to get that muscle memory back." ★ Environmental compliance knowledge at risk — Brent (environmental): "Dozens and dozens of compliance states that reside either in an outdated spreadsheet or in my head or in two other guys' heads." ★ Concentrator process knowledge — 1974 reagent suite design, selective flocculation, ore body knowledge. "Helped actually design Tilden" (lab researcher).

Day 2 evidence (Process & DCS Engineering):

★★ 1.3 billion entries in Pi historian — "1.3 billion entries about different systems." The data exists but is inaccessible — "this is a solution just ingesting that, making it easy to search through." ★★ Experience drain acute — "Our experience went from many, many years down to shift managers right now. Most experienced one is five minutes." Newest shift managers have minimal experience. ★★ Decision tree charts = crutch, not learning — "I've dumb down the jobs because we have a lot of newer people, so we put a lot more decision trees in. The people don't learn the job. They learn the chart. And 80% of the time the chart's right. Sometimes it's wrong." ★ Scattered knowledge sources — "We have data in a lot of different [places]. Pi data, instruction manuals, procedures, and content documents. But if you don't know where to look, you'll never find it." ★ Interest in AI teaching tool — Team asked "Could this software teach in the moment?" Strong interest in interactive learning, not just lookup.

Day 2 evidence (Plant Maintenance):

★★★ Hundreds of unread equipment manuals — "We have hundreds of equipment manuals. I don't think anyone here has ever cracked one open. It's in a 300-page manual — they're not gonna sit there and flip through it." Manuals span from 1970s to new equipment. All scanned into database (Oliver France?) but "awkward to search." ★★ HPGR as knowledge base pilot — Team-nominated: 10+ manuals, 1,200+ pages, European parts nobody understands. "Nobody's ever read" the manuals. New equipment = maximum knowledge gap. Covered in sensors = data-rich. Team consensus that HPGR is the right pilot for a knowledge base. ★★ Adam Bingham already building proof-of-concept — Used Copilot with liner handler manuals (PDF) for troubleshooting + translation. "I've actually been exploring opportunities to implement it — just a basic knowledge base, searchable for maintenance SOPs." Self-starter. ★ Copilot Studio exploration — Adam has explored Copilot Studio for building agents. Has the technical awareness to be an early adopter. ★ Night shift 8-month veteran problem — Gary: "On a weekend, on a night shift, it's based on a guy that's been here eight months to drive the operation. Making those decisions as safely as possible. Sometimes we miss." ★ Training content creation demand — "Is the company also looking at content creation for training material? Especially with the turnover. Maintenance is more of a visual, hands-on learning environment." ★ Caterpillar AR/VR reference — Room discussed Caterpillar's model: one expert in Arizona guiding field tech via AR goggles. Aspirational for Tilden.

Day 2 evidence (Mine Operations) ★★:

★★ Supervisor toolkit = consensus need. Dan Kernan: "My vision was providing feedback to whether it's a supervisor or a crew — how am I doing? I'm assuming I'm doing a great job. Nobody tells me." Confidence was the keyword — "these are younger guys, they don't want to look silly out there." ★★ Training in-bread problem. Brad Koski: "I call our training in-bread. The people that sign up to be a trainer... a question comes up and they don't know the answer and they say, well, I don't know. Just do it like this and it'll all be fine." AI could provide real-time answers during training. ★★ Procedures on SharePoint = effectively inaccessible. Jeff Domann: "It's so cumbersome nobody's going to go dig out the... I know I wouldn't do that." Knowledge relay chain: operator → supervisor → senior supervisor → look it up. Takes hours. Cable layout rework example — new hires wing it, experienced crew has to redo. ★ Lots of training docs, too many words — "We do have a lot of training documents, but it's a lot of words. Searching them and finding them is very prohibitive." ★ Game-ification suggested — AI scenarios for new supervisors: "A fails, what's the best call here?" Run through decision scenarios before learning the hard way.

Current state: Deep expertise concentrated in long-tenured individuals. Hundreds of unread equipment manuals (scanned into a database but unsearchable). Training documentation exists but is scattered across Pi (1.3B entries), manuals, procedures, and content documents — "if you don't know where to look, you'll never find it." Decision tree charts used as a crutch for newer operators — "they don't learn the job, they learn the chart." Environmental compliance = spreadsheets + tribal knowledge. Dispatcher training = months. Shift manager experience declining fast. Adam Bingham already building a grassroots knowledge base with Copilot.
Target state: Searchable knowledge base, AI-assisted training, Virtual SME for field support. Environmental compliance knowledge preserved. Dispatcher training accelerated. HPGR = first pilot scope (team-nominated).
Value estimate: $0.5-2M/yr (knowledge preservation + faster onboarding + reduced errors + compliance risk reduction)
Confidence: High (leadership's #1 at MDT, mining expertise even more specialized, retirement risk evident from tenure data, grassroots champion already experimenting)
Data readiness: Medium — hundreds of scanned manuals already in a database, ELLIPS history exists, Pi historian (1.3B entries), training docs exist. The content is there; it just needs to be made accessible.
Systems: Knowledge management system (TBD), existing training documentation, ELLIPS history, Pi historian, equipment manual database, environmental spreadsheets, Microsoft Copilot (already available)
Complexity: Medium (technology is ready, content capture is the work)
Dependencies: Stakeholder willingness to participate, SharePoint metadata tagging

Comparison with other sites:

Middletown: Virtual SME was leadership's #1 preference at Day 5 readout. Paul expanded scope in real-time. Brian Benning = champion. Tilden has a STRONGER case: longer tenures, more specialized knowledge (ore body, chemistry), more dangerous environment, environmental compliance concentrated in 2-3 heads, AND a grassroots champion (Adam Bingham) already experimenting with Copilot. HPGR pilot = well-scoped, team-nominated starting point.

Open questions:
- [x] What is the average age/tenure of experienced operators? → 25-30+ years common. Dan Clarendon, maintenance planner, Brad all 20+ years.
- [ ] What training program exists for new hires?
- [x] Which roles have the most concentrated tribal knowledge? → Dispatchers, concentrator operators, environmental compliance (Brent + 2 others), lab/R&D
- [x] How is institutional knowledge currently preserved? → Scattered training docs, ELLIPS work order history, environmental spreadsheets + heads


TLD-15: Mine Plan & Production Scheduling ★★

| Field | Value |
| --- | --- |
| Horizon | H3 |
| Project | PRJ-02 (reframed for mining) |
| Status | identified ★★ — Day 2 (Purchasing & Warehouse Logistics — JR's cascading vision) |
| Source | Cross-site scheduling theme + Day 2 cascading optimization articulation |
| Field champion | JR (senior ops/maintenance leader) |

Problem statement:

Mine planning integrates geological models, equipment availability, customer orders, stockpile levels, shipping schedules, and environmental constraints into a production schedule. The seasonal shipping window (Apr–Jan) adds a unique constraint. If mine planning is Excel-based (like slab scheduling at Middletown), there is optimization potential.

Proposed solution: AI-assisted mine plan optimization — dynamic scheduling based on equipment availability, ore quality targets, stockpile management, and shipping window constraints.

Day 2 evidence (Purchasing & Warehouse Logistics):

★★ Cascading production optimization vision articulated by senior ops leader (JR): Annual forecast (e.g. 7.5M tons) → backtrack to production requirements per mill → reagent needs → shipping/vessel schedule → maintenance downtime windows → what-if scenarios. "If we need 7.5 million tons this year, boom. What does that mean? That means x amount of time costing. How much maintenance can we reduce?" ★ $1M per mill shutdown — significant cost per downtime event. Optimization of maintenance windows against production targets = high value. ★ Production model is a powerful Excel spreadsheet — captures reagent usage, tonnage forecasts, but doesn't cascade into maintenance parts or shipping logistics. ★ What-if scenario demand — "What if we added one more train? How would that increase our potential for moving materials? Build the case for us." This is exactly what an AI scheduling system would enable. ★ Leadership vision — "That's the long-term vision. That's what leadership wants — to be able to look through and ask questions about optimizing every single step of the process."
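
JR's cascade is, at its core, simple arithmetic chained across layers. A toy version in which every coefficient is an assumption rather than a site number:

```python
# Toy cascade of JR's what-if vision: annual tonnage target -> per-line
# production hours -> reagent budget -> available maintenance window.
# Every coefficient below is an assumption for illustration only.
TARGET_TONS = 7_500_000
LINES = 7                       # pellet lines
TONS_PER_LINE_HOUR = 160        # assumed effective rate
REAGENT_COST_PER_TON = 6.70     # assumed $/ton (~$50M/yr at full rate)
SHUTDOWN_COST = 1_000_000       # "$1M per mill shutdown" (Day 2)

hours_needed = TARGET_TONS / (LINES * TONS_PER_LINE_HOUR)
reagent_budget = TARGET_TONS * REAGENT_COST_PER_TON
slack_hours = 8760 - hours_needed   # hours per line available for maintenance
print(f"{hours_needed:,.0f} h/line, reagents ${reagent_budget:,.0f}, "
      f"maintenance window {slack_hours:,.0f} h/line")
```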

Current state: Production model is an Excel spreadsheet (powerful but isolated). No cascading into maintenance parts or shipping logistics. No what-if capability.
Target state: Optimized mine plan with dynamic re-planning capability, cascading from tonnage targets through reagents, shipping, and maintenance
Value estimate: $3-8M/yr (depends on current inefficiency)
Confidence: Medium (leadership articulated the vision; Excel-based current state = clear upgrade path)
Data readiness: Partial — production model Excel exists, BCS shipping data exists, ELLIPS maintenance data exists, but not connected
Systems: Production model (Excel), BCS (shipping), ELLIPS (maintenance), Modular (fleet)
Complexity: Strategic
Dependencies: TLD-16 (shipping coordination), TLD-36 (maintenance budget forecasting), TLD-21 (concentrator feed-forward)

Comparison with other sites:

Reframing of PRJ-02 (Scheduling/S&IOP). At steel mills this is about slab sequencing and order scheduling. At Tilden it's mine sequencing and production planning. The Tilden team articulated the most complete vision of cascading optimization of any site so far — from annual forecast through every operational layer.

Day 2 evidence (Mine Operations) ★★:

★★ Vulcan confirmed as mine planning software. Kevin: "I use a software called Vulcan. It's a 3D mine modeling software. It's got basically the ore body, the rock body, and then you go through and make your cuts." Outputs → Excel for assumptions, historical data from Business Objects/dispatch. Process is tedious. ★★ Planning chain detailed: Vulcan (long-range corporate) → Kevin (monthly/annual Excel) → Gwen (weekly planner, also uses Vulcan tighter detail) → Greg Kavlak (senior supervisor) → daily PowerPoint map packet → 2:00 PM supervisor meeting → night shift handover → 7:00 AM Brad/Greg catch-up. ★★ Month-end reporting pain: Kevin: "At month end I'm trying to answer back a month why we didn't meet the budget plan. I'm taking in eight or nine different reports from eight or nine different sources... I don't even dive into that depth, but all of that information exists." ★ Dispatch data correction = tedious upstream dependency: Molly manually corrects operator button-press errors in dispatch data to keep cycle time tracking accurate. "That process is tedious... she's doing it for 12-hour shifts on end." This feeds into all planning metrics. See TLD-49. ★ Kevin → Jeff consumable forecasting overlap: Kevin uses Vulcan material types to predict drill bit consumption. "If we had a good number that we counted on... we could easily get a little more refined planning model." Currently uses assumptions.

Open questions:
- [x] What mine planning software is used? → Vulcan (3D mine modeling). Corporate does long-range, Kevin does monthly/annual, Gwen does weekly.
- [x] How far ahead is the mine plan? → Annual targets, broken into quarterly forecasts, replanned as needed
- [x] How are customer orders translated into production targets? → Annual tonnage targets from corporate → production model Excel → everything else cascades
- [x] How does the seasonal shipping window affect planning? → Dominates. Vessel availability = primary constraint. See TLD-16.
- [ ] How many distinct ore zones are active at any given time?
- [ ] What is the typical variance from monthly plan at end of month?


TLD-16: Vessel/Shipping Schedule & Rail Coordination ★★★

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | PRJ-07 (reframed for mining logistics) |
| Status | identified ★★★ — Day 2 (Purchasing & Warehouse Logistics session) |
| Source | Day 2 transcript — organically surfaced by railroad/transportation team as their #1 pain |
| Field champion | Kevin (train scheduling), railroad/transportation team, dock operations |

Problem statement:

Tilden's entire output depends on vessel shipping via a single dock. The vessel schedule is controlled by a corporate traffic department and changes daily on a 30-day rolling basis. Three shipping contractors provide 4-6 vessels. Weather, dock breakdowns, and contractor priorities create extreme variability. Kevin (train scheduling) spends 3-4 hours every day replanning based on the latest vessel lineup. Multiple people across traffic, dock, and train crew scheduling are involved. The call board for train crews is manual — crews don't know if they're working tomorrow. When vessels don't show, train crews are wasted. When 4 vessels show at once, they can't staff enough. The dock is 130 years old with single-source failure points. No ground storage at dock — it's flow-through, so the entire mine-to-dock chain must stay synchronized.

Key quotes:

"That's probably our biggest business challenge this year." "We really have a hard time looking 24 hours in advance to what we're really going to do." "Every day, having to keep replanning somebody's train crews based on the vessel schedule." "We're so vessel-dependent." "Intentionally unintelligent" — on the communication siloes between stakeholders.

Communication siloes confirmed:

Site can't talk to vessel captains. Site can't talk to contractor offices. Only corporate traffic talks to contractor corporate. Multiple layers of indirect communication create information lag. "There's only certain people who can talk to certain people."

Day 2 evidence (Purchasing & Warehouse Logistics):

★★★ Daily replanning 3-4 hours — Kevin + multiple stakeholders rebuild schedule every day ★★★ BCS (Best Cargo System) — years of shipping data exists (delays, schedules, cancellations, vessel histories). "We have two years almost of scheduling data in SharePoint. We have endless amount of vessel histories." ★★★ Cascade effect — vessel variability → train crew waste → dock idle → stockpile backs up → concentrator slowdown. All Manpower schedules, maintenance opportunity windows, and work plans affected. ★★ 950 rail car fleet — no predictive data on car lifecycle/wheel wear ★★ Train crew call board — fully manual, daily staffing decisions ★★ Dock is 130 years old — motor failures, single source of failure, no stockpile buffer ★ Internal logistics cascade — mine → concentrator → pellet plant material movement also affected. ~4% delta between iron units delivered vs. plant's measurement. ★ Sentinel disruption example — Vooban described prior work on cargo disruption prediction (92% accuracy predicting Suez blockage). Team receptive.

Proposed solution: Multi-layer logistics optimization:
1. Layer 1 (H1): Daily schedule optimizer — takes BCS vessel lineup, weather forecasts, dock status, crew availability → generates optimal train crew and loading schedule. Replaces 3-4 hours of manual replanning.
2. Layer 2 (H2): Predictive disruption model — learns from BCS historical delays, weather patterns, dock failure modes → provides probability-weighted schedule with contingencies.
3. Layer 3 (H2+): Integrated mine-to-dock flow optimization — ties production rate, stockpile levels, rail scheduling, and vessel arrivals into a single model.
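
A minimal sketch of what Layer 1 could look like as a greedy assignment. Vessel data and crew counts are invented; a real version would read the BCS lineup:

```python
# Greedy sketch of the Layer-1 daily optimizer: given tomorrow's vessel lineup
# and available train crews, assign crews to loadings and flag staffing gaps.
from dataclasses import dataclass

@dataclass
class Vessel:
    name: str
    eta_hour: int        # hour of day
    trains_needed: int

# Illustrative lineup only; names and numbers are not from BCS.
vessels = [Vessel("Vessel A", 6, 3), Vessel("Vessel B", 14, 4)]
crews_available = 5

schedule, shortfall = [], 0
for v in sorted(vessels, key=lambda v: v.eta_hour):
    assigned = min(v.trains_needed, crews_available)
    crews_available -= assigned
    shortfall += v.trains_needed - assigned
    schedule.append((v.name, v.eta_hour, assigned))

print(schedule, "unstaffed trains:", shortfall)
```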

Current state: BCS for vessel data, SharePoint for scheduling history, manual daily replanning (3-4 hrs), call board for crews. No integrated planning tool.
Target state: Automated daily schedule generation, predictive vessel arrival confidence, optimized crew scheduling, reduced wasted crew deployments
Value estimate: $2-5M/yr (crew waste reduction, dock utilization improvement, stockpile flow optimization, reduced demurrage)
Confidence: High — massive existing data in BCS, clear daily pain, the team articulated the need unprompted
Data readiness: Strong — BCS has years of vessel/delay data, SharePoint has scheduling history, dock has operational codes
Systems: BCS (Best Cargo System — vessel scheduling/shipping), SharePoint (schedule history), ELLIPS (maintenance), Modular (fleet dispatch/GPS), Excel (production model)
Complexity: Medium (Layer 1 is solvable with existing data; Layers 2-3 add complexity)
Dependencies: Corporate traffic department cooperation for vessel forecast data

Comparison with other sites:

Reframing of PRJ-07 (Logistics). At Middletown this is coil movement optimization (40 trips/day, 2hr/day manual planning, Palmer's #1 priority). At Tilden the logistics problem is external (vessel-dependent) not internal, but the pattern is identical: complex daily scheduling done manually with too many variables for a human to optimize. Palmer named logistics as his #1 priority — this is a strong cross-site story.

Open questions:
- [x] How is rail scheduling managed? → Manual daily replanning, 3-4 hours/day by Kevin + team
- [x] How are ship arrivals coordinated? → Corporate traffic dept, daily updates on 30-day rolling schedule, 3 contractors
- [ ] What is the average demurrage cost? (not directly stated)
- [x] How is stockpile level tracked? → Drone volumetric surveys (±3%), belt scales, production model Excel
- [ ] Can we get access to BCS data for analysis?
- [ ] Would corporate traffic department cooperate on vessel forecast sharing?


TLD-17: Haul Road & Pit Slope Monitoring

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — Mining-specific (safety + operational) |
| Status | seed |
| Source | Mining safety requirements, deepening pit economics |
| Field champion | TBD (geotechnical / mine engineering) |

Problem statement:

As the pit deepens, haul road conditions and pit wall stability become increasingly critical. Haul road quality affects truck tire life, fuel consumption, and cycle times. Pit slope stability is a safety imperative — wall failures can be catastrophic. With the 20-year expansion plan, monitoring and prediction become more important as the pit geometry changes.

Proposed solution: Geotechnical monitoring system integrating survey data, deformation sensors, and weather data for slope stability prediction. Haul road condition monitoring for maintenance prioritization.

Current state: TBD — what geotechnical monitoring exists?
Target state: Predictive slope stability alerts, optimized haul road maintenance
Value estimate: $0.5-2M/yr (safety value + haul road efficiency)
Confidence: Low-Medium (depends on existing monitoring infrastructure)
Data readiness: To validate
Systems: Survey/GPS, geotechnical sensors, weather station
Complexity: Medium-High
Dependencies: Geotechnical expertise

Open questions:
- [ ] What geotechnical monitoring is in place?
- [ ] What is the current pit depth? How fast is it deepening?
- [ ] Has there been any pit wall instability?
- [ ] How are haul roads maintained? What triggers maintenance?


TLD-18: Environmental Compliance Analytics (Selenium/Water)

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — Mining-specific |
| Status | seed |
| Source | EGLE reports, selenium contamination history |
| Field champion | TBD (environmental) |

Problem statement:

Tilden has a documented selenium contamination issue — elevated levels in Goose Lake and nearby streams, fish advisories. A new water treatment plant is under consideration. Environmental compliance is a critical constraint, with tribal coordination requirements and watershed protection obligations. AI-driven monitoring could improve early detection and treatment optimization.

Proposed solution: Environmental monitoring analytics — predictive models for selenium discharge, water treatment optimization, real-time compliance dashboard, early warning for exceedances.

Current state: Monitoring exists (EGLE regulatory requirement), water treatment under consideration
Target state: Predictive environmental compliance, optimized water treatment, reduced regulatory risk
Value estimate: $0.5-2M/yr (regulatory risk avoidance + treatment efficiency)
Confidence: Medium (regulatory driver provides motivation, but may be lower priority than production)
Data readiness: Likely partial (monitoring data exists per regulations)
Systems: Environmental monitoring, water treatment (if built), EGLE reporting
Complexity: Medium
Dependencies: Water treatment plant decision

Open questions:
- [ ] What environmental monitoring data is collected and how frequently?
- [ ] Is the new water treatment plant approved/funded?
- [ ] What is the current selenium compliance status?
- [ ] What are the regulatory consequences of exceedances?


TLD-19: Tire Management & Prediction

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-03 — Predictive Maintenance Platform |
| Status | identified ★★★ — Day 1 + Day 2 Mine Maintenance |
| Source | Day 1 lunch session + shop visit + Day 2 Mine Maintenance: full lifecycle management detail |
| Field champion | Pete Austin (section mgr, 30 yrs) + Chase Lincoln (reliability eng) |

Problem statement:

Haul truck tires cost $70,000 each (confirmed), chains $100,000/set. Tire life depends on haul road condition, loading practices (shovel overloading by 15% confirmed), truck speed, and road grade. Tire failures cause unplanned downtime and are safety hazards. Real-time tire monitoring already exists — the question is whether the data is being used optimally.

Proposed solution: Predictive tire management using existing real-time monitoring data + Modular dispatch data (road conditions, speed, payload) to predict remaining tire life and optimize replacement scheduling. Integration with operator feedback to reduce damaging behaviors.

Day 1 evidence (Lunch session + Shop visit):

Real-time tire monitoring system confirmed — "We also have a real-time tire monitoring system. Each tire has a temperature, pressure, forces." ★ Can correlate with GPS/dispatch data — "Can match that to the pits and help understand if your turns are too tight." ★ Wake-up health monitor — System that checks tire/brake pad condition. ★ $70,000 per tire, $100,000 per chain set (shop visit) ★ Wheel motors $300,000 each — tire-related damage can cascade to wheel motors. ★ Shovel overloading by ~15% — "Gets overloaded by 15%. If you don't do that right... damage the billion-dollar engine." Operator behavior is a variable — "Isn't some of it still driven by operator interaction, which is of course variable."

Day 2 evidence (Mine Maintenance) ★★★:

★★★ Full tire lifecycle management detail — 108 tires/year × $70K each = ~$7.5M/yr tire spend. Complex lifecycle: brand new tires always go on front (only 2 tires, higher load), moved to rear (4 tires) after ~2,000 operating hours for safety reasons. Rear tires run to failure deliberately — "there's four tires back there, nobody's back there, so it's not a safety issue." ★★ Tire prediction needs multiple variables — Pete: "You need to know operating hours (not meter hours), back out idle time, know if it was in front position or rear position and for how long, temperature, pressure, and what we've done to it over its life." Current approach: operating hours for front, remaining tread depth + condition for rear. ★★ Annual Bridgestone allotment order in August — "In August we have to tell Bridgestone how many tires we're going to use next year." Get it wrong → run out of tires or pay massive premium. World tire shortage is a real risk — "the world doesn't make enough tires." Historically, the quantity is accurate but the process takes too long. ★ Catastrophic failures mostly from road hazards — Rock damage, sharp berms, not from wear prediction misses. Rare on front (safety zone), controlled risk on rear. ★ Tire supplier provides cost-per-hour tracking — When tires come off, remaining tread depth goes into supplier's program. Result is cost/hour/tire from supplier. ★ OEM has tire tracking program — Bridgestone has its own analytics but doesn't account for Tilden's specific duty cycles and road conditions.
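
Pete's variable list translates directly into a per-tire feature record. A hypothetical sketch: field names are stand-ins for the monitoring/Modular feeds, and the front-position wear factor is an assumption that would be fitted from data.

```python
# Sketch of a per-tire record built from Pete's variable list (operating vs.
# meter hours, front/rear position history, temp/pressure, supplier tread data).
from dataclasses import dataclass

@dataclass
class TireRecord:
    meter_hours: float
    idle_hours: float
    front_hours: float            # hours in front position (higher load)
    avg_temp_f: float
    avg_pressure_psi: float
    tread_depth_remaining: float  # from supplier cost-per-hour program

    @property
    def operating_hours(self):
        return self.meter_hours - self.idle_hours  # back out idle time

    def duty_weighted_hours(self, front_factor=1.4):
        """Assumed weighting: front-position hours wear ~40% faster."""
        rear = self.operating_hours - self.front_hours
        return self.front_hours * front_factor + rear

t = TireRecord(2600, 300, 2000, 95, 102, 0.45)
print(t.duty_weighted_hours())  # input to a fitted life model, not shown here
```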

Current state: Real-time tire monitoring (temp, pressure, forces) exists. GPS/dispatch data exists. 108 tires/yr at $70K each ($7.5M+). Front/rear rotation managed manually. Bridgestone allotment order in August. Monitoring data collected but not fully leveraged. Supplier provides cost/hour tracking.
Target state: Predicted per-tire life incorporating duty cycle (not flat hours), optimized front→rear rotation timing, accurate August allotment forecasting, operator-specific feedback on tire-damaging behaviors
Value estimate: $1-3M/yr (upgraded from $0.5-2M — tire life extension + allotment accuracy + reduced catastrophic failures + reduced overloading damage; 10-15% of $7.5M annual spend)
Confidence: High (monitoring infrastructure exists, detailed lifecycle data from supplier, well-understood tire physics)
Data readiness: Good — real-time monitoring, Modular GPS, dispatch cycle data, supplier cost tracking all confirmed
Systems: Real-time tire monitoring system, Modular (dispatch/GPS), ELLIPS (CMMS), Bridgestone supplier portal
Complexity: Medium (upgraded from Quick Win — multi-variable prediction model needed, duty cycle integration adds complexity)
Dependencies: Data integration between tire monitoring, dispatch, and ELLIPS (see TLD-45)

Open questions:
- [x] Are tires currently monitored? → Yes — real-time temp, pressure, forces per tire
- [x] What is the annual tire spend? → 108 tires/yr × $70K = ~$7.5M/yr
- [x] What is the average tire life? How variable? → ~2,000 operating hours on front, then moved to rear. Rear run to failure. Total life varies with road hazards (10% variance estimated). Highly variable due to rock damage.
- [ ] How many tire-related unplanned stops per month?
- [x] Is tire monitoring data stored/accessible for analytics? → Supplier tracks cost/hour. Real-time monitoring exists. Modular dispatch has routes/loads.


TLD-20: Safety Analytics (Proximity/Fatigue/Collision)

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — Mining-specific |
| Status | identified — Day 1 (Lunch session) |
| Source | Day 1 lunch discussion — safety culture strong, paper processes confirmed |
| Field champion | Dan Clarendon (safety/training, 25 years mining) |

Problem statement:

Mining safety at Tilden is culturally strong ("Everybody's got to go home. We all take it really seriously") but operationally paper-based. MSHA-regulated workplace examinations use paper cards. Equipment operators have onboard camera/proximity systems of limited effectiveness. The MSHA fine risk for non-compliance adds urgency.

Day 1 evidence (Lunch session):

Safety culture strong — Pre-tour safety briefing was thorough: "If you feel queasy, let one of us know. We don't want anybody to get hurt." ★ Paper-based safety processes — Take-5 workplace examinations on paper cards. See TLD-24 for detailed evidence. ★ Camera system on equipment — exists but "doesn't give you much information." ★ Fire suppression systems — onboard fire systems on mobile equipment. ★ MSHA fines — "Do we have an understanding of how much in fines we paid?" — fine tracking was discussed. ★ Operator scorecards — "Done some work with providing scorecards back to individual operators." Crew-level performance tracking exists.

Current state: Strong safety culture. MSHA-regulated. Paper-based workplace examinations. Camera systems on equipment (limited). Fire suppression. Fine tracking.
Target state: Digital safety capture, real-time proximity alerts, operator fatigue monitoring, near-miss analytics, automated MSHA compliance
Value estimate: $0.5-2M/yr (compliance risk reduction + incident prevention + fine avoidance)
Confidence: Medium-High (paper process pain clearly articulated, MSHA driver provides motivation)
Data readiness: Partial — dispatch/GPS data exists for equipment tracking, safety data is paper-based
Systems: Modular dispatch (GPS), camera systems, MSHA reporting, paper Take-5 cards
Complexity: Medium
Dependencies: Digital capture infrastructure (see TLD-24)

Comparison with other sites:

Middletown validated safety analytics (MDT-13) with Dave + Palmer + Eric Archer sponsorship — 550-person training example. Mining safety is MSHA-regulated (stricter than OSHA for steel). Dan Clarendon (25 years mining + 4 years safety/training) is a natural champion.

Open questions:
- [x] What safety technology is currently deployed? → Camera systems (limited), fire suppression, GPS tracking via Modular
- [ ] What is the current incident/near-miss rate?
- [ ] How is MSHA compliance currently managed? How much in fines?
- [ ] What safety analytics does the site currently do?


TLD-21: Concentrator Feed-Forward Control ★★★ (NEW → DEFERRED to TLD-P17)

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | TLD-P17 — Mine-to-Concentrator Ore Intelligence (deferred) |
| Status | validated ★★★ — Day 1 + Day 5 Readout. Deferred: Ryan (Mar 28): "I don't think that's where we want to start. Probably two or three projects down the road." Keith (Apr 7) confirmed: concentrator-first, ore tracking later. |
| Source | Day 1 transcript — emerged from multiple statements about ore variability → concentrator response gap |
| Field champion | TBD — Dan McGrath + lab researcher (process engineering + R&D) + mine engineering TBD |

Problem statement:

Tilden has drill hole data that characterizes ore quality at ~10ft spacing across the entire pit. They also have a concentrator that responds dramatically to ore quality changes — reagent suites, recovery rates, and throughput all shift. But there is no predictive model linking drill data to concentrator response. The current approach is reactive: ore enters the plant, the plant responds, operators adjust, and it takes days to reach steady state. With $50M/yr in reagent spend and ~70% iron recovery (vs. ~75% design benchmark, with realistic upside to 80% through ore-adaptive reagent optimization), the cost of this reactive gap is enormous.

Key quotes:

"If there was some learning based on the ore quality that comes in from the mining area, what adjustments happen in the concentrator in order to proficiently process it — that'd be super beneficial." "We don't know which lever to pull at some times." "When a change comes through to get back to steady state, takes a long time so you may pull whatever... it may take several days for you to realize and actually see like okay yeah that helped." "We sample every single drill hole... to understand what's there to help us predict how the concentrator is going to react."

Proposed solution: ML-based feed-forward control model that takes drill hole assay data + geological model + historical concentrator response data and predicts the optimal reagent suite adjustment before the ore reaches the plant. Reduces the days-long reactive feedback loop to a proactive, data-driven adjustment.
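
A sketch of the feed-forward mechanics under assumed data: fit a model from drill-block assay features to the concentrator settings that produced the best steady-state runs, then score upcoming blocks before they reach the plant. All columns and files are placeholders:

```python
# Hedged sketch of the feed-forward idea. The joined historical dataset and
# all column names are invented placeholders for drill assay + DCS data.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

hist = pd.read_csv("blocks_with_response.csv")   # assumed joined dataset
X_cols = ["fe_pct", "silica_pct", "clay_index", "grind_index"]
y_col = "optimal_dispersant_rate"                # taken from best steady-state runs

model = RandomForestRegressor(n_estimators=200).fit(hist[X_cols], hist[y_col])

upcoming = pd.read_csv("next_week_blocks.csv")   # blocks scheduled to the crusher
upcoming["recommended_rate"] = model.predict(upcoming[X_cols])
print(upcoming[["block_id", "recommended_rate"]].head())
```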

Current state: Reactive. Drill data collected but not linked to concentrator optimization. Days-long feedback loop.
Target state: Predictive model pre-adjusting concentrator parameters based on incoming ore quality. Same-shift optimization.
Value estimate: $2.5-5M/yr (reagent savings + recovery improvement + throughput stability)
Confidence: High (data exists on both sides of the gap, problem clearly articulated, ML well-suited to this class of problem)
Data readiness: Partial — drill data exists, concentrator process data likely exists, the bridge is what needs building
Systems: Mine planning (3D models), drill data, concentrator DCS, assay lab, historian
Complexity: Medium (data integration + ML model, not a systems overhaul)
Dependencies: TLD-06 (ore grade data), TLD-07/08 (concentrator process data)

Comparison with other sites:

This is Tilden's version of the information flow problem. At Cleveland, the gap was between operations and maintenance. At Middletown, it was between finishing lines and quality data. At Tilden, it's between the pit (ore quality data) and the concentrator (process response). Same thesis, different manifestation. Palmer scalability: The methodology — using upstream data to predict and optimize downstream process response — is a universal pattern that could apply to any CLF mine with variable ore bodies.

Open questions:
- [ ] What format is drill hole assay data stored in? How far back?
- [ ] What concentrator process data is logged? At what frequency?
- [ ] Has anyone attempted to build this model before?
- [ ] What would the lab researcher / R&D team say about feasibility?
- [ ] How much lead time exists between drill data and ore reaching the plant?


TLD-22: Filter Performance Monitoring (42 Filters) (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | new — Quick Win directly gating the bottleneck |
| Status | identified ★★ — Day 1 first contact |
| Source | Day 1 transcript — site leader described cascading undetected failures |
| Field champion | TBD — concentrator operations |

Problem statement:

The concentrator has 42 filters for dewatering. These filters have poor instrumentation — there is no real-time monitoring of individual filter performance. When a filter degrades, it goes undetected for 2-3 days. By the time operators identify the problem, 3-4 filters may have cascading issues, compounding the impact on the bottleneck process.

Key quotes:

"There's not real good instrumentation on each of the 42... really understanding how each one is operating." "We have an issue, it takes a while to go step 42 filters to find out where those problems are." "Things can pop up and you don't even recognize it for two or three days, and you might not even recognize there's a problem until three or four problems pile up on each other."

Proposed solution: Instrument the 42 filters with basic performance monitoring (pressure differential, flow rate, vibration) and build anomaly detection to flag degradation early. Dashboard for operators showing individual filter health. Alert system for proactive intervention.
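
The analytics half of the proposal is standard anomaly detection. A minimal per-filter degradation flag, assuming a telemetry export; tag names, sampling rate, and thresholds are all assumptions:

```python
# Minimal per-filter anomaly flag: rolling z-score on each filter's vacuum
# signal, aiming to surface drift well inside the current 2-3 day lag.
import pandas as pd

df = pd.read_csv("filter_telemetry.csv", parse_dates=["ts"])  # ts, filter_id, vacuum

def flag_degradation(g, window=96, z_thresh=3.0):  # 96 samples ~ 1 day at 15 min
    base = g["vacuum"].rolling(window)
    z = (g["vacuum"] - base.mean()) / base.std()
    g["degraded"] = z < -z_thresh                  # sustained vacuum loss
    return g

flags = df.groupby("filter_id", group_keys=False).apply(flag_degradation)
print(flags[flags["degraded"]].groupby("filter_id").size())
```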

Day 2 evidence (Process & DCS Engineering):

★★ Filter instrumentation costs quantified — "Every filter tank has its own level controller, but it's isolated stand. It's never been brought into DCS." One pilot filter integrated to DCS works great. Per-filter: "A couple thousand dollars" for level sensor + $1,500-2,000 per AI/AO module (handles 4 filters each, need 11 modules). Total ~$60-80K hardware. ★★ No individual filter production data — "We have a scale — one reads 24 filters worth of production and one reads the other 18." Can't tell which filters are underperforming. ★ Maintenance vs. process disconnect — "Many times maintenance says it's green — it's running. Whereas we'll say it's running, but very inefficiently. We're losing vacuum on this filter." ★ Filter constraint emerging — "We run in periods of time where filters [constrain]. When we're filter constrained, that feeds back to operating time on prior mills and also impacts pellet unit operating time."

Current state: 42 filters with local level controllers — NONE connected to DCS. One pilot filter integrated to DCS "works really good." Cost per filter: ~$2-3K sensor + $375-500 share of AI/AO module = ~$3K total per filter (~$125K for all 42). Production only measured in aggregate (24 on one scale, 18 on another). Maintenance-process disconnect: maintenance sees "green/running" while process engineering sees "running inefficiently, losing vacuum." 2-3 day detection lag for degradation. Cascading failures compound before detection.
Target state: Real-time per-filter monitoring, same-shift anomaly detection, proactive intervention
Value estimate: $1-3M/yr (directly gates concentrator throughput — the bottleneck)
Confidence: High (clear problem statement, proven sensor/analytics technology, directly tied to bottleneck)
Data readiness: Low (the problem IS the lack of instrumentation — needs sensor investment)
Systems: DCS, new sensor infrastructure, analytics platform
Complexity: Quick Win (sensor + analytics, well-understood technology)
Dependencies: Sensor procurement and installation

Comparison with other sites:

Pattern: "instrument the gap." At Cleveland, the gap was data collection on cranes and bag houses. At Middletown, the gap was Ametek cameras at 60% accuracy. At Tilden, it's 42 un-instrumented filters. Same class of problem, different asset.

Open questions:
- [ ] What would it cost to instrument the 42 filters?
- [ ] What filter parameters would be most diagnostic? (pressure, flow, vibration?)
- [ ] Is there DCS capacity to integrate additional sensor inputs?
- [ ] How long is a filter replacement cycle?


TLD-23: Reagent Suite Optimization — Dispersant Standardization (NEW → POV SCOPE)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | TLD-P01 — Concentrator Desliming & Recovery Optimization (POV) |
| Status | validated ★★★ — Day 1 + Mar 28 working session + Apr 7 IE×Tilden: confirmed as POV Track 1 |
| Source | Day 1 transcript + Ryan email + Mar 28 Keith/Ryan working session + Apr 7 IE×Tilden scoping call |
| Field champion | Keith Holmgren (Sr Director Mining Technology, 32 yrs at Cliffs, 18-19 at Tilden — THE concentrator SME) + met tech team (TBD) |

Problem statement:

Tilden spends approximately $50 million per year on chemical reagents for the concentrator. The desliming circuit — where polyacrylic acid (PAA) dispersant controls how much tailings are rejected — is the most operator-dependent variable. Met techs manually key in dispersant rates across 4 sections (sections 2-3, 4-6, 7-9, 10-12), chasing an optimum that shifts with ore variability, water temperature, pH, and ore mineralogy. The PAA acts as a "throttle" on weight rejection: zero PAA → ~10% rejection, max PAA → ~50% rejection, with the optimum around 30-35%. The relationship between PAA dose and weight rejection varies with multiple uncontrolled variables. If under-dispersed, the flotation circuit gets overwhelmed and can literally flood the plant. The operator response bias is conservative: protect against flooding rather than optimize recovery.

Apr 7 context: Keith Holmgren provided the most detailed technical walkthrough of the desliming process to date. The 4 sections each combine 3 primary mills into a homogeneous slurry at a splitter point where PAA is added. Each section is controlled independently. The starch selectively flocculates iron ore minerals; calcium/magnesium hardness non-selectively flocculates particles; the PAA chelates the hardness ions to control the balance. This is the core mechanism the POV will model.

Key quotes:

"We spend, I should know this number, but I don't. I'm probably not exaggerating, probably 50 million dollars a year on chemicals just to help us process." (Day 1) "The goal would be to develop a program to standardize the measurements taken by the met tech and use the information along with the available process data to standardize the adjustments made to the process." (Ryan email) "These folks are coming in and keying in these numbers right here. These four numbers on how much dispersant they add that are chasing an optimum." (Keith, Mar 28) "This polyacrylic acid addition becomes a throttle on how much tailings we throw away in this D-slime thickener." (Keith, Apr 7) "If I'm a good met tech, what I do is I take that sample at the same time the lab takes their metallurgical samples... and I store that in my head." (Keith, Apr 7) "Let's make as much as we can with this AI system. All responses like that best guy. And even if we don't get it perfect, at least it's consistent." (Keith, Apr 7)

Proposed solution: Data-driven dispersant dosing standardization. Correlate met tech measurements, process variables (tailing sump levels, DTU pump speeds, thickener profiles, dispersant rates), and metallurgical balance outcomes to recommend optimal dispersant adjustments per section. Goal: make every met tech response as good as the best 35-year veteran's response — consistent, then improvable.
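
One simple way to encode "respond like the best met tech": a nearest-neighbor lookup of historical shifts with similar process conditions, returning the dispersant rates that landed in Keith's 30-35% rejection band. Column names are illustrative placeholders pending the Apr 7 confirmation of what is actually logged:

```python
# Sketch of the "best met tech" idea: find historical shifts whose process
# conditions most resemble current conditions, and return the dispersant
# settings that produced in-band weight rejection. Columns are invented.
import pandas as pd
from sklearn.neighbors import NearestNeighbors

hist = pd.read_csv("desliming_shifts.csv")  # Pi + grab samples + met balance
cond_cols = ["tail_sump_level", "dtu_pump_speed", "thickener_density", "ore_clay_index"]
good = hist[hist["weight_reject_pct"].between(30, 35)]  # Keith's optimum band

nn = NearestNeighbors(n_neighbors=5).fit(good[cond_cols])

def recommend(current_conditions):
    _, idx = nn.kneighbors([current_conditions])
    return good.iloc[idx[0]][["paa_rate_sec_2_3", "paa_rate_sec_4_6",
                              "paa_rate_sec_7_9", "paa_rate_sec_10_12"]].mean()

print(recommend([3.2, 74.0, 1.18, 0.6]))  # recommended PAA rate per section
```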

Current state: 1974-era reagent suite design, reactive adjustment, days-long feedback loop, $50M/yr spend. Met techs manually key in 4 dispersant numbers per section. Operator-to-operator variability is significant — experienced operators (28+ yrs) being replaced by staff with 2 yrs and half a week of training.
Target state: Standardized, data-driven dispersant adjustment recommendations per section. Consistent response to the same process conditions. Reduced reagent spend + improved recovery.
Value estimate: $2.5-5M/yr (5-10% reduction in $50M reagent spend = $2.5-5M, plus recovery gains). Ryan: 0.5% weight recovery = 100K tons = tens of millions of dollars.
Confidence: High — POV-level commitment from site leadership (Ryan + Keith) and IE (Bob Zadel + Erico)
Data readiness: Kevin has a spreadsheet of D-slime grab samples (4 samples/day). Pi historian has process data. Apr 7 action: Keith to confirm whether dispersant numbers are logged in DCS/Pi or paper only.
Systems: Concentrator DCS (Foxboro IA), Pi historian, D-slime grab sample records, reagent dosing logs
Complexity: Medium (data integration + ML model, well-scoped POV)
Dependencies: Met tech cooperation for workflow mapping, Keith Holmgren availability for domain knowledge capture

Comparison with other sites:

No direct parallel at steel mills. But the value pattern is familiar: large ongoing cost + variable input + reactive management = optimization opportunity. At Middletown, the analog was the $104M inventory with 10% duplicates (MDT-31). Different domain, same economic logic.

Open questions:

- [x] What reagent types are used? → PAA (polyacrylic acid) dispersant in desliming, cooked corn starch (selective flocculant), amine collector in flotation, pH management (lime), water chemistry control (Keith, Apr 7)
- [ ] What is the breakdown of the $50M spend by reagent type?
- [x] How are reagent dosing decisions currently made? Who decides? → Met techs manually key in 4 dispersant numbers per section. Decisions based on beaker settling tests + metallurgical balance results (every 6 hrs) + experience (Keith, Apr 7)
- [ ] What reagent dosing data is logged? → Apr 7 action item: confirm whether dispersant numbers are in DCS/Pi or paper
- [ ] Has anyone from the R&D lab attempted to model optimal reagent usage?

Apr 7 scoping note: Water chemistry explicitly out of scope per Keith — "more of a course longer term control" vs. dispersant which is "instantaneous real time, changing every shift, sometimes every 10 minutes." Ryan agreed: reference water chemistry data but don't try to control it in the POV.


TLD-24: Workplace & Equipment Inspection Digitization (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-06 — Maintenance Workflow Digitization |
| Status | identified ★★ — Day 1 (Lunch session — multiple speakers, strong energy) |
| Source | Day 1 lunch discussion — Take-5 workplace exams, mobile equipment inspections, paper reporting |
| Field champion | Dan Clarendon (safety/training) |

Problem statement:

Every shift at Tilden, every worker fills out a "Take-5" workplace safety examination card — a paper card documenting safety hazards, corrective actions, and workplace conditions. 50 cards per shift go to the supervisor, who manually punches data into a spreadsheet. No reminders for corrective actions. No trending. No follow-up. Separately, mobile equipment pre-trip inspections are entirely paper-based, taking up to 2 hours. Production reporting (bucket counts, tonnage) is also paper → spreadsheet with multi-day processing lags.

Key quotes:

"Every person is required to do a quick examination for their shift and we do it on a card. Paper card comes in to the supervisor. Supervisor punches that into a spreadsheet." "He has fifty cards in his hand and he wants to be out in the field and see what's going on." "There's not usually very many [comments] because people just write in and that's just the stuff they like to see." "Directions to be as simple as taking your smartphone and capturing a video or some pictures. All that information would flow freely. Nobody would have to touch anything and the reports would build themselves." "The equipment is all paper-based. 2 hours and things like that."

Proposed solution: Digital inspection capture — voice/video/photo for Take-5 workplace exams, equipment pre-trip inspections, and production reporting. AI extracts structured data, builds exception reports, triggers corrective action reminders. Supervisors get automated dashboards instead of 50 paper cards.
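
A minimal sketch of the target data model and the exception report the supervisor would see instead of 50 paper cards. All field names are assumptions, not the actual Take-5 card layout.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CorrectiveAction:
    description: str
    due: date
    closed: bool = False

@dataclass
class Take5Record:
    worker_id: str
    shift_date: date
    hazards: list[str] = field(default_factory=list)
    actions: list[CorrectiveAction] = field(default_factory=list)

def overdue_actions(records: list[Take5Record], today: date) -> list[tuple[str, str]]:
    """Exception report: open corrective actions past their due date —
    the 'reminder' piece that the paper process has no way to provide."""
    return [(r.worker_id, a.description)
            for r in records for a in r.actions
            if not a.closed and a.due < today]
```

The AI extraction step (voice/photo → `Take5Record`) sits upstream of this; the schema is what makes trending and follow-up possible at all.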

- Current state: Take-5 on paper cards → supervisor spreadsheet. Equipment inspections paper-based (2 hours). Production reporting manual (multi-day lag). No corrective action tracking.
- Target state: Voice/camera capture → automatic structured data → exception reporting → corrective action tracking → compliance dashboards
- Value estimate: $0.3-1M/yr (supervisor time savings + compliance improvement + corrective action completion + MSHA fine avoidance)
- Confidence: High (clear problem, strong group energy, solution concept validated in discussion, proven at MDT as same-pattern opportunity)
- Data readiness: Low-Medium (paper data exists but not digital — this IS the digitization)
- Systems: Mobile devices, MSHA reporting, supervisor spreadsheets, ELLIPS
- Complexity: Quick Win (mobile app + AI extraction, well-understood technology)
- Dependencies: Mobile device policy, network coverage

Comparison with other sites:

Same pattern as Middletown: paper processes → digital capture → automated reporting. MDT had 30-page DMP packets. Tilden has 50 paper Take-5 cards per shift. The solution pattern is identical — voice/video capture with AI extraction. This is a Day 1 quick win that demonstrates value fast.


TLD-25: Mine Production Reporting Automation (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | new — Mining-specific |
| Status | identified ★ — Day 1 (Lunch session) |
| Source | Day 1 lunch discussion — paper-based production reporting |
| Field champion | Brad (production, 21 years) |

Problem statement:

Mine production reporting (bucket counts, tonnage, equipment hours) is paper-based. Reports take "at least a few full days" to process. Tonnage must be manually converted between different units. Manual entry introduces errors. Leadership can't see production data until days after the fact.

Key quotes:

"Paper reports are being submitted to me. Right now bucket counts, the numbers, tonnage is very cumbersome." "A lot of manual entry, you have to re-enter in Somerset because it needs to be converted from one tonnage to another." "How long does it usually take? At least a few full days."

Proposed solution: Automated production data capture from Modular dispatch (truck cycles, tonnage) and crusher DCS (throughput) into a production reporting dashboard. Eliminate paper reports and manual conversion. Real-time shift-level production visibility.
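
The manual re-entry quote centers on tonnage conversion, so that step is worth pinning down in code. Which units Tilden actually converts between is an assumption; the factors themselves are exact definitions.

```python
# Exact unit definitions; which pair Tilden converts between is unconfirmed.
LB_PER_LONG_TON = 2240
LB_PER_SHORT_TON = 2000
KG_PER_LB = 0.45359237

def long_tons_to_metric(lt: float) -> float:
    """Long (gross) tons -> metric tonnes."""
    return lt * LB_PER_LONG_TON * KG_PER_LB / 1000

def short_tons_to_metric(st: float) -> float:
    """Short (US) tons -> metric tonnes."""
    return st * LB_PER_SHORT_TON * KG_PER_LB / 1000

print(round(long_tons_to_metric(10_000), 1))  # 10160.5 t
```

Automating this in the integration layer removes one whole class of manual-entry errors before any dashboard work begins.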

- Current state: Paper-based reporting. Multi-day processing lag. Manual tonnage conversion. Multiple forms.
- Target state: Automated production reporting from existing systems (Modular + DCS). Real-time dashboards.
- Value estimate: $0.2-0.5M/yr (time savings + data accuracy + faster decision-making)
- Confidence: High (Modular and DCS already capture the raw data — this is a reporting/integration layer)
- Data readiness: Good — Modular captures truck data, DCS captures crusher/plant data. Integration is the gap.
- Systems: Modular dispatch, crusher DCS, production spreadsheets
- Complexity: Quick Win (data integration + dashboard)
- Dependencies: Modular data access, DCS data access


TLD-26: Operator Performance & Payload Analytics (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | new — Mining-specific |
| Status | identified ★ — Day 1 (Lunch session + Shop visit) |
| Source | Day 1 multiple sessions — operator scorecards, payload histograms, overloading damage |
| Field champion | Tyler Craig (mining engineer) |

Problem statement:

Haul truck operator behavior directly impacts fleet costs — overloading by 15% damages engines ($millions per engine) and tires ($70K each). Payload histograms exist in the Modular system but feedback to operators is manual and after-the-fact. Some operator scorecards are being created manually, but the process is cumbersome.

Key quotes:

"Done some work with providing scorecards back to individual operators for the crew." "Going through our payload increase — a lot of manual collection. Loading histograms and everything else could be kicked to a supervisor." "Gets overloaded by 15%. If you don't do that right — damage the billion-dollar engine." "A lot of calculations still being run by a human. There's mistakes that can be there."

Proposed solution: Automated operator performance analytics from Modular dispatch data — payload compliance, cycle times, fuel efficiency, hauling behavior. Real-time or shift-end scorecards delivered directly to supervisors. Identify top performers and coaching targets automatically.
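
A minimal sketch of the scorecard calculation, assuming a flat per-cycle export from Modular; column names and the rated payload figure are placeholders.

```python
import pandas as pd

# Hypothetical Modular dispatch export: one row per loaded haul cycle.
cycles = pd.read_csv("haul_cycles.csv")
RATED_PAYLOAD_T = 290  # illustrative rated payload, tonnes

cycles["overload_pct"] = 100 * (cycles["payload_t"] / RATED_PAYLOAD_T - 1)
scorecard = (cycles.groupby("operator_id")
             .agg(cycles_n=("payload_t", "size"),
                  mean_payload=("payload_t", "mean"),
                  pct_over_10=("overload_pct", lambda s: (s > 10).mean() * 100))
             .sort_values("pct_over_10", ascending=False))
print(scorecard.head(10))  # coaching targets: highest overload rates first
```

This replaces "a lot of calculations still being run by a human" with one deterministic pass over data Modular already captures.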

- Current state: Manual scorecard creation from dispatch data. Payload histograms collected but not automated to feedback. Overloading damage is a known problem.
- Target state: Automated operator scorecards, real-time payload feedback, supervisor coaching dashboards
- Value estimate: $1-3M/yr (reduced equipment damage from overloading + fuel savings + productivity improvement from coaching)
- Confidence: Med-High (data exists in Modular, manual scorecards already being attempted, clear $ impact from overloading)
- Data readiness: Good — Modular captures cycle times, payloads, routes. Automation is the gap.
- Systems: Modular dispatch, GPS, onboard weighing, supervisor tools
- Complexity: Quick Win (analytics on existing Modular data)
- Dependencies: Modular data access, supervisor buy-in for feedback process


TLD-27: Environmental Compliance Knowledge System (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | new — Mining-specific (aligns with TLD-14 Virtual SME) |
| Status | identified ★ — Day 1 (End of day session) |
| Source | Day 1 end-of-day — Brent (environmental manager) |
| Field champion | Brent (environmental manager) |

Problem statement:

Tilden has dozens of environmental compliance states (selenium, water quality, wetlands, air permits) tracked in "outdated spreadsheets or in my head or in two other guys' heads." Compliance tracking is knowledge-dependent, with seasonal variability adding complexity. If Brent or the other 2-3 people who know the compliance landscape leave, the risk of EGLE violations increases dramatically.

Key quotes:

"We got dozens and dozens of compliance states that reside either in an outdated spreadsheet or in my head or in two other guys' heads." "From those we gather all this data that we manually track against." "Seasonalities, a big issue. Things in the base and where we do see some inductions out there."

Proposed solution: Environmental compliance knowledge base — structured capture of all compliance requirements, thresholds, monitoring schedules, and response procedures. AI-assisted tracking of compliance status against regulatory requirements. Automated alerts for approaching thresholds. Integration with Esri GIS tools for geospatial compliance tracking.
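
A minimal sketch of what "structured capture plus automated alerts" could look like. Parameter names, limits, and intervals below are placeholders, not actual permit values.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ComplianceRule:
    parameter: str          # e.g. "selenium_ug_per_l" (illustrative)
    limit: float
    alert_fraction: float   # warn when a reading exceeds this share of limit
    monitor_every_days: int

def check(rule: ComplianceRule, reading: float,
          last_sampled: date, today: date) -> list[str]:
    """Threshold and schedule checks that today live in 2-3 people's heads."""
    alerts = []
    if reading >= rule.limit:
        alerts.append(f"{rule.parameter}: EXCEEDANCE ({reading} >= {rule.limit})")
    elif reading >= rule.alert_fraction * rule.limit:
        alerts.append(f"{rule.parameter}: approaching limit ({reading})")
    if (today - last_sampled).days > rule.monitor_every_days:
        alerts.append(f"{rule.parameter}: sampling overdue")
    return alerts

rule = ComplianceRule("selenium_ug_per_l", limit=5.0, alert_fraction=0.8,
                      monitor_every_days=30)
print(check(rule, reading=4.3, last_sampled=date(2026, 3, 1), today=date(2026, 4, 16)))
```

The point of the structure is knowledge preservation: once a rule is a record rather than a memory, the seasonal variants can be encoded the same way.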

- Current state: Spreadsheets + tribal knowledge (2-3 people). Manual tracking. Seasonal variability managed from experience.
- Target state: Structured compliance knowledge base, automated monitoring alerts, Esri-integrated geospatial tracking
- Value estimate: $0.3-1M/yr (compliance risk reduction + regulatory fine avoidance + knowledge preservation)
- Confidence: Medium (clear problem, but scope depends on Esri integration and regulatory complexity)
- Data readiness: Low-Medium (monitoring data exists per regulations, compliance rules are in heads/spreadsheets)
- Systems: Environmental monitoring systems, Esri GIS (planned), EGLE reporting, spreadsheets
- Complexity: Medium
- Dependencies: Esri partnership (Thursday workshop), environmental monitoring data access

Comparison with other sites:

No direct parallel at steel mills. Mining has heavier environmental regulatory burden (EGLE, MSHA, tribal coordination). This is a specific instance of the Knowledge Capture theme — environmental compliance knowledge concentrated in 2-3 heads. Burns Harbor has EPA history — may be relevant there too.


TLD-28: Utilities/Energy Consumption Forecasting (NEW)

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — Mining-specific |
| Status | identified — Day 1 (End of day session) |
| Source | Day 1 end-of-day — power consumption, natural gas budgeting |
| Field champion | TBD (production planning) |

Problem statement:

Utilities and energy costs are significant and hard to predict. The power contract is "difficult to understand" and consumption tracking is "complicated." Natural gas forecasting has gone wrong (Burns Harbor January spike example). Drivers are production rate, equipment running hours, ore type (high-flux pellets consume more energy), and weather. Budget variance is a recurring problem.

Key quotes:

"The contract is difficult. Reconciling that, understanding that better — opportunity for us." "There's always variance on either side and part of it is how the possible, how it all comes in, how we process it." "It goes back to that discussion about ore quality — lots of variability in usage rate."

Proposed solution: Energy consumption forecasting model linking production schedule, ore type, equipment utilization, and seasonal factors to predict power and natural gas usage. Budget variance reduction through better forecasting.
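
A minimal sketch linking the drivers named above (production, run hours, ore type, weather) to monthly usage. The data layout and column names are assumptions.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical monthly history: tonnage, run hours, ore type, weather, usage.
hist = pd.read_csv("monthly_energy.csv")
hist["is_high_flux"] = (hist["ore_type"] == "high_flux").astype(int)

X = hist[["tonnes", "run_hours", "is_high_flux", "heating_degree_days"]]
model = LinearRegression().fit(X, hist["nat_gas_mmbtu"])

# What-if for a planned month — compare against the budget line.
plan = pd.DataFrame([{"tonnes": 650_000, "run_hours": 680,
                      "is_high_flux": 1, "heating_degree_days": 1200}])
print(model.predict(plan)[0])
```

Even a linear baseline like this gives the budget process an ore-type and weather adjustment it currently lacks; anything fancier can be judged against it.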

- Current state: Production model exists ("two clicks and run hours, that's the local thing that spits out some suggestions"). Budget variance recurring. Power contract complexity.
- Target state: ML-based energy forecasting, budget variance reduction, optimized energy procurement
- Value estimate: $0.5-2M/yr (budget variance reduction + energy procurement optimization)
- Confidence: Medium (clear pain, but lower priority than concentrator optimization)
- Data readiness: Partial — production model exists, energy data exists, integration TBD
- Systems: Production planning tool, power metering, natural gas metering, DCS (equipment hours)
- Complexity: Medium
- Dependencies: Energy data access, production planning data


TLD-29: HR/Workforce Overtime Forecasting (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | new — Cross-site potential |
| Status | identified — Day 1 (End of day session) |
| Source | Day 1 end-of-day — HR workforce forecasting discussion |
| Field champion | TBD (HR/operations) |

Problem statement:

Workforce hours and overtime forecasting is done manually in spreadsheets. Historical data exists on premiums, headcount, and overtime, but predicting forward hours, overtime spend, and scheduling is "really hard." Holiday show-ups, absences, and seasonal patterns create variability that manual methods can't capture well.

Key quotes:

"I have all these inputs. And yet, it's really hard to try to predict forward how many hours people work. Overtime. When they pay." "It just seems like something — spreadsheets and data, you pull it all together. You should be able to pull that together to make a better forecast."

Proposed solution: ML-based workforce forecasting using historical hours, overtime, absence patterns, seasonal factors, and production schedule to predict weekly/monthly labor costs and identify scheduling optimization opportunities.
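
Given the seasonal/holiday pattern named above, a sensible first step is a seasonal baseline before any ML — sketched here with an assumed weekly-hours layout.

```python
import pandas as pd

# Hypothetical weekly history: week_start, headcount, overtime_hours, ...
hrs = pd.read_csv("weekly_hours.csv", parse_dates=["week_start"])
hrs["iso_week"] = hrs["week_start"].dt.isocalendar().week

# Naive seasonal forecast: same ISO week averaged across prior years.
baseline = hrs.groupby("iso_week")["overtime_hours"].mean()
print(baseline.loc[27])  # e.g. expected overtime around the July 4 week
```

Any ML model should have to beat this baseline to justify itself; the baseline alone may already improve on the current spreadsheet guesswork.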

- Current state: Spreadsheet-based manual forecasting. Historical data exists. Poor predictability.
- Target state: Automated workforce forecasting, overtime prediction, scheduling optimization
- Value estimate: $0.2-0.5M/yr (overtime cost reduction + scheduling efficiency)
- Confidence: Medium (clear need, lower priority than operational improvements)
- Data readiness: Partial — historical HR data in spreadsheets, needs structuring
- Systems: HR spreadsheets, payroll, scheduling
- Complexity: Quick Win (once data is structured)
- Dependencies: HR data access


TLD-30: Parts Warehouse Digitization (Barcode/Scanner) (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-06 — Maintenance Workflow Digitization |
| Status | identified ★ — Day 1 (Shop visit) |
| Source | Day 1 shop visit — manual inventory management confirmed |
| Field champion | TBD (warehouse/materials management) |

Problem statement:

The parts warehouse at Tilden manages inventory through ELLIPS but with entirely manual processes — no barcode scanning, no automated tracking. Parts charged to work orders are tracked manually. For equipment worth $300K per wheel motor and $70K per tire, inventory accuracy is critical but dependent on human data entry.

Key quotes:

"They could use barcode, they could use scanners. Now that's happening right now. It's all manual." "Material management program to ELLIPS." "These are already charged, not charged — charged to work orders."

Proposed solution: Barcode/RFID scanning infrastructure for parts warehouse. Integration with ELLIPS for real-time inventory tracking. Automated work order charging. Foundation for procurement automation (TLD-13) and PdM parts planning (TLD-02).

- Current state: ELLIPS for inventory management. All manual data entry. No scanning infrastructure.
- Target state: Barcode/scanner-based warehouse, real-time inventory in ELLIPS, automated work order charging
- Value estimate: $0.2-0.5M/yr (inventory accuracy + time savings + enables TLD-13 procurement automation)
- Confidence: High (straightforward technology, clear pain, ELLIPS integration exists)
- Data readiness: Good — ELLIPS already has part numbers and work orders, just needs scanning capture layer
- Systems: ELLIPS (CMMS/inventory), barcode/RFID scanners, warehouse management
- Complexity: Quick Win (proven technology, ELLIPS integration point exists)
- Dependencies: Scanner hardware procurement, ELLIPS configuration

Day 2 evidence (Purchasing & Warehouse Logistics):

- Vendor-managed inventory completely untracked — safety supplies (glasses, etc.) managed by vendors. "They've been told a rough quantity that we want to have on hand. No, there's nobody keeping track of that." Only data: monthly spreadsheet from vendor.
- ★ No receiving verification — "From a receiving standpoint, when it comes in, they're not necessarily counting. There's 50 cases here, 20 cases safety glasses — it is what it is."
- ★ Project already in place — acknowledged that digitization projects exist to change this, but "implementation is going to be a lot of manual entry."

Comparison with other sites:

Same inventory management pain as Middletown ($104M inventory, 32K parts, 10% duplicates). Tilden's specific gap is the physical scanning infrastructure. This is a foundation piece that enables TLD-13 (procurement automation) and improves TLD-02 (PdM parts planning). AI can reduce the "lot of manual entry" barrier by automating part matching and data entry.


TLD-31: Stockpile Ore Distribution Modeling ★★ (NEW)

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — ★★ Mining-specific, zero hardware needed |
| Status | identified ★★ — Day 2 (Process & DCS Engineering) |
| Source | Day 2 transcript — organically surfaced by process engineering team |
| Field champion | Keith (technical group) + Todd Davis (mine engineering) + Dan Collins (concentrator) |

Problem statement:

The concentrator has 12 grinding lines fed from a stockpile via bottom-draw chutes. A tripper system deposits ore onto the stockpile over these lines, changing position every 15 minutes (manually operated). Extreme variation exists between concentrator sections (2-3, 4-6, 7-9, 10-12) — one section may get a heavy dose of carbonate ore while another gets minimal. This causes wildly different reagent responses per section. The team believes this is driven by heterogeneous ore distribution in the stockpile. GPS on trucks knows where each load originated, quality data (CRV values) is assigned per truck, and the tripper position is tracked. But NO model exists linking stockpile deposition patterns to per-section feed quality.

Key quotes:

"Oftentimes, you'll see extreme variation between those sections, and it's oftentimes tied to in my belief heterogeneous distribution of [ore types]." "In a Panacea world, the best thing could have in front of the concentrator is a stir mix. The problem is that's hard to do with rock." "If we had some knowledge of where along the modeling, is that ore in terms of which section represents what is represented in some chunk of time — we may be able to come back and control." "We have the hardware we need."

Proposed solution: ML model that takes GPS truck origin data + ore quality (CRV values, mineralogy) + crusher arrival time + tripper position + stockpile geometry to model per-section ore distribution over time. Output: predicted per-section ore quality that enables proactive reagent adjustments. Self-improving model that correlates predictions with actual concentrator section response.
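
A minimal sketch of the deposition-to-section join that would underpin such a model. Column names, the daily grouping, and the tripper-position band per section are all assumptions; residence-time lag is deliberately left out here.

```python
import pandas as pd

# Hypothetical exports: truck dumps (truck_id, crv, tonnes, dump_time)
# and the tripper position log (ts, position_m).
dumps = pd.read_csv("truck_dumps.csv", parse_dates=["dump_time"])
tripper = pd.read_csv("tripper_log.csv", parse_dates=["ts"])

def section_for(position_m: float) -> str:
    # Assumed position bands over the 12 grinding lines; real geometry TBD.
    bands = [(0, 90, "2-3"), (90, 180, "4-6"), (180, 270, "7-9"), (270, 360, "10-12")]
    return next(s for lo, hi, s in bands if lo <= position_m < hi)

# Attach the tripper position active at each dump time (as-of join),
# then map position to the section it feeds.
dumps = pd.merge_asof(dumps.sort_values("dump_time"),
                      tripper.sort_values("ts"),
                      left_on="dump_time", right_on="ts")
dumps["section"] = dumps["position_m"].map(section_for)

# Tonnage-weighted CRV deposited per section per day — a first-order proxy
# for per-section feed quality once stockpile residence lag is added.
dumps["crv_x_t"] = dumps["crv"] * dumps["tonnes"]
g = dumps.groupby([dumps["dump_time"].dt.date, "section"])
print((g["crv_x_t"].sum() / g["tonnes"].sum()).tail())
```

Correlating this deposited-quality series against actual per-section reagent response is the self-improving loop the team described.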

- Current state: GPS on all trucks (Modular), quality data per truck, tripper position logged, stockpile geometry known. Dead zones between lines from inactive core piles. NO model linking deposition to per-section feed quality. Operators react to section-by-section variation after the fact.
- Target state: Predictive stockpile ore distribution model. Per-section feed quality predictions enabling proactive reagent management. Reduced section-to-section variation. Correlated with actual response for continuous improvement.
- Value estimate: $2-5M/yr (reduced reagent waste from section variation + improved recovery consistency + reduced operator firefighting)
- Confidence: Med-High (hardware exists, data exists on both sides, team organically proposed the concept, self-improving model)
- Data readiness: Good — Modular GPS, truck quality data, crusher timestamps, tripper position all confirmed. Integration is the work.
- Systems: Modular dispatch (GPS/truck data), crusher DCS, concentrator DCS, mine planning (ore quality assignments)
- Complexity: Medium (data integration + ML modeling, no new hardware)
- Dependencies: TLD-06 (ore quality data), TLD-21 (feed-forward control — stockpile model feeds INTO feed-forward)

Comparison with other sites:

No parallel at steel mills. This is uniquely mining. But the methodology — using upstream logistics data to predict downstream process variability — is a universal pattern. Key advantage: zero hardware investment needed. All data sources already exist. This is purely a data integration + ML problem.

★ KEY INSIGHT: TLD-31 may be the most achievable piece of TLD-21 (feed-forward control). Instead of trying to predict concentrator response from drill hole data (complex, long data pipeline, many unknowns), start by modeling the stockpile distribution to explain per-section variation. This is a more tractable problem with higher-confidence data and could be the Phase 1 entry point for the entire concentrator optimization bundle.

Open questions:

- [ ] What is the tripper position logging frequency? Real-time or manual?
- [ ] How many stockpile draw points (chutes) per section?
- [ ] What is the residence time of ore in the stockpile?
- [ ] Can the model account for dead zones / inactive core piles?
- [ ] What is the current section-to-section reagent cost variance?


TLD-32: Concentrator Operator Decision Support — Live CRP (NEW → POV SCOPE)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | TLD-P01 — Concentrator Desliming & Recovery Optimization (POV) |
| Status | validated ★★★ — Day 2 + Mar 28 working session + Apr 7: Keith's "live CRP" concept confirmed as POV deliverable |
| Source | Day 2 transcript + Mar 28 Keith/Ryan working session + Apr 7 IE×Tilden scoping call |
| Field champion | Keith Holmgren (conceptualized the live CRP), Scott Hebert (ops specialist 30yr), Sean Halston (ECSM) |

Problem statement:

Concentrator control operators have highly variable skill levels and approaches. The existing Control and Response Plans (CRPs) are static Word documents in the ISO system — "people that tend to be technicians, frankly, are the guys that throw away the directions and they never read." Decision tree charts "dumb down the jobs" — operators learn the chart but not the underlying process. Keith Holmgren's vision is a "live CRP" — an analytic tool that evaluates the current state of the process, identifies the current bottleneck (primary mill limited vs. pebble mill limited), and provides actionable recommendations rather than requiring operators to navigate if-then logic trees.

Apr 7 context: Keith described how an experienced met tech operates: take beaker samples at the same time as lab samples, cross-reference visual assessment with metallurgical results, store the calibration in your head, then check every hour or two for changes. When bed height changes dramatically (e.g., 100ml → 150ml), take immediate action on dispersant or risk flooding flotation. New met techs with 2 years experience and half a week of training cannot replicate this. The AI system should make "all responses like that best guy — and even if we don't get it perfect, at least it's consistent."

Buy-in emphasis (Apr 7): Keith stressed from expert system experience that "if you don't have the control room operators in love with it, everybody will find reasons to shoot at the thing that's trying to do a better job than they've been doing." Ryan wants a daily management board where process engineers check whether the system's recommendations were followed.

Key quotes:

"It's more of like a live control and response plan... taking all the if-then boxes and saying, never mind all that, here's where you're at and here's where we suggest you go this shift." (Keith, Mar 28) "Let's make as much as we can with this AI system. All responses like that best guy. And even if we don't get it perfect, at least it's consistent and then we can come in and tweak it." (Keith, Apr 7) "If you don't have the control room operators in love with it and the metallurgists in love with it, everybody will find reasons to shoot at the thing." (Keith, Apr 7) "Every morning when our process engineers come in... they'd be able to look at a board and say, here's what we were supposed to do. Did we do all those things?" (Ryan, Apr 7)

Proposed solution: AI-powered decision support for concentrator operations that: (1) evaluates current process state from DCS data + met tech measurements, (2) classifies the current operating regime and bottleneck, (3) recommends specific dispersant adjustments and corrective actions based on best-operator patterns, (4) replaces static CRP documents with live, context-aware guidance. Advisory mode only — no closed-loop control.
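
A minimal sketch of the classify-then-recommend shape of a live CRP. Thresholds, variable names, and the guidance text are placeholders for what would be learned from best-operator patterns, not actual CRP content.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    primary_mill_power_pct: float   # % of rated draw (illustrative)
    pebble_mill_power_pct: float
    deslime_bed_height_ml: float    # beaker settling test result

def classify(s: Snapshot) -> str:
    """Name the current operating regime instead of walking an if-then tree."""
    if s.primary_mill_power_pct > 95:
        return "primary-mill limited"
    if s.pebble_mill_power_pct > 95:
        return "pebble-mill limited"
    return "unconstrained"

GUIDANCE = {  # advisory only — no closed-loop control
    "primary-mill limited": "Check feed size distribution; verify crusher setting.",
    "pebble-mill limited": "Review pebble recycle rate; consider dispersant trim.",
    "unconstrained": "Hold current setpoints; re-sample in 1-2 hours.",
}

snap = Snapshot(97.0, 88.0, 120.0)
regime = classify(snap)
print(regime, "->", GUIDANCE[regime])
```

The "here's where you're at, here's where we suggest you go" framing maps directly to this two-step structure; the ML work is in learning the classifier and the guidance from data rather than hand-writing them.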

- Current state: Static CRPs (unread Word docs), decision trees that "dumb down jobs," 75% success rate on six-bearing events, enormous operator-to-operator variability, new hires with 2 yrs experience replacing 28-yr veterans.
- Target state: Live decision support replacing static CRPs. Consistent operator response. Daily management board tracking adherence. Teaching mode that builds operator capability.
- Value estimate: $1-3M/yr (reduced ton losses from variability + more consistent operations + faster operator training)
- Confidence: High — site leadership explicitly described the tool they want, buy-in strategy articulated
- Data readiness: Good — DCS process data exists, met tech measurements exist (paper + potentially DCS), G2 logs (pending SGS access)
- Systems: DCS (Foxboro IA), G2 fuzzy logic (SGS), operator logs, Pi historian
- Complexity: Medium
- Dependencies: TLD-23 (dispersant standardization provides the data model), TLD-54 (beaker vision provides standardized measurement input)

Comparison with other sites:

Closest analogy is the "best caster operator" concept from Middletown R&D discussions. Same pattern: capture best operator behavior, propagate through AI assistance. Mining-specific application but universal methodology.

Open questions:

- [ ] How frequently do six-bearing events occur? Per shift? Per day?
- [ ] What DCS variables correlate with successful vs. failed interventions?
- [ ] Can we instrument best-operator sessions to capture their decision patterns?
- [ ] How many control operators / met techs are there? What's the skill distribution?
- [x] What does Keith envision the tool looking like? → "A snapshot with the current parameters... I would conclude this. I'm pebble mill limited, I'm primary limited... here are the actions or checks that we should be chasing." (Keith, Mar 28)


TLD-33: HPGR Feed Rate Root Cause Analysis (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | new — Data analytics quick win |
| Status | identified ★ — Day 2 (Process & DCS Engineering) |
| Source | Day 2 transcript — unexplained feed rate drop, smoking gun not found |
| Field champion | Sean Halston (ECSM) + Todd Davis (lead process engineer) |

Problem statement:

The HPGR (High Pressure Grinding Rolls) was installed in April 2023, increasing concentrator productivity. In November 2025, feed rates dropped unexpectedly, and the plant struggled for the next 7-8 months. Strong indications point to primary pressure settings, but the team has not found the "smoking gun." Feed rates improved late 2025, then January 2026 was "a very poor feed rate month," and they are now recovering. With all available data, the engineering team cannot fully explain the pattern.

Key quotes:

"In April of 2023, we installed a new piece of equipment called the HPGR. Feed rates went up by a good amount. Then, in mid-November 2025, our feed rates dropped off. We struggled for the next seven months, eight months." "With all the data we have available, I can't say I found the smoking gun for it." "Feed rates, recovery, and operating time — the three main drivers in this mill."

Proposed solution: ML-based root cause analysis on historical DCS data from April 2023 to present. Correlate feed rate variations with all available process variables (HPGR settings, primary pressure, crusher output, pebble mill capacity, ore quality, seasonal factors). Identify hidden correlations and contributing factors that human analysis has missed.
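
A minimal sketch of the screening step, assuming an hourly-aggregated DCS export with illustrative tag names: rank which variables explain feed-rate variation, then hand the shortlist to the engineers.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Hypothetical hourly DCS aggregates since the HPGR install.
df = pd.read_csv("dcs_history_2023_on.csv")
X = df.drop(columns=["timestamp", "feed_rate_tph"])
y = df["feed_rate_tph"]

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=5, random_state=0)
ranked = pd.Series(imp.importances_mean, index=X.columns).sort_values(ascending=False)
print(ranked.head(10))  # candidate drivers for review — correlation, not proof
```

The output is a prioritized hypothesis list, not a root cause; the value is in surfacing interactions (e.g., primary pressure × ore type) that single-variable trending misses.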

- Current state: Feed rate mystery. Engineering team has hypotheses (primary pressure settings) but no definitive answer. Extensive DCS data available.
- Target state: Identified root cause(s) of feed rate variability. Actionable recommendations for sustained high feed rates.
- Value estimate: $1-3M/yr (sustained high feed rates = more tons through the bottleneck concentrator)
- Confidence: Medium (data exists, but no guarantee ML will find what humans couldn't)
- Data readiness: Good — DCS logs from 2023 to present, HPGR operational data, crushing data
- Systems: DCS (Foxboro IA), HPGR control system, crusher DCS
- Complexity: Quick Win (data analysis project, no infrastructure changes)
- Dependencies: DCS data access, HPGR operational logs

Comparison with other sites:

Same "data science investigation" pattern as Middletown cobble root cause analysis (PRJ-05). Different process, same approach — throw ML at a complex multi-variable problem where human analysis has plateaued.

Open questions:

- [ ] What DCS data granularity is available from the HPGR?
- [ ] What changed in November 2025? (Ore type, HPGR settings, seasonal?)
- [ ] What correlations has the engineering team already investigated?
- [ ] Is the HPGR data accessible alongside concentrator DCS data?


TLD-34: Pellet Calcium Control Automation (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | new — Quick Win, data exists |
| Status | identified ★ — Day 2 (Process & DCS Engineering) |
| Source | Day 2 transcript — process engineers called it "easy application test case" |
| Field champion | Process engineering team (Sean Halston, Todd Davis) |

Problem statement:

Pellet calcium control (flux addition to meet customer BF specifications) requires human adjustment every 6 hours based on lab samples. The team has "more than enough data" to support predictive or automated control. This was explicitly called out as an "easy application test case" for AI — low risk, clear data, well-understood process.

Key quotes:

"If you're looking for an easy application test case, calcium control. We're already pretty good at that. It takes a human every six hours making adjustments. We have more than enough data."

Proposed solution: Automated calcium control using DCS data + lab sample history + ML prediction of required flux adjustments. Reduce human intervention frequency while maintaining or improving pellet quality compliance.
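
A minimal sketch of the "learn what the adjusters do" approach, assuming a 6-hourly history of lab results and flux moves with illustrative column names.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical 6-hourly records: lab CaO result, target, flux rate, and the
# adjustment the operator actually made after seeing the result.
hist = pd.read_csv("calcium_history.csv")
hist["cao_dev"] = hist["lab_cao_pct"] - hist["target_cao_pct"]

X = hist[["cao_dev", "flux_rate"]]
model = LinearRegression().fit(X, hist["flux_adjustment"])

def suggest_adjustment(lab_cao: float, target_cao: float, flux_rate: float) -> float:
    """Replay the learned adjustment policy for a new lab result."""
    return float(model.predict([[lab_cao - target_cao, flux_rate]])[0])

print(suggest_adjustment(lab_cao=4.9, target_cao=4.6, flux_rate=12.0))
```

Because the team says the process is already well controlled, imitating the historical policy consistently is a reasonable first target; improving on it comes after.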

- Current state: Human-adjusted every 6 hours based on lab samples. Data exists. Process well-understood.
- Target state: Automated or AI-assisted calcium control. Reduced human adjustment frequency. Tighter quality compliance.
- Value estimate: $0.5-2M/yr (reduced quality variability + labor efficiency + potential energy savings from tighter control)
- Confidence: High (team explicitly identified this as easy test case, data confirmed available)
- Data readiness: Good — "more than enough data" per process engineering team
- Systems: Pellet plant DCS, lab analysis system, flux dosing controls
- Complexity: Quick Win (well-defined control problem, data exists, low risk)
- Dependencies: Lab data access, DCS integration

Comparison with other sites:

No direct parallel at steel mills. But same pattern as any "automate a regular human adjustment based on data that already exists." Low risk, high demonstration value. This could be the first proof-of-value at Tilden — easy win that builds credibility before tackling the concentrator optimization bundle.

Open questions:

- [ ] What calcium/flux parameters are tracked?
- [ ] What is the current quality compliance rate for calcium specs?
- [ ] How much variability exists between human adjusters?
- [ ] What DCS data feeds into the calcium control decision?


TLD-35: ELLIPS Inventory Master Data Cleanup ★★ (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-06 — Maintenance Workflow Digitization |
| Status | identified ★★ — Day 2 (Purchasing & Warehouse Logistics) |
| Source | Day 2 transcript — warehouse team described search/matching pain unprompted |
| Field champion | Warehouse team (names TBD) |

Problem statement:

ELLIPS inventory search functions are "terrible." Parts can't be found without exact proprietary stock codes. Descriptions have commas and semicolons in wrong places, breaking text searches. When parts arrive, warehouse staff spend 5 minutes to 2 hours per box trying to match items to existing ELLIPS records. 95% of the time the part exists somewhere in the system, but it's unfindable. When staff can't find it, they create a new entry — leading to proliferating duplicates and worsening data quality over time.

Key quotes:

"If you don't have a specific proprietary stock code number, it's very difficult to even find a part." "Part numbers don't match up with what's actually used. Descriptions are very bad. They have put commas and semicolons in places that don't belong." "Five minutes to two hours per box or per item that came in trying to find it in the system." "You get the list with 12 or 15 or 20 things that are the same thing, but spelled differently."

Proposed solution: Two-phase approach (same recipe validated at Middletown MDT-31):

1. Phase 1: AI-driven master data cleanup — semantic analysis of all ELLIPS inventory records. Identify duplicates, normalize descriptions, flag obsolete stock, consolidate part numbering.
2. Phase 2: Intelligent search interface — natural language search over the cleaned inventory. Staff describe what they need in plain language; AI matches to correct ELLIPS stock code. Prevents future data degradation by routing all new part creation through AI validation.
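
A minimal Phase-1 sketch using only the standard library: normalize descriptions, then flag likely duplicates by string similarity. The stock codes and descriptions are invented; a production pass would use embeddings plus blocking rather than a full pairwise scan.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Invented sample records echoing the headlamp/headlight duplicate pattern.
records = {
    "CLF-104455": "HEADLAMP, LED; 12V,,",
    "CLF-221871": "headlight LED 12 v",
    "CLF-309120": "BEARING, PILLOW BLOCK 2-7/16",
}

def norm(desc: str) -> str:
    """Strip the stray commas/semicolons and case noise before comparing."""
    return " ".join(desc.lower().replace(",", " ").replace(";", " ").split())

pairs = []
for (a, da), (b, db) in combinations(records.items(), 2):
    score = SequenceMatcher(None, norm(da), norm(db)).ratio()
    if score > 0.7:
        pairs.append((a, b, round(score, 2)))
print(pairs)  # candidate duplicates for a human-reviewed merge queue
```

Keeping a human merge queue in the loop matters here: the model proposes, warehouse staff dispose, and every confirmed merge improves the training set for the Phase-2 search.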

- Current state: ELLIPS with poor descriptions, duplicates, broken search. Manual matching process taking hours daily.
- Target state: Clean master data, natural language search, AI-mediated part creation preventing future degradation
- Value estimate: $0.5-2M/yr (time savings on part matching + inventory reduction from deduplication + enables TLD-13 procurement automation)
- Confidence: High (identical pattern validated at Middletown, proven recipe from other clients)
- Data readiness: Good — ELLIPS data exists, AI cleanup is a well-understood NLP problem
- Systems: ELLIPS (CMMS/inventory)
- Complexity: Quick Win (proven recipe, NLP/semantic matching is mature)
- Dependencies: ELLIPS data export access, stakeholder buy-in for master data changes

Day 2 evidence (Mine Operations):

- ★★ Ops side validates the pain — Erico: "I need a mop bucket. I got to find a mop bucket in there. I might spend 20 minutes, half hour looking for a mop bucket." Writing work orders: "I need to write a work order for the ready room to fix the wall outlet. How do I find the equipment reference number? I might take me two hours."
- ★★ Headlamp/headlight duplicate example — Jeff: "There's a bunch of different headlights or headlamps with few variations or one lowercase at some point... there's seven of them in there." Even ordering basic safety supplies is a frustration.
- ★ Workaround = search colleagues' past orders — "If I got to order something, I'll pull up all the orders you did, dump it in a spreadsheet, control F." Ops people tap maintenance because "they're more intimate with" ELLIPS.
- ★ Universal consensus — Brad Koski (ops mgr): "The Ellipse one would encompass all of us. That would be a huge win to just clean that up, make it more user-friendly. Very old system. Way outdated."

Comparison with other sites:

Direct parallel to MDT-31 (Inventory Rationalization & Master Data Cleanup) — Middletown has $104M inventory, 32K parts, 10% duplicates in Oracle. Tilden has the same problem in ELLIPS. Same recipe applies. Cleveland also has inventory cleanup potential in Tabware/Axiom. This is a cross-site pattern that scales to PRJ-06. ★ Mine Ops session extends validation beyond maintenance — operations people equally frustrated.

Day 2 evidence (Mine Maintenance):

- ★★ OEM catalog duplication = structural root cause — CLF creates its own stock codes for every part even though OEMs (Cat, Komatsu) already have complete catalogs. New haul truck = 5,000 parts that must be manually coded into ELLIPS. "We're duplicating all the efforts that suppliers have already done." This is continuous because equipment is constantly purchased.
- ★★ 10+ CLF stock codes for the same part — People can't find existing entries → create new ones → proliferating duplicates. John: "You need to make a Cliff-specific part number. And then there'll be 10 different ones for the part."
- ★ Parts go to old equipment that no longer exists — Inventory from "late 70s early 80s" still in warehouse. Manual cleanup found old stock that was never removed. "Keeps us from putting the new stuff in, because we don't want to add more dollars."
- ★ Quick reference parts books maintained separately — Mine maintenance keeps its own "common parts" books alongside ELLIPS and OEM catalogs. "Doing it twice again."
- ★ Cross-reference gap — No mapping between CLF stock codes and OEM part numbers. "Cat does [have cross-reference]. We don't."

Open questions:

- [ ] How many total parts in ELLIPS master data?
- [ ] Can we get a data export for analysis?
- [ ] Who owns ELLIPS data governance?
- [ ] Is there an existing project to clean up the data?
- [x] What drives data degradation? → OEM catalog duplication, unfindable entries → new duplicates created, inconsistent naming conventions.


TLD-36: Maintenance Parts & Budget Forecasting (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | new — links to TLD-15 (production scheduling) and TLD-13 (procurement) |
| Status | identified ★ — Day 2 (Purchasing & Warehouse Logistics) |
| Source | Day 2 transcript — maintenance/ops leadership described forecasting gaps |
| Field champion | JR (senior ops/maintenance leader), maintenance budget planners |

Problem statement:

Tilden's production model (a powerful Excel spreadsheet) drives annual/quarterly forecasting for tonnage and reagent usage. But maintenance parts spend is forecasted using straight-line averages — "those are the tough ones." For variable items like flotation wear parts, car wheels ($10K each), pump rebuilds ($50K), and dozens of incidental categories, the straight-line approach is inaccurate. Small items ("Walmart effect") add up to $300K/year but aren't tracked to specific categories. The team has 65+ item categories in some groupings, each too tedious to analyze individually. The result: maintenance budgets are unreliable, scopes get missed, and there's no connection between production intensity and maintenance spend.

Key quotes:

"Straight-line averages are the tough ones." "I've got 65 different items in there. I can do it, but I just gotta go through 65 times. It doesn't make sense." "You've got this small things that add up — $300,000 a year that you're spending. But we do not track specific items, because it's just way too much." "Is it feasible? If we increase tonnage, do we have enough capacity? Is it worth the cost?"

Proposed solution: AI-driven maintenance spend forecasting that connects production intensity (tonnage, run hours, mill configurations) to historical parts consumption patterns. Replaces straight-line averages with correlation-based predictions. Enables what-if scenarios: "If we run 8M tons next year, how much will wheels/pumps/flotation cost?"
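
A minimal sketch of the what-if mechanic: one lightweight model per spend category, fitted on annual history. Data layout, column names, and the category label are assumptions; a real model would validate that the correlation holds before trusting it.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical annual history: year, category, spend_usd, tonnes, run_hours.
hist = pd.read_csv("parts_spend_by_year.csv")

def forecast(category: str, tonnes: float, run_hours: float) -> float:
    """Production-correlated spend forecast for one category —
    the per-category loop the team now runs '65 times' by hand."""
    g = hist[hist["category"] == category]
    m = LinearRegression().fit(g[["tonnes", "run_hours"]], g["spend_usd"])
    return float(m.predict([[tonnes, run_hours]])[0])

# What-if: budget a category against an 8 Mt plan.
print(forecast("wheel_motors", tonnes=8_000_000, run_hours=52_000))
```

Because the same function applies to all 65+ categories, the tedium objection disappears: the analyst reviews outputs instead of rebuilding straight-line averages one category at a time.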

- Current state: Production model Excel → reagent forecasts work well. Maintenance parts = straight-line averages. 65+ item categories analyzed manually if at all. $300K/yr "Walmart effect" untracked.
- Target state: Production-correlated maintenance parts forecasting. Automatic budget adjustment when tonnage targets change. Item-level tracking for high-cost categories.
- Value estimate: $0.5-2M/yr (better budget accuracy → reduced emergency procurement, less excess inventory, fewer missed scopes)
- Confidence: Med-High (data exists in ELLIPS + production model, but correlation quality unknown)
- Data readiness: Partial — ELLIPS has parts consumption history, production model has tonnage/run-hour data, but not connected
- Systems: ELLIPS (CMMS/parts history), production model (Excel), procurement
- Complexity: Medium (correlation analysis + dashboard, not real-time control)
- Dependencies: TLD-35 (clean master data enables better analysis), TLD-13 (procurement automation benefits from better forecasts)

Comparison with other sites:

No direct parallel at steel mills — they have similar budget challenges but this wasn't surfaced as a distinct initiative. At Tilden the connection between production intensity and maintenance spend is clearer because the mine has distinct operating modes (full production vs. reduced, different mill configurations). Could become a template for cross-site maintenance budget intelligence.

Day 2 evidence (Mine Maintenance):

- ★★ Comprehensive Excel fleet models exist — Pete's team manually builds fleet lifecycle models in Excel. Balance: resources, fleet maintenance, shop space, parts availability (only 3 wheel motors available from supplier at a time → 4-week rebuild), supplier lead times, vendor pricing. Impressive manual work but brittle and time-consuming.
- ★★ Budget season = massive manual vendor outreach — Pete: "It shouldn't be me that has to say in August I need all vendors to give me updated pricing for 2026. That's too much information for any one person to take in at the end of the day."
- ★ Wheel motor rebuild cost trajectory — 4th rebuild expected to cost $100K+ more than previous rebuilds. Rebuild reports from vendors track parts used each cycle. AI could predict next rebuild cost based on trend.
- ★ Shovel rope intervals adjust dynamically — "Shovel ropes: that number has changed dramatically, especially with which supplier provides them. Within a day of changing them when they fail." Team actively adjusts intervals based on experience — data for training.
- ★ Budget pressure = more risk acceptance — "We've been pressed into being more aggressive. Accepting more risk." Wheel motor target pushed from 22K to 35K hours. The budget model needs to quantify accepted risk, not just cost.

Open questions:

- [ ] Can we get access to the production model Excel?
- [ ] How far back does ELLIPS parts consumption history go?
- [ ] What are the top 10 highest-variance maintenance spend categories?
- [ ] Who owns maintenance budgeting? Same person as JR?
- [x] How are fleet lifecycle decisions made? → Excel models built by Pete's team. Balance resources, shop space, parts availability, vendor pricing. Annual process. Impressive but manual.


TLD-37: Railroad Asset Maintenance Analytics (NEW)

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | PRJ-03 — Predictive Maintenance Platform (reframed for rail) |
| Status | identified — Day 2 (Purchasing & Warehouse Logistics) |
| Source | Day 2 transcript — railroad maintenance team described data gaps |
| Field champion | Railroad maintenance team (names TBD) |

Problem statement:

Tilden operates a 950-car rail fleet, one main rail line, and multiple locomotives. Rail maintenance decisions are driven by two federally mandated inspection tools: a Geo car (annual, measures rail geometry and loading forces) and an X-ray car (inspects rail integrity). Visual inspections also occur. However, ELLIPS data for railroad maintenance is "very limited." Rail breaks occur ~1/week during spring/fall temperature transitions, taking 4-5 hours each to repair. No predictive data exists for car wheel wear, rail wear correlation to traffic patterns, or car lifecycle management. The team explicitly acknowledged "I don't think the data is there yet."

Key quotes:

"Understanding of service — how many trips it's made, how many miles it's only done, how many tons it has on it. We don't have that information." "I don't think our ELLIPS data is awesome there to be able to forecast." "We have a Geo car that comes annually. You could take that data year over year and try to correlate it." "I think that's a good AI project, but I don't know that we have the data yet."

Proposed solution:

1. Phase 1: Aggregate existing inspection data (Geo car, X-ray car, visual inspections) into a structured dataset. Correlate with traffic patterns, weather, and maintenance history.
2. Phase 2: Build degradation models for rail sections and car fleet. Predict rail break risk and optimize maintenance/replacement scheduling.
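
A minimal sketch of the year-over-year correlation the team themselves suggested, assuming per-segment Geo car readings and a break log; all column names and the geometry metric are placeholders.

```python
import pandas as pd

# Hypothetical exports: annual Geo car readings per rail segment, and
# historical break counts per segment.
geo = pd.read_csv("geo_car.csv")        # year, segment_id, gauge_dev_mm
breaks = pd.read_csv("rail_breaks.csv") # segment_id, break_count

wide = geo.pivot(index="segment_id", columns="year", values="gauge_dev_mm")
wide["delta"] = wide[2025] - wide[2024]   # one year of geometry degradation

joined = wide.join(breaks.set_index("segment_id"))
print(joined[["delta", "break_count"]].corr())  # is degradation predictive?
```

If this correlation is weak, the team's "data isn't there yet" self-assessment is confirmed cheaply; if it is strong, Phase 2 has a justified starting feature.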

- Current state: Geo car + X-ray car annual data (years of history), visual inspections, limited ELLIPS railroad maintenance data. 950 car fleet with no lifecycle tracking. Rail breaks ~1/week in transition seasons (4-5 hrs each).
- Target state: Integrated rail inspection database, degradation prediction models, optimized rail/car maintenance scheduling
- Value estimate: $0.3-1M/yr (reduced rail break downtime + optimized car fleet maintenance + better planning)
- Confidence: Low-Medium (team self-assessed data readiness as poor for AI; Geo car/X-ray data may compensate)
- Data readiness: Mixed — Geo car and X-ray data exist (years of history), but ELLIPS railroad data is limited. No car lifecycle tracking.
- Systems: Geo car data, X-ray car data, ELLIPS (limited), rail scheduling
- Complexity: Medium (data integration challenge, limited baseline data for cars)
- Dependencies: Geo car / X-ray car data access, ELLIPS improvement (TLD-01)

Comparison with other sites:

No direct parallel at steel mills. Rail/logistics infrastructure is mining-specific. However, the pattern of "federally mandated inspection data that's collected but never analyzed for prediction" exists elsewhere (e.g., safety inspection data). Lower priority than concentrator and shipping initiatives, but worth logging as the team was thinking about it.

Open questions:

- [ ] How many years of Geo car data are available?
- [ ] X-ray car data format — is it digital or film?
- [ ] Can we correlate rail break locations with Geo car measurements?
- [ ] Is there any traffic counter or tonnage tracking on the main line?


TLD-38: HPGR Knowledge Base + PdM Pilot (NEW ★★★)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-06 + PRJ-03 — Maintenance Workflow + PdM (combined pilot) |
| Status | identified ★★★ — Day 2 Plant Maintenance (team-nominated) |
| Source | Day 2 Plant Maintenance — maintenance team consensus |
| Field champion | Adam Bingham (hybrid maintenance, AI early adopter) + George Harmon (reliability eng) |

Problem statement:

The High Pressure Grinding Rolls (HPGR) is a major new piece of equipment at Tilden (installed April 2023). It has 10+ manuals totaling 1,200+ pages that nobody has read, and European parts that are unfamiliar to the crew. Troubleshooting "always takes a couple days because nobody knows anything about it because it's new." The HPGR is covered in sensors generating rich operational data, but this data is not being leveraged for prediction. The maintenance team itself nominated the HPGR as the ideal pilot scope.

Key quotes:

"There's literally hundreds of drawings... 10 different manuals probably, 1,200 pages of information that no one here has read." "It's got a bunch of parts, at least an electrical standpoint, that none of us have seen. It's like all European parts." "Guys go up to it, don't know about it, solving a puzzle." "Since it's so new, to be easy to drum up all the documentation... looking at something that's 40 years old and trying to find all the manuals, right?" "It's covered in sensors, right?"

Proposed solution: Two-for-one pilot combining TLD-14 (Knowledge Base) and TLD-03 (Fixed Plant PdM):

1. Knowledge base: Ingest all HPGR manuals, electrical/hydraulic schematics, OEM documentation into a searchable AI knowledge base accessible via Copilot. Maintenance team asks questions in plain language, gets troubleshooting guidance with manual references.
2. PdM: Connect to HPGR sensor data to build initial condition monitoring / anomaly detection models. Since the equipment is new and poorly understood, PdM provides early warning before surprise failures.
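
A minimal retrieval sketch standing in for the knowledge-base component. The page texts are invented placeholders; the production version would ingest the real OEM PDFs and use embeddings behind Copilot rather than TF-IDF.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder "manual pages"; real ingestion would chunk the OEM PDFs.
pages = [
    "Hydraulic system pressure fault F-204: check accumulator precharge.",
    "Roll surface stud replacement procedure and torque specification.",
    "Feed chute level sensor calibration and alarm thresholds.",
]
vec = TfidfVectorizer().fit(pages)
page_vecs = vec.transform(pages)

def ask(question: str, k: int = 2):
    """Return the k manual pages most similar to a plain-language question."""
    sims = cosine_similarity(vec.transform([question]), page_vecs)[0]
    return sorted(zip(sims, pages), reverse=True)[:k]

for score, page in ask("hydraulic pressure alarm on HPGR"):
    print(f"{score:.2f}  {page}")
```

The essential property carries over regardless of the retrieval engine: every answer cites a specific manual page, so the crew can verify guidance instead of trusting a black box.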

- Current state: 10+ manuals (1,200+ pages) unread. European parts unfamiliar. Rich sensor data uncollected/unanalyzed. Troubleshooting takes days. No baseline understanding of failure modes.
- Target state: Searchable knowledge base for all HPGR documentation. Copilot troubleshooting assistance. Sensor-based anomaly detection. Faster troubleshooting. Reduced unplanned downtime.
- Value estimate: $0.5-2M/yr (reduced HPGR downtime + faster troubleshooting + knowledge preservation for new equipment)
- Confidence: High (team-nominated, documentation exists and is digital, sensors confirmed, Adam Bingham = ready champion)
- Data readiness: Good — manuals are digital, equipment is sensor-rich. ELLIPS work order history for HPGR is building. DCS/sensor data access TBD.
- Systems: HPGR OEM sensor system, ELLIPS, Microsoft Copilot, SharePoint (for manual storage), DCS
- Complexity: Quick Win (for knowledge base component), Medium (for PdM component)
- Dependencies: HPGR sensor data access, SharePoint metadata tagging, network connectivity

Comparison with other sites:

This is the best-scoped pilot candidate we've found at any site. It combines knowledge base + PdM on a single piece of well-documented new equipment that the team cares about. Cleveland's bag house PdM is more complex (old, less documented). Middletown's Virtual SME has broader scope (harder to pilot). The HPGR has everything: digital manuals, rich sensors, clear pain, motivated champion (Adam Bingham), and team buy-in.

★ RECOMMENDATION: This should be the lead pilot project at Tilden. It demonstrates value on both the knowledge base and PdM dimensions, has a clear champion, and was nominated by the maintenance team themselves. Success here validates the approach for expansion to other equipment.

Open questions:

- [ ] What specific sensor data does the HPGR capture? (types, frequency, storage)
- [ ] Is HPGR sensor data accessible via DCS or separate OEM system?
- [ ] What is the current HPGR availability rate?
- [ ] How many HPGR-related work orders exist in ELLIPS since April 2023?
- [ ] Who is the OEM? What format are the manuals in?


TLD-39: Major Repair Schedule Optimization (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | new — Mining-specific |
| Status | identified ★★ — Day 2 Plant Maintenance |
| Source | Day 2 Plant Maintenance — Gary and Steve described scheduling pain |
| Field champion | Gary (area maintenance) + Steve (referenced) |

Problem statement:

Tilden runs multiple major line repairs simultaneously, each costing millions of dollars. Scheduling these repairs requires manually extracting data from ELLIPS into Excel, merging into Microsoft Project, and recalculating daily as jobs go over or under on time. Senior supervisors update schedules every day trying to get end dates. Critical chain project management means one stream getting ahead/behind affects the overall repair. Resource conflicts (cranes, crafts, shifts, contractors) must be manually juggled. The process is "cumbersome" and consumes significant supervisor time.

Key quotes:

"Our senior supervisors are updating the line repairs every day, trying to get an end date. If you could make that quick, so they're analyzing data not inputting data — that would be a huge win." "On a major repair, we have albums of work orders going in and all the people making changes. In order to capture any changes, you've got to go extract that data back out of ELLIPS, merge it into the project." "If AI can do that for you — a guy can just type in these are my seven jobs and this is where we're at — it updates all that for you. That would be a godsend." "$1 million per mill shutdown."

Proposed solution: AI-assisted major repair scheduling — automated data extraction from ELLIPS, real-time schedule updates as work orders change, critical chain recalculation, resource conflict detection (cranes, shifts, crafts), and automated reporting. Integration with ELLIPS work orders and Microsoft Project.
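
A minimal sketch of the daily recalculation at the core of this: given task durations and dependencies (as would be pulled from ELLIPS work orders), recompute earliest finish times. The tasks, durations, and dependency graph are illustrative.

```python
from graphlib import TopologicalSorter

# Illustrative mill-repair work streams; hours are placeholders that would
# be refreshed from ELLIPS actuals each day.
duration = {"drain": 8, "open_mill": 12, "liner_swap": 40, "regrout": 24,
            "close_mill": 10, "commission": 6}
deps = {"open_mill": {"drain"}, "liner_swap": {"open_mill"},
        "regrout": {"open_mill"}, "close_mill": {"liner_swap", "regrout"},
        "commission": {"close_mill"}}

# Forward pass: each task starts when its last predecessor finishes.
finish: dict[str, int] = {}
for task in TopologicalSorter(deps).static_order():
    start = max((finish[d] for d in deps.get(task, ())), default=0)
    finish[task] = start + duration[task]

print(max(finish.values()), "hours to end date")  # re-run as actuals change
```

Resource conflict detection layers on top of this pass: once every task has a computed window, overlapping windows that claim the same crane or craft are a simple intersection check.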

- Current state: Manual ELLIPS → Excel → MS Project pipeline. Daily manual updates by senior supervisors. Multi-million dollar repairs with cascading dependencies. No automated resource conflict detection.
- Target state: Auto-updating repair schedules from ELLIPS data. Critical chain recalculation. Resource optimization (cranes, crafts, shifts). What-if scenario capability.
- Value estimate: $1-3M/yr (supervisor time savings + better repair coordination + reduced downtime from scheduling conflicts)
- Confidence: Med-High (pain clearly articulated, data exists in ELLIPS, well-defined workflow to automate)
- Data readiness: Good — ELLIPS work orders exist, MS Project templates exist, resource data exists
- Systems: ELLIPS (CMMS), Microsoft Project, resource scheduling tools
- Complexity: Medium (integration + optimization logic, but existing tools provide foundation)
- Dependencies: ELLIPS data access, MS Project integration

Comparison with other sites:

Similar scheduling pain exists at steel sites (outage planning) but was not articulated this clearly. Tilden's "$1M per mill shutdown" gives a clear cost anchor. Could scale to Burns Harbor BF reline planning ($700M+).

Day 2 evidence (Mine Maintenance):

- ★★ Seasonal road restrictions = hard scheduling constraint — "Anything from mid-March through June — we're not going to get the bucket because you can't drive it in. Seasonal road restrictions." All large parts/equipment must arrive before March or after June. This creates a hard window that major repair scheduling must account for.
- ★ Wrong bucket ordered example — A loader bucket was ordered, multiple shifts invested in removal and installation, only to discover it was the wrong fitment. "Imagine — a couple shifts into it then you don't want to be the guy who masked the order up." AI validation of part fitment could prevent costly errors.
- ★ Similar to Cleveland BF part number error — Eric confirmed: "a part number that didn't get updated, assumed the right chemistry, part failed — down millions of dollars."

Open questions:

- [ ] How many major repairs per year?
- [ ] What is the average major repair duration and cost?
- [ ] Is Microsoft Project the standard or are other tools used?
- [ ] What resource conflicts are most common? (cranes, specific crafts)
- [x] Are there seasonal constraints on parts delivery? → Yes — mid-March through June road restrictions. All big parts must be staged before March or after June.


TLD-40: Maintenance Resource & Workforce Scheduling (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | new — Operations |
| Status | identified ★ — Day 2 Plant Maintenance |
| Source | Day 2 Plant Maintenance — Gary described workforce scheduling challenges |
| Field champion | Gary (area maintenance) |

Problem statement:

Beyond major repairs, day-to-day maintenance workforce scheduling requires manually juggling shift assignments, overtime allocation, contractor coordination, and absence patterns. "This guy usually dumps on a Friday" — human patterns affect planning but aren't captured in any system. Union contract constraints add complexity. ELLIPS has capability for resource scheduling but it's "manually intensive" and underutilized.

Proposed solution: AI-assisted maintenance workforce scheduling — automated crew assignment based on skills, shift patterns, work order priorities, and historical absence patterns. Overtime optimization within union contract constraints.
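
A minimal sketch of one constraint named in the Day 2 evidence below — spreading PMs so two units of the same type are never in the shop the same week. Fleet data is invented; a production scheduler would add shop capacity, crew skills, and the operational windows as further constraints.

```python
from collections import defaultdict

# Invented PM queue: (unit, equipment type) pairs due for service.
due = [("truck_07", "haul_truck"), ("truck_12", "haul_truck"),
       ("dozer_03", "bulldozer"), ("truck_19", "haul_truck"),
       ("dozer_05", "bulldozer")]

shop: dict[int, set[str]] = defaultdict(set)  # week -> types already booked
plan: dict[str, int] = {}
for unit, etype in due:
    week = 1
    while etype in shop[week]:  # greedy: earliest week without a type clash
        week += 1
    shop[week].add(etype)
    plan[unit] = week

print(plan)  # no two haul trucks (or dozers) ever share a shop week
```

Greedy assignment is deliberately simple; it shows the constraint shape, and a solver-based version would handle the union-rule and availability-window constraints the same way, as hard constraints rather than scheduler folklore.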

- Current state: Manual scheduling. Human pattern knowledge ("this guy dumps on Fridays"). ELLIPS has resource scheduling capabilities but too cumbersome to use.
- Target state: Optimized daily maintenance crew assignments. Absence pattern prediction. Overtime optimization. Contractor coordination.
- Value estimate: $0.5-2M/yr (reduced maintenance inefficiency from crew idle time, better resource utilization)
- Confidence: Medium (pain described but not quantified; depends on data availability for patterns)
- Data readiness: Partial — ELLIPS has some resource data, HR/attendance data TBD
- Systems: ELLIPS (CMMS), HR/attendance systems, union contract rules
- Complexity: Medium (union constraints, HR data sensitivity)
- Dependencies: HR data access (privacy concerns flagged in transcript), union agreement

Day 2 evidence (Mine Maintenance):

- ★★ Comprehensive resource balancing described — Pete's team balances: workforce (skills, shifts, overtime), shop space (how many machines fit), parts availability (only 3 wheel motors from supplier at a time, 4-week rebuild), contractor availability, and fleet production needs. "We try to use Excel right now to put that in there. It's not perfect, but it gets us a model that's consistent."
- ★ Operations constraints on scheduling — "You're not taking the orange shovel when the crusher's running. Wait for the rock shift. You got to know that a week in advance." Operational availability windows must be integrated into maintenance scheduling.
- ★ George confirmed: 4-hour Monday morning scheduling process — Enter equipment hours, cross-reference with ELLIPS PMs, build next week's schedule. "Could be automated."
- ★ Equipment availability windows for PMs — "Trying to get eight machines that need to be in for a week. You don't want two trucks in at the same time or two bulldozers." Spread scheduling across fleet — a constraint optimization problem.

Open questions:
- [ ] What attendance/absence tracking system exists?
- [ ] Is maintenance inefficiency (idle time) tracked?
- [ ] What union contract constraints affect scheduling?
- [ ] Is HR data accessible for AI use? (privacy concerns raised)
- [x] How are weekly schedules built? → 4 hours Monday morning. Manual hours entry from operator sheets → ELLIPS → spreadsheet → resource balancing. George Beelon leads.


TLD-41: Deferred Maintenance Risk Quantification (NEW)

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new — Strategic |
| Status | identified ★★ — Day 2 Plant Maintenance |
| Source | Day 2 Plant Maintenance — Gary's "pay now or pay later" framework |
| Field champion | Gary (area maintenance) + George Harmon (reliability eng) |

Problem statement:

The mine faces constant risk management decisions: defer maintenance to save costs now, or invest now to prevent larger failures later. Cause and effect can be years apart — "cause in '26, effect in '27 or '28." "Maintenance amnesia" means the organization forgets why decisions were made. Human bias (CYA — "I don't want my equipment to be the one that fails") distorts risk assessments. The mine vs. plant resource allocation tug-of-war ("does the dollar go to the shovel in the pit or the bull gear in the plant?") needs data-driven resolution.

Key quotes:

"Pay now or pay later. Paying later is almost always more expensive." "Cause and effect aren't always closely related in time. Cause can be over a year's period." "AI is emotionless. We as humans tend to be very emotionally driven. Everyone has a little bit of CYA." "Using AI to take the emotion part away and really dive into what the site or the corporation's priorities really should be. Where does that money go?" "We are continuously risk managing. We're in a cost-per-ton game."

Proposed solution: AI-driven deferred maintenance risk quantification — modeling the cost trajectory of deferred work (current PM interval → missed PM → cascading failure → emergency repair), incorporating equipment criticality, production impact, and safety risk. Emotionless risk scoring across mine and plant assets to support budget allocation decisions.
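The core of "pay now or pay later" is an expected discounted cost comparison. A minimal sketch with illustrative numbers; the per-month failure probability and failure cost are assumptions that would be fitted from ELLIPS and Oracle history per asset class:

```python
def deferral_cost(pm_cost, failure_cost, p_fail_per_month,
                  months_deferred, discount_rate=0.10):
    """Expected cost of deferring a PM vs. doing it now.
    p_fail_per_month: conditional failure probability for each month
    the PM is deferred (estimated from work order history)."""
    monthly_rate = discount_rate / 12
    expected = 0.0
    p_survive = 1.0
    for m in range(1, months_deferred + 1):
        p_fail_now = p_survive * p_fail_per_month
        expected += p_fail_now * failure_cost / (1 + monthly_rate) ** m
        p_survive *= 1 - p_fail_per_month
    # if the asset survives the deferral window, the PM still gets done
    expected += p_survive * pm_cost / (1 + monthly_rate) ** months_deferred
    return expected

pay_now = 50_000                                   # PM cost today (assumed)
pay_later = deferral_cost(50_000, 400_000, 0.03, 12)
print(f"pay now: ${pay_now:,.0f}, expected cost of 12-mo deferral: ${pay_later:,.0f}")
```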

Current state: Risk management decisions based on experience, emotion, and political dynamics (mine vs. plant). PM intervals include CYA buffer (conservative). No quantitative model for "pay later" cost. Cause-and-effect amnesia.
Target state: Quantified risk model for every major asset — cost of deferral vs. cost of maintenance. Budget allocation recommendations. Historical cause-and-effect tracking. CYA-free PM interval optimization.
Value estimate: $1-5M/yr (better capital allocation + fewer catastrophic failures from deferred maintenance + optimized PM intervals)
Confidence: Medium (conceptually powerful but requires significant data integration and modeling)
Data readiness: Partial — ELLIPS has work order history, cost data in Oracle/financial systems, but connecting cause-and-effect across years is a modeling challenge
Systems: ELLIPS (CMMS), Oracle/financial, DCS (condition data), production data
Complexity: Medium-High (multi-system integration, long-horizon modeling, organizational change)
Dependencies: TLD-01 (data integration), TLD-03 (condition monitoring data), financial system access

Comparison with other sites:

Gary articulated this more clearly than any stakeholder at any site. Cleveland maintenance has the same problem (70/30 reactive) but nobody framed it as "pay now or pay later with years of delay." This is an H2 strategic initiative that could transform maintenance budget conversations across the entire CLF footprint.

Open questions:
- [ ] How are maintenance budgets currently set? (annual, quarterly, rolling?)
- [ ] What financial systems track maintenance costs per asset?
- [ ] How far back does cost history go in ELLIPS + Oracle?
- [ ] Is there a current risk register or scoring framework?


TLD-42: Cross-Asset Failure Pattern Search (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-01 — Ops-Maintenance Data Integration |
| Status | identified ★★ — Day 2 Plant Maintenance |
| Source | Day 2 Plant Maintenance — George Harmon's failure analysis workflow |
| Field champion | George Harmon (reliability engineering) |

Problem statement:

When a piece of equipment fails, George Harmon and his team spend "an hour or two looking at the work order history on that particular node." But they also need to check if similar equipment has experienced the same failure pattern — and searching across similar equipment in ELLIPS is extremely difficult. The keyword search is "very limited." Drawing search in the 60,000-print database takes hours. Finding patterns across equipment types is the missing capability for proactive failure prevention.

Key quotes:

"It can take an hour or two looking at the work order history on that particular node. That's without even looking at any of the similar equipment to see if similar equipment experiences the same thing." "Searching ELLIPS — yeah, it's pretty awkward." "We spend hours a week looking for drawings. Equipment drawings or structural drawings."

Proposed solution: AI-powered search across ELLIPS work order history and drawing database. Enable natural language queries like "show me all bearing failures on AG mills in the last 3 years" or "find similar failure patterns across pebble mills." Cross-reference work orders with DCS trend data to identify leading indicators.
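The retrieval shape can be prototyped cheaply. A minimal sketch using TF-IDF similarity over hypothetical work order text (a production version would use semantic embeddings and join in drawings and DCS trends):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-ins for ELLIPS work order history: (equipment, free text)
work_orders = [
    ("AG Mill 1", "replaced feed end trunnion bearing, high vibration"),
    ("AG Mill 3", "trunnion bearing overheating, lube flow low"),
    ("Pebble Mill 2", "pinion gear tooth wear found on inspection"),
    ("Conveyor 7", "belt splice failure near head pulley"),
]

vec = TfidfVectorizer(stop_words="english")
matrix = vec.fit_transform([text for _, text in work_orders])

def search(query, top_k=3):
    """Rank work orders by cosine similarity to a free-text query."""
    scores = cosine_similarity(vec.transform([query]), matrix)[0]
    ranked = sorted(zip(scores, work_orders), key=lambda p: -p[0])
    return [(round(s, 2), eq, txt) for s, (eq, txt) in ranked[:top_k] if s > 0]

print(search("bearing failures on AG mills"))
```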

Current state: Manual ELLIPS search — 1-2 hrs per equipment node, limited keyword search, no cross-equipment pattern detection. 60K drawings in database with awkward search. George spends hours/week piecing together failure patterns manually.
Target state: Natural language search across ELLIPS history + drawings. Cross-asset failure pattern detection. Automated pre-failure indicator identification from DCS data.
Value estimate: $0.5-2M/yr (faster failure analysis + proactive failure prevention from pattern detection)
Confidence: High (pain clearly quantified, data exists in ELLIPS and drawing database, well-scoped)
Data readiness: Good — ELLIPS work order history is well-maintained, drawing database exists (60K prints scanned)
Systems: ELLIPS (CMMS), drawing database, DCS, Pi historian
Complexity: Quick Win (for search enhancement), Medium (for cross-system pattern detection)
Dependencies: ELLIPS data access, drawing database format

Comparison with other sites:

Same pain at every site with different CMMS. George's specific quantification ("hour or two per equipment, hours per week on drawings") makes this the most clearly scoped version of the maintenance search problem. Quick win potential: even basic RAG over ELLIPS + drawings would save George's team significant time.

Open questions:
- [ ] What database format is the drawing archive in?
- [ ] How many years of ELLIPS work order history?
- [ ] Is there a standardized equipment taxonomy in ELLIPS?
- [ ] How many equipment types have 5+ similar units (enabling pattern detection)?


TLD-43: Maintenance Training Content Generation (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | Virtual SME — aligns with TLD-14 |
| Status | identified ★ — Day 2 Plant Maintenance |
| Source | Day 2 Plant Maintenance — room discussion on training |
| Field champion | TBD (training/HR) |

Problem statement:

Maintenance is a "visual, hands-on type learning environment" — yet training materials are mostly written documents that nobody reads. With high turnover and new employees, the gap between documentation and effective training is widening. AI can generate visual training content (step-by-step procedures, video-like walkthroughs, interactive guides) from existing manuals and SOPs.

Key quotes:

"Being able to show them the procedure instead of reading through a two-page document is going to be huge." "Content creation for training material — that could be pretty big too, especially with the turnover."

Proposed solution: AI-generated visual training content from existing manuals and SOPs. Step-by-step procedure guides with schematics. Interactive Q&A training modules. Potentially AR/VR guided maintenance (Caterpillar model discussed — one expert guiding field tech via goggles).

Current state: Written procedures, 300-page manuals nobody reads, scattered training documentation.
Target state: Visual, interactive training content auto-generated from existing documentation. Procedure walkthroughs. AR/VR remote support (aspirational).
Value estimate: $0.3-1M/yr (faster onboarding + reduced training incidents + knowledge retention)
Confidence: Medium (concept well-received, but execution complexity for visual content generation is higher than text-based knowledge base)
Data readiness: Medium — manuals and SOPs exist, but converting to visual training requires additional processing
Systems: Training documentation, equipment manuals, Microsoft Copilot, potential AR/VR platform
Complexity: Medium (text-based = quick, visual/AR = longer term)
Dependencies: TLD-14 (knowledge base provides content foundation), TLD-38 (HPGR pilot includes training dimension)

Open questions:
- [ ] What training programs currently exist?
- [ ] What is the average onboarding time for new maintenance hires?
- [ ] Is there any video documentation of maintenance procedures?
- [ ] What mobile devices / AR hardware is available?


TLD-44: Employee Onboarding Automation (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | new — IT process |
| Status | identified — Day 2 Plant Maintenance |
| Source | Day 2 Plant Maintenance — supervisor described onboarding friction |
| Field champion | TBD |

Problem statement:

Employee onboarding requires multiple manual tickets across different systems: ServiceNow for IT access, phone/laptop/monitor provisioning, building access, ELLIPS permissions, etc. A supervisor described wanting to "go to a chatbot and say I've got employee number X starting on date Y, they need same access as this person, a phone, laptop, two monitors — supply all equipment, have it set up by date Z." Currently requires multiple separate tickets and manual follow-up.

Proposed solution: AI-powered onboarding assistant that takes a single natural language request and generates all necessary tickets across systems (ServiceNow, IT provisioning, facilities, ELLIPS access). Tracks completion and alerts supervisor when everything is ready.
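A sketch of the ticket fan-out, assuming the standard ServiceNow Table API; the instance URL, credentials, table choice, and payload fields are placeholders for whatever Tilden's ServiceNow configuration actually defines:

```python
import requests

SN_INSTANCE = "https://example.service-now.com"   # placeholder instance
AUTH = ("svc_account", "********")                # placeholder credentials

def open_ticket(table, payload):
    """Create one record via the ServiceNow Table API."""
    r = requests.post(f"{SN_INSTANCE}/api/now/table/{table}",
                      auth=AUTH, json=payload, timeout=30)
    r.raise_for_status()
    return r.json()["result"]["sys_id"]

def onboard(employee, start_date, mirror_user):
    """Fan one onboarding request out into the individual provisioning
    tickets that are created by hand today. Tables and fields here are
    illustrative; the real catalog items come from site config."""
    tickets = [
        ("sc_request", {"short_description": f"IT access for {employee} (mirror {mirror_user})"}),
        ("sc_request", {"short_description": f"Laptop + phone + 2 monitors for {employee} by {start_date}"}),
        ("sc_request", {"short_description": f"Building access for {employee}"}),
        ("sc_request", {"short_description": f"ELLIPS permissions for {employee} (mirror {mirror_user})"}),
    ]
    return [open_ticket(table, payload) for table, payload in tickets]
```

The natural language front end would parse the supervisor's request into the `onboard(...)` arguments; the completion-tracking loop polls the same API for ticket state.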

Current state: Multiple manual tickets across systems. No automation. Supervisor must track each separately.
Target state: Single-request onboarding that triggers all provisioning tasks automatically.
Value estimate: $0.1-0.5M/yr (supervisor time + faster onboarding + reduced missed access/equipment)
Confidence: Medium (concept clear, ServiceNow integration exists, but cross-system automation requires IT cooperation)
Data readiness: N/A (process automation, not data analytics)
Systems: ServiceNow, IT provisioning, ELLIPS, facilities management
Complexity: Quick Win (if ServiceNow APIs exist)
Dependencies: IT cooperation, ServiceNow configuration

Comparison with other sites:

Generic IT process improvement — same pain exists everywhere. Low strategic value relative to operational initiatives but high visibility for employee satisfaction.

Open questions:
- [ ] Is ServiceNow the ITSM platform at Tilden?
- [ ] How many new hires per year?
- [ ] What are all the systems that need provisioning for a new maintenance hire?


TLD-45: Modular Dispatch ↔ ELLIPS Automated Integration (NEW ★★)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-01 — Ops-Maintenance Data Integration |
| Status | identified ★★ — Day 2 Mine Maintenance |
| Source | Day 2 Mine Maintenance — maintenance team described disconnected systems |
| Field champion | Pete Austin (section mgr) + George Beelon (maintenance scheduling) + Chase Lincoln (reliability eng) |

Problem statement:

Modular dispatch system has rich equipment data — operating hours, keys on/off, loaded/unloaded status, GPS routes, cycle times, fuel events — that ELLIPS needs for accurate PM scheduling. Today these systems are completely disconnected. Equipment hours are manually transcribed from handwritten operator sheets into ELLIPS, consuming 4 hours every Monday morning. ELLIPS then makes flawed predictions because it lacks real operating context (e.g., pushing PMs out when a machine was simply down, not actually running less).

Key quotes:

"Talking with Molly, there isn't really a way to put that into ELLIPS right now. The information's there. They just don't know how to get it into ELLIPS." "The dispatch system has all that. But trying to get that to interface with ELLIPS properly is where the problem comes in." "About 4 hours entering those hours and taking them out, putting in another spreadsheet. And then trying to get a schedule ready." "ELLIPS is not smart enough to realize that it was broke down for two days. It just looks at that as a machine wasn't used."

Proposed solution: Automated data pipeline from Modular dispatch to ELLIPS. Phase 1: Operating hours (keys on/off, excluding idle time) automatically flow into ELLIPS meter readings, eliminating Monday morning manual entry. Phase 2: Richer data (loaded vs unloaded hours, route/grade, fuel consumption) feeds into a data layer that augments ELLIPS PM scheduling with actual duty context. This enables duty-cycle aware maintenance (see TLD-46).
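Phase 1 is mostly a transform job. A minimal pandas sketch, with invented column names standing in for the Modular export, of turning key-on/key-off events into the weekly meter deltas ELLIPS needs:

```python
import pandas as pd

# Hypothetical Modular export: one row per key-on/key-off event pair
events = pd.DataFrame({
    "asset":   ["TRK-101", "TRK-101", "TRK-102"],
    "key_on":  pd.to_datetime(["2026-03-09 07:02", "2026-03-09 19:10", "2026-03-09 07:15"]),
    "key_off": pd.to_datetime(["2026-03-09 17:48", "2026-03-10 05:55", "2026-03-09 12:01"]),
    "idle_hrs": [1.2, 0.8, 0.5],   # idle time within the key-on window
})

# Running hours = key-on window minus idle, per the "excluding idle time" rule
events["run_hrs"] = (
    (events["key_off"] - events["key_on"]).dt.total_seconds() / 3600
    - events["idle_hrs"]
)

# Weekly meter delta per asset: replaces the Monday-morning manual entry.
# This frame would be pushed to ELLIPS via its meter-reading import.
meter_update = events.groupby("asset", as_index=False)["run_hrs"].sum()
print(meter_update)
```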

Current state: Modular dispatch and ELLIPS completely disconnected. Manual transcription of handwritten operator inspection sheets. 4 hours/week just for data entry. ELLIPS makes flawed PM predictions from bad inputs.
Target state: Automated operating hours flow from dispatch → ELLIPS. Eliminated manual data entry. Accurate PM scheduling based on real equipment usage. Foundation for duty-cycle maintenance.
Value estimate: $1-3M/yr (4 hrs/week × 52 weeks × multiple schedulers saved + better PM timing + reduced over/under-maintenance + enables TLD-46 and TLD-02)
Confidence: High (both systems exist and have good data — the gap is purely integration)
Data readiness: Good — Modular has rich data, ELLIPS accepts meter readings. Integration is the challenge.
Systems: Modular dispatch, ELLIPS (CMMS), Microsoft Fabric (potential data layer)
Complexity: Medium (system integration, data mapping, validation rules for bad data)
Dependencies: IT cooperation, ELLIPS API/import capability, Modular data export

Comparison with other sites:

No direct parallel — steel sites don't have mobile fleet dispatch systems. But the pattern (operations data system disconnected from maintenance system) is identical to every other site. This is TLD-01 (Ops-Maint Integration) made concrete and actionable for the mine fleet. Success here = template for other system integrations at Tilden and potentially Burns Harbor (if they have similar fleet ops).

Open questions:
- [ ] Does Modular have an API or data export capability?
- [ ] Does ELLIPS support automated meter reading imports?
- [ ] What data validation rules are needed? (bad operator punches, incorrect entries)
- [ ] Who on the IT side would own this integration?


TLD-46: Duty-Cycle Based Maintenance (Tons vs Hours) (NEW ★★)

| Field | Value |
| --- | --- |
| Horizon | H2 |
| Project | new ★★ — Mining-specific paradigm shift |
| Status | identified ★★ — Day 2 Mine Maintenance |
| Source | Day 2 Mine Maintenance — Pete Austin, Eric Lemus, team consensus on hours ≠ work |
| Field champion | Pete Austin (section mgr, 30 yrs) + Eric Lemus (IE) |

Problem statement:

All fleet maintenance at Tilden is scheduled on operating hours, but not every hour is equal. A shovel loading trucks continuously at 100% utilization does dramatically more work than one seeing a couple trucks per hour. A truck hauling from the bottom of the pit (full throttle, 7 mph, 30 minutes uphill) does far more work than one running on the rim. Tire wear, engine stress, wheel motor degradation, rope wear, and bucket/ground-engaging tool wear are all driven by actual work done (tonnage, load cycles, fuel burn) not flat hours. The current flat-hours approach leads to over-maintenance on light-duty assets and under-maintenance on heavy-duty ones.

Key quotes:

"Not every hour is equal. The number one priority unit, the loading unit — you're going to see trucks all the time as fast as you can load them. If you're the bottom priority, you might see a couple trucks an hour." "This guy sees 10 times more tons than this guy, but based on the hours — you'll see the same maintenance." "Shovel ropes — it's a steel cable rope, on a tonnage-based interval. That number has changed dramatically." "The engine that costs you a million dollars — you should be able to go 1.4 million gallons of fuel. Instead of just tracking the hours."

Proposed solution: Build duty-cycle weighted maintenance models that replace flat operating hours with composite work metrics: tonnage moved (from dispatch/scale data), fuel consumed (from onboard computers), load cycles (from dispatch), route difficulty (from GPS/grade data). Start with shovel ropes and truck engines (where the data and physics are clearest), then expand to all major components. This is a paradigm shift from time-based to work-based maintenance.
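One simple way to express the composite metric: scale meter hours by a work-intensity factor relative to a fleet-reference duty. The weights and reference rates below are placeholders that would be fitted against component failure history:

```python
def effective_hours(raw_hours, tons, fuel_gal, cycles,
                    ref_tons_per_hr, ref_gal_per_hr, ref_cycles_per_hr,
                    weights=(0.4, 0.4, 0.2)):
    """Duty-cycle weighted usage: raw meter hours scaled by how hard the
    asset actually worked vs. a fleet-reference duty. Weights/references
    are assumptions to be calibrated per component failure mode."""
    w_tons, w_fuel, w_cycles = weights
    intensity = (w_tons   * (tons / raw_hours)     / ref_tons_per_hr
               + w_fuel   * (fuel_gal / raw_hours) / ref_gal_per_hr
               + w_cycles * (cycles / raw_hours)   / ref_cycles_per_hr)
    return raw_hours * intensity

# Two trucks with identical meter hours but very different duty:
light = effective_hours(500, tons=60_000,  fuel_gal=9_000,  cycles=900,
                        ref_tons_per_hr=200, ref_gal_per_hr=30, ref_cycles_per_hr=3)
heavy = effective_hours(500, tons=220_000, fuel_gal=22_000, cycles=2_600,
                        ref_tons_per_hr=200, ref_gal_per_hr=30, ref_cycles_per_hr=3)
print(f"light duty: {light:.0f} eff-hrs, heavy duty: {heavy:.0f} eff-hrs")
```

With these illustrative inputs the two trucks show roughly a 3x spread in effective hours despite identical meter hours, which is exactly the "same hours, same maintenance" problem the team described.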

Current state: Flat operating hours for most components. Some tonnage-based intervals (shovel ropes). Budget pressures pushing intervals out without quantified risk. "We still don't know if 35,000 hours is the right number."
Target state: Duty-cycle weighted maintenance intervals for all major mobile fleet components. Quantified risk at each interval extension. Per-asset wear profiles based on actual work done.
Value estimate: $2-5M/yr (optimized PM intervals → reduced over-maintenance on light-duty + prevented failures on heavy-duty + better budget accuracy + enables capital planning TLD-47)
Confidence: Med-High (concept is sound, data sources identified, but model development requires correlation analysis)
Data readiness: Partial — Modular has tons/routes, onboard computers have fuel/duty data (currently extracted manually every 3-4 months), ELLIPS has maintenance history. Need to connect these.
Systems: Modular dispatch (tonnage, routes, GPS), onboard computers (fuel, duty cycle), ELLIPS (maintenance history), scale data
Complexity: Strategic (paradigm shift, requires multi-source data fusion and new maintenance philosophy)
Dependencies: TLD-45 (Modular-ELLIPS integration provides data pipeline), TLD-02 (PdM platform provides analytics layer), onboard data extraction frequency improvement

Comparison with other sites:

No parallel at steel sites — fixed assets don't have the same "duty cycle variability" problem. This is uniquely mining. However, the principle (maintenance driven by actual work, not calendar time) aligns with the broader PdM philosophy across all sites. Tilden has partial precedent: shovel ropes are already on tonnage intervals. Extending this to all components is the innovation. Mining industry leaders (Rio Tinto, BHP) use duty-cycle maintenance — CLF would be catching up to best practice.

Open questions:
- [ ] Can Modular provide per-trip tonnage data? Is it accurate?
- [ ] How often can onboard computer data be extracted? Can it be automated?
- [ ] What component failure modes are most correlated with tonnage vs hours?
- [ ] Are there industry benchmarks for duty-cycle maintenance in iron ore mining?
- [ ] Can we access the Utah facility benchmarking data Pete mentioned?


TLD-47: Fleet Capital Replacement & Lifecycle Planning (NEW ★★)

| Field | Value |
| --- | --- |
| Horizon | H2→H3 |
| Project | new — Strategic/Capital Planning |
| Status | identified ★★ — Day 2 Mine Maintenance |
| Source | Day 2 Mine Maintenance — Pete Austin described Excel-based fleet lifecycle models |
| Field champion | Pete Austin (section mgr, 30 yrs) |

Problem statement:

Haul trucks cost $12M each (14 in fleet). Shovels cost $27-30M each (4 in fleet). These are 20-25 year assets with complex lifecycle economics: engines ($1M+), wheel motors ($300K, now on 4th rebuild), truck bodies that wear out. At 120,000 hours, cumulative component replacement exceeds the value of a new truck. Capital replacement must be spread across years (can't buy 14 trucks at once), and purchasing them all within 18 months means all maintenance peaks simultaneously. Currently planned in Excel with manual vendor pricing collection, rebuild history tracking, and intuition-based replacement decisions.

Key quotes:

"At 120,000 hours, we're gonna have engine, two wheel motors, the truck body is going to be worn out — you're over the face value of a new one." "We can't go there and ask for all 14 in one year. We better be figuring out how to spread it out over a long period." "We didn't buy them all within a year and a half. So that will happen" (referring to synchronized maintenance peaks). "Computers are helpful, first of our class, to run to spread things out." "How do you predict [shovel lifecycle] far enough ahead and say, here's what it is? And reliably. Confidently. So people believe it."

Proposed solution: AI-driven fleet capital lifecycle model that integrates: component rebuild history (number of rebuilds, cost per rebuild, vendor reports), operating data (hours, duty cycle from TLD-46), maintenance spend trajectory per asset, and market data (new equipment pricing, lead times). Produces multi-year CAPEX plans with optimized replacement sequencing, repair-vs-replace recommendations per asset, and risk-quantified budget forecasts.
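The repair-vs-replace decision at each rebuild point reduces to a discounted cash flow comparison. A minimal sketch with illustrative figures; real inputs would come from ELLIPS history, vendor rebuild reports, and the existing Excel lifecycle models:

```python
def repair_vs_replace(keep_costs, new_price, new_costs, discount=0.10):
    """Discounted cost of keeping the current truck (forecast rebuild and
    maintenance spend per year) vs. buying a replacement now. All inputs
    are assumptions for illustration, not Tilden figures."""
    def npv(cash_flows):
        return sum(c / (1 + discount) ** (y + 1) for y, c in enumerate(cash_flows))
    return {"keep": npv(keep_costs), "replace": new_price + npv(new_costs)}

# e.g., a truck near 120,000 hrs with engine + wheel motors + body due soon
print(repair_vs_replace(
    keep_costs=[1_800_000, 900_000, 1_200_000, 1_000_000, 1_100_000],
    new_price=12_000_000,
    new_costs=[250_000, 300_000, 350_000, 400_000, 450_000],
))
```

The fleet-level optimizer would run this per asset per year and sequence replacements so CAPEX and maintenance peaks are spread, rather than synchronized.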

Current state: Excel-based lifecycle models. Manual vendor pricing collection during budget season. Rebuild history tracked across multiple vendors and formats. Replacement decisions based on experience + Excel models. No integrated view of fleet-wide replacement trajectory.
Target state: Integrated fleet lifecycle dashboard. Per-asset total-cost-of-ownership tracking. Optimized replacement sequencing. Automated budget forecasting. Repair-vs-replace recommendations at each major rebuild decision point.
Value estimate: $3-8M/yr (optimized replacement timing → avoid over-investing in end-of-life assets + avoid synchronized fleet aging + better CAPEX allocation across the mine)
Confidence: Medium (concept clear, data exists across systems, but requires connecting multiple data sources and building financial models)
Data readiness: Partial — ELLIPS has maintenance history, vendors have rebuild reports, Excel models have lifecycle logic. Not connected.
Systems: ELLIPS (CMMS), vendor rebuild reports, fleet Excel models, Modular (operating data), purchasing (pricing data)
Complexity: Strategic (multi-year planning horizon, financial modeling, requires buy-in from capital planning and operations)
Dependencies: TLD-02 (fleet PdM data), TLD-46 (duty-cycle data improves lifecycle prediction), TLD-36 (budget forecasting integration)

Comparison with other sites:

Unique to mining — steel sites don't have $12M mobile assets with 20-year replacement cycles to optimize. However, the principle of data-driven capital allocation applies to Burns Harbor's BF reline ($700M+) and other major CAPEX decisions. If built for Tilden, the fleet lifecycle model could inform corporate capital planning methodology.

Open questions:
- [ ] Can we access the Excel fleet lifecycle models?
- [ ] How many years of rebuild history per truck are available?
- [ ] What vendor rebuild report format is used? Digital or paper?
- [ ] What is the current fleet age distribution? When are the next replacement decisions?
- [ ] How is the CAPEX approval process structured? Who decides truck replacement timing?


TLD-48: OEM Parts Catalog & PM Procedure Auto-Import (NEW ★★)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-06 — Maintenance Workflow Digitization |
| Status | identified ★★ — Day 2 Mine Maintenance |
| Source | Day 2 Mine Maintenance — Pete Austin + John Lokowitz described manual duplication |
| Field champion | Pete Austin (section mgr) + Chase Lincoln (reliability eng) |

Problem statement:

Every time Tilden purchases new equipment (which is continuous), someone must manually create CLF stock codes in ELLIPS for every part (5,000+ per haul truck), even though the OEM (Cat, Komatsu, etc.) provides a complete parts catalog with their own part numbers. Similarly, all PM procedures and schedules from the OEM must be manually typed into ELLIPS. This is a continuous, repetitive process that consumes planning resources, introduces data quality issues (multiple stock codes for the same part), and delays maintenance readiness for new equipment.

Key quotes:

"Computer gives us access to everything that's on that truck. Why do I have to go in and create Cliff's own stock code for each one of those parts?" "We're going way too far having to duplicate what all these different suppliers are doing." "Same thing with all the maintenance procedures. Chase will have to take that — here's everything you need to do for that haul truck and what intervals — and type all that into our system." "We're different in our area because we continuously buy new machinery. There's 5,000 different part numbers on that truck." "A lot of people don't have the ability to even make a part number, so it just doesn't get ordered."

Proposed solution: AI-powered OEM catalog ingestion: when new equipment is purchased, automatically parse the OEM's parts catalog and PM schedule, map to existing ELLIPS stock codes (using semantic matching from TLD-35), create new stock codes only when truly new parts are identified, and auto-populate PM tasks with correct intervals and procedures. Integrate OEM cross-reference (e.g., Cat already maintains part number cross-references) into ELLIPS search.
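The matching step can be illustrated with simple token overlap. A minimal sketch with invented descriptions; the real pipeline would use semantic embeddings (per TLD-35) plus the OEM's own part number cross-references:

```python
def normalize(desc):
    # strip the stray commas/semicolons that plague ELLIPS descriptions
    return set(desc.upper().replace(",", " ").replace(";", " ").split())

def best_match(oem_desc, stock_codes, threshold=0.6):
    """Token-overlap (Jaccard) match of one OEM catalog line against
    existing CLF stock codes. Returns the best candidate stock code,
    or None (meaning: a genuinely new part, create a code)."""
    q = normalize(oem_desc)

    def score(desc):
        t = normalize(desc)
        return len(q & t) / len(q | t)

    code, desc = max(stock_codes.items(), key=lambda kv: score(kv[1]))
    return code if score(desc) >= threshold else None

stock = {"CLF-88231": "FILTER, OIL; ENGINE, 789D",
         "CLF-10442": "HOSE, HYDRAULIC, 2IN, HIGH PRESS"}
print(best_match("ENGINE OIL FILTER 789D", stock))   # CLF-88231
```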

Current state: Manual stock code creation for every part on new equipment. Manual PM procedure entry from OEM manuals. Only 2-3 people have permission to create stock codes. Planning teams maintain separate "quick reference" parts books alongside ELLIPS and OEM catalogs.
Target state: Automated OEM catalog ingestion into ELLIPS. Cross-reference between CLF stock codes and OEM part numbers. Auto-populated PM procedures from OEM schedules. Reduced manual data entry for new equipment onboarding.
Value estimate: $0.5-2M/yr (planner time savings + faster new equipment readiness + reduced data quality issues + fewer stockouts from missing stock codes)
Confidence: High (NLP/semantic matching is mature, OEM catalogs are structured data, same technology as TLD-35 inventory cleanup)
Data readiness: Good — OEM catalogs are digital, ELLIPS has API/import capabilities. The matching problem is well-defined.
Systems: OEM parts catalogs (Cat, Komatsu, etc.), ELLIPS (CMMS/inventory), PM scheduling module
Complexity: Quick Win (builds on same NLP technology as TLD-35, structured data matching)
Dependencies: TLD-35 (clean master data makes matching easier), OEM data access agreements, ELLIPS import capability

Comparison with other sites:

Applicable wherever CLF buys new equipment — Cleveland (Tabware), Middletown (SWAMI), Burns Harbor (Tabware) all face the same problem of manually entering OEM data into their CMMS. Mining is worse because equipment turnover is continuous and each truck has 5,000+ unique parts. Success here = template for cross-site roll-out.

Open questions:
- [ ] What format do OEM parts catalogs come in? (digital database, PDF, Excel?)
- [ ] Does Cat/Komatsu provide API access to parts data?
- [ ] How many new pieces of equipment are purchased per year?
- [ ] What is the average time to fully onboard a new truck into ELLIPS? (stock codes + PM procedures)
- [ ] Who are the 2-3 people who can create stock codes? Are they bottlenecks?


TLD-49: Dispatch Status Auto-Correction (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-01 — Ops-Maintenance Data Integration |
| Status | identified ★★ — Day 2 Mine Operations |
| Source | Day 2 Mine Ops — Kevin (mine eng) described Molly's tedious correction work |
| Field champion | Molly (dispatch administrator) + Kevin (mine engineering) |

Problem statement:

The Modular dispatch system tracks cycle time components for every piece of equipment (drilling, hauling, loading) based on operator button presses. When operators miss a button press or push the wrong one, it creates incorrect status data that cascades through all reporting — Business Objects reports, efficiency calculations, budget vs. actual comparisons. Molly (dispatch administrator) manually scans 12-hour shifts worth of data to find and correct these errors. Kevin: "That process is tedious and just having to run the reports, look through there. Quite honestly, I think you could easily train something to pick it up right away."

Key quotes:

"If the person's not pushing the button correctly to track the cycle time components, it throws everything else off." "It's usually pretty obvious. You can go to the data and go, no, okay. I can see exactly where they missed this button. It's a really actually easy thing to spot, but it's just tedious because you're doing it for 12-hour shifts on end." "All those status codes, everything else, populate into the maintenance group too. So I think there's some wins there for them."

Proposed solution: AI-powered anomaly detection on dispatch status data. Automatically identify missed/incorrect button presses (pattern recognition — obvious sequence breaks), auto-inject corrected status codes, flag ambiguous cases for human review. Real-time rather than post-shift batch correction.
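Because a haul cycle has a fixed status order, the rules-based first pass is a sequence check. A minimal sketch with illustrative status names (the real Modular codes would be mapped in; Molly's historical corrections become labeled training data for an ML second pass on the ambiguous cases):

```python
# Expected haul-cycle status order from operator button presses (illustrative)
CYCLE = ["travel_empty", "spot", "load", "travel_loaded", "dump"]
NEXT = {s: CYCLE[(i + 1) % len(CYCLE)] for i, s in enumerate(CYCLE)}

def find_missed_presses(statuses):
    """Flag sequence breaks in one truck's shift: every status should be
    followed by the next step of the cycle. Breaks are either missed or
    wrong button presses; obvious ones can be auto-corrected."""
    flags = []
    for i in range(len(statuses) - 1):
        cur, nxt = statuses[i], statuses[i + 1]
        if NEXT[cur] != nxt:
            flags.append((i + 1, f"after '{cur}' expected '{NEXT[cur]}', got '{nxt}'"))
    return flags

shift = ["travel_empty", "spot", "load", "dump", "travel_empty"]  # missed 'travel_loaded'
print(find_missed_presses(shift))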

Current state: Molly manually reviews every shift's dispatch data, correcting button-press errors. Hours/week of tedious scanning. Errors cascade into all downstream reports and maintenance scheduling.
Target state: Automated anomaly detection and correction. Molly shifts from correction to exception review. Downstream data quality improves for all consumers.
Value estimate: $0.3-1M/yr (Molly's time recaptured + downstream data quality improvement + faster reporting + more accurate maintenance scheduling from ELLIPS)
Confidence: High (pattern is "easy to spot" per Kevin — straightforward ML/rules-based classification)
Data readiness: High — dispatch data exists with historical corrections (training data for the model)
Systems: Modular dispatch, Business Objects
Complexity: Quick Win (well-defined pattern recognition problem, training data exists from historical corrections)
Dependencies: Modular data access


TLD-50: Real-Time Mine Plan Deviation Alerting (NEW ★★)

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | new — Mining-specific |
| Status | identified ★★ — Day 2 Mine Operations |
| Source | Day 2 Mine Ops — Andrew Mullen, Brad Koski, Dan Kernan articulated the need |
| Field champion | Brad Koski (ops mgr) + Andrew Mullen (corporate) |

Problem statement:

The daily mine plan (PowerPoint map packet + priority spreadsheet) sets the execution target, but deviations happen constantly and there's no real-time feedback loop. Brad's example: a supervisor moved a drill instead of a shovel because he misread the map packet — the priority mismatch wasn't caught until the next shift. More broadly, the team wants to know during a shift when execution is drifting from plan — not discover it the next morning. Andrew Mullen: "If we could have something that's saying real time, hey, you're getting way off plan. This is what I recommend to get you back on plan."

Key quotes:

"If that could generate something to flag comparing to what the actual set priorities were on the dispatch. And when there's a mismatch, they have to answer to it real time." "Not just a reason why, but how do you get back to it?" "Quite often we miss why they had to call an audible. There's not a good explanation given." "Now we're doing it the next day. How do we get so far off plan? But if we could have something saying real time, hey, you're getting way off plan."

Proposed solution: Real-time comparison engine: daily mine plan priorities (from Gwen's plan) vs. actual dispatch status (from Modular). When equipment assignment doesn't match plan priority order, alert the dispatch supervisor and field supervisor with: (1) what's mismatched, (2) impact estimate, (3) recommended recovery action. Log all deviations and recovery decisions to build a shift-level knowledge base over time. Also: prompt operators/supervisors to document why when audibles are called, so lessons compound.
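The comparison logic itself is simple once the plan is machine-readable. A minimal sketch with invented unit names and fields, flagging higher-priority units sitting idle while lower-priority ones run:

```python
def deviation_alerts(plan_priority, dispatch_assignments):
    """Compare the daily plan's priority order against live dispatch.
    plan_priority: {unit: priority rank from the daily plan}
    dispatch_assignments: {unit: current dispatch task}
    Field names are illustrative; real inputs are the parsed plan packet
    and the Modular status feed."""
    alerts = []
    for unit, rank in sorted(plan_priority.items(), key=lambda kv: kv[1]):
        task = dispatch_assignments.get(unit, "unassigned")
        if task in ("idle", "unassigned"):
            busy_lower = [u for u, r in plan_priority.items()
                          if r > rank and dispatch_assignments.get(u)
                          not in ("idle", "unassigned", None)]
            if busy_lower:
                alerts.append(
                    f"{unit} (P{rank}) is {task} while lower-priority {busy_lower} run")
    return alerts

plan = {"Shovel-1": 1, "Shovel-2": 2, "Drill-3": 3}
actual = {"Shovel-1": "idle", "Shovel-2": "loading", "Drill-3": "drilling"}
print(deviation_alerts(plan, actual))
```

The hard part, as noted under Complexity, is upstream: parsing the PowerPoint/Excel plan into the `plan_priority` structure reliably.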

Current state: Deviations discovered next shift or next day. No real-time feedback. Audible decisions not documented. Monthly variance analysis is retrospective.
Target state: Real-time plan vs. actual comparison. Automated alerts on priority mismatches. Decision documentation prompts. Shift-level knowledge base that compounds.
Value estimate: $1-3M/yr (reduced plan deviation impact + faster recovery + knowledge accumulation)
Confidence: Med-High (dispatch data + plan data both exist, comparison is straightforward, multiple leaders articulated the need)
Data readiness: Good — Modular dispatch (actual) + daily plan (PowerPoint/Excel). Need: automated plan ingestion.
Systems: Modular dispatch, daily mine plan (PowerPoint + Excel), email/notification system
Complexity: Medium (plan ingestion requires parsing PowerPoint/Excel; comparison logic is straightforward; alerting infrastructure needed)
Dependencies: TLD-04 (dispatch data quality), TLD-49 (clean dispatch data)


TLD-51: Shift Handover & Ops Knowledge Base (NEW ★★)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | PRJ-01 + Virtual SME |
| Status | identified ★★ — Day 2 Mine Operations |
| Source | Day 2 Mine Ops — Brad Koski, Dan Kernan, Jeff Domann described shift reporting gaps |
| Field champion | Brad Koski (ops mgr) + Dan Kernan |

Problem statement:

Mine operations knowledge about what happened during a shift lives in two places: (1) a supervisor email (Excel-based template with headings for safety, equipment, crusher, haulage, etc.) and (2) a dispatch PDF report with tonnage and database details. The quality of the shift email varies wildly by supervisor — some provide great detail, others leave out major events. There's no cumulative learning system. Mistakes and successful recovery actions aren't captured for future reference. Dan Kernan: "A lot of it just lives in our memories."

Key quotes:

"It gives a rough idea. There's always room for more detail. It's as much as they want to put in there." "It's hard to figure out exactly what happened on the shift sometimes from that sheet." "But if we could have something that's saying real time... whether it's 100% right or not, at least it's a tool." "A lot of it just lives in our memories." "How good are we at learning from past mistakes? Could use some improvement."

Proposed solution: AI-enhanced shift handover system: (1) auto-generate shift summaries by combining dispatch data + supervisor notes + equipment status into a comprehensive narrative, (2) prompt for missing information when significant events are detected but not documented, (3) build a searchable ops knowledge base that accumulates shift-over-shift, (4) surface relevant past incidents when similar conditions arise ("last time this happened, the recovery action was...").
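The summarization step is mostly input assembly. A minimal sketch that merges the three sources into one LLM prompt; inputs are invented stand-ins, and the model call itself would go to whatever endpoint the site standardizes on (e.g., Copilot / Azure OpenAI):

```python
def build_handover_prompt(dispatch_events, supervisor_notes, equipment_status):
    """Assemble one combined record for LLM summarization. Inputs stand in
    for the dispatch PDF, the Excel shift email, and equipment status."""
    lines = ["Summarize this mine shift for the incoming supervisor.",
             "Call out safety events, equipment downtime, and plan deviations.",
             "", "DISPATCH EVENTS:"]
    lines += [f"- {e}" for e in dispatch_events]
    lines += ["", "SUPERVISOR NOTES:"] + [f"- {n}" for n in supervisor_notes]
    lines += ["", "EQUIPMENT STATUS:"] + [f"- {k}: {v}" for k, v in equipment_status.items()]
    return "\n".join(lines)

prompt = build_handover_prompt(
    ["TRK-104 down 02:10-04:40 (wheel motor fault)",
     "Crusher choked 05:15, cleared 05:50"],
    ["Moved Shovel-2 to bench 4, frost on haul road rim"],
    {"Shovel-1": "running", "TRK-104": "in shop"},
)
# The LLM response is stored in the searchable knowledge base alongside
# the raw inputs, so similar past incidents can be surfaced later.
print(prompt)
```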

Current state: Supervisor shift email (Excel template, variable detail) + dispatch PDF (tonnage/database). No searchable knowledge base. No cross-shift learning. Informal verbal handovers at shift change.
Target state: AI-augmented shift summaries, prompted documentation, searchable ops knowledge base, "lessons learned" surfaced proactively.
Value estimate: $0.5-2M/yr (reduced repeat mistakes + faster recovery + better shift transitions + institutional knowledge preservation)
Confidence: High (data sources exist, shift email + dispatch reports = training data, proven concept at other operations)
Data readiness: Good — shift emails (Excel), dispatch PDF reports, both accumulated over years
Systems: Email, Modular dispatch, SharePoint
Complexity: Quick Win (NLP summarization + RAG knowledge base = mature technology)
Dependencies: Supervisor buy-in for prompted documentation


TLD-52: Labor/BLA Contract Knowledge Assistant (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | Virtual SME |
| Status | identified ★★ — Day 2 Mine Operations |
| Source | Day 2 Mine Ops — Lynn Casco (mine administrator) + Brad Koski |
| Field champion | Lynn Casco (mine administrator) + Brad Koski (ops mgr) |

Problem statement:

Supervisors regularly have questions about the USW labor contract (BLA) and HR procedures. Experienced supervisors know the contract; new ones don't. Union employees sometimes try to test new supervisors' knowledge. Currently, supervisors must wait for Lynn or someone else who's familiar with the contract to be available — but nobody is on-site 24/7. Weekend and night shift supervisors are on their own. Lynn: "It would be amazing if they could ask somewhere, a ChatGPT type situation, what do we do now? And the answer is there for them rather than waiting for somebody that's not here on the weekends."

Key quotes:

"It would be amazing if they could ask somewhere, a ChatGPT type situation where, okay, what do we do now? And the answer is there for them rather than waiting for somebody that's not here on the weekends." "The guys will try and fool them on some things... the young supervisor needs to have the answer right now with confidence." "Knock it out, case closed."

Proposed solution: AI chatbot trained on the USW labor contract, HR policies, and common workplace scenarios. Supervisors can ask contract interpretation questions and get instant, accurate answers with contract clause references. Available 24/7 on phone or tablet. Logs common questions to identify training gaps.

Current state: Contract questions require calling someone who knows. Weekend/night shifts = nobody available. New supervisors vulnerable to misinformation from experienced union employees.
Target state: 24/7 contract knowledge assistant. Instant answers with confidence. Reduced HR/labor disputes from misinformation.
Value estimate: $0.2-0.5M/yr (reduced grievances, supervisor time savings, faster resolution, compliance risk reduction)
Confidence: High (RAG over contract documents is a proven pattern, similar to MDT Virtual SME)
Data readiness: High — labor contract is a well-defined document. HR policies exist. Common Q&A patterns available from Lynn.
Systems: Microsoft Copilot (or equivalent chatbot), SharePoint (contract storage)
Complexity: Quick Win (single document corpus, well-defined Q&A, proven RAG pattern)
Dependencies: Legal/HR approval for AI contract interpretation, union considerations

Comparison with other sites:

Extension of Virtual SME concept. Middletown has IAM union (not USW) — same need exists there. Cleveland and Burns Harbor are USW. If the contract is standardized across USW sites, one trained model could serve multiple locations. This is a low-cost, high-impact quick win that demonstrates AI value to the workforce.


TLD-53: Drill Consumable Predictive Ordering (NEW)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | new — Mining-specific |
| Status | identified ★ — Day 2 Mine Operations |
| Source | Day 2 Mine Ops — Jeff Domann (pit supervisor, 28 yrs) + Kevin (mine eng) |
| Field champion | Jeff Domann (pit supervisor) |

Problem statement:

Drill consumables (bits, steels, wear parts) are currently managed by Jeff walking over to the supply area, eyeballing what's left, and ordering more. Kevin's annual budget model uses assumptions from Vulcan (footage by rock type) but can't predict consumption at a monthly/weekly resolution. The drill data exists (footage per hole, rock formation, bit hours) to build a predictive model that says "you'll need X bits next week based on planned drilling in intrusive rock."

Key quotes:

"Right now it's basically I go over there, look, and okay, we got five bits, we're gonna get a couple more to make sure we got enough." "If something was crunching the numbers to say, okay, this in this formation you only get X number of feet." "We have the resolution by hole and the data by hole to say this drill drilled this much footage in this type of material." "To tighten up that model... if we had a good number that we counted on, we could easily get a little more refined planning model for prediction on spend."

Proposed solution: Predictive consumable ordering based on: (1) planned drilling footage from weekly mine plan, (2) rock type from Vulcan geological model, (3) historical bit life by rock type, (4) current inventory levels. Auto-generate weekly consumable forecasts and ordering recommendations. Enable Kevin's budget model to use data-driven bit life predictions instead of assumptions.
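The forecast arithmetic is straightforward once bit life per rock type is measured. A minimal sketch with assumed bit-life and footage numbers:

```python
import math

def bit_forecast(plan_footage, bit_life, stock_on_hand, safety_stock=2):
    """Weekly drill bit forecast: planned footage per rock type divided by
    historical bit life in that rock type. Bit-life figures are placeholders;
    the real ones come from per-hole drill data correlated with bit changes."""
    need = sum(feet / bit_life[rock] for rock, feet in plan_footage.items())
    order = max(0, math.ceil(need) + safety_stock - stock_on_hand)
    return round(need, 1), order

plan = {"intrusive": 3_000, "iron_formation": 9_000}   # ft planned next week
life = {"intrusive": 1_200, "iron_formation": 4_500}   # ft per bit (assumed)
need, order = bit_forecast(plan, life, stock_on_hand=5)
print(f"forecast need: {need} bits, order recommendation: {order}")
```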

Current state: Jeff eyeballs inventory + experience-based ordering. Kevin uses assumed bit life in annual budget. No week-level prediction.
Target state: Weekly automated forecast: "Based on next week's drill plan (X holes in intrusive, Y in iron formation), you'll need Z bits. Current stock: W. Order recommendation: Z-W."
Value estimate: $0.2-0.5M/yr (reduced emergency orders + better inventory levels + improved budget accuracy)
Confidence: Medium (data exists per hole, but correlation between drill metrics and bit life needs validation)
Data readiness: Good — drill data by hole (footage, formation type, pull-down pressure), Vulcan rock types, historical consumption
Systems: Drill onboard computers, Modular dispatch, Vulcan, ELLIPS (inventory)
Complexity: Quick Win (correlation analysis + forecasting model, well-defined inputs and outputs)
Dependencies: TLD-05 (drill data quality), Vulcan rock type data export


TLD-54: Beaker Test Vision & Standardization (NEW — POV SCOPE)

| Field | Value |
| --- | --- |
| Horizon | H1 |
| Project | TLD-P01 — Concentrator Desliming & Recovery Optimization (POV) |
| Status | validated ★★ — Mar 28 Keith concept + Apr 7: Bob Zadel confirmed feasibility (Vooban CV background) |
| Source | Mar 28 working session (Keith's idea) + Apr 7 IE×Tilden scoping call (Bob validated) |
| Field champion | Keith Holmgren (conceived the idea) + met tech team |

Problem statement:

Between the 6-hourly metallurgical balances, met techs evaluate desliming performance by taking a 1-liter beaker sample from the D-slime thickener discharge, placing it on a shelf, timing it, and visually measuring the iron mineral bed that settles in the bottom. This beaker settling test is the primary real-time feedback mechanism for dispersant adjustment — the only way to gauge whether the D-slime circuit is on target between lab results. The problem: the measurement is entirely subjective. The same 30% weight rejection looks like 110 mils one day and 90 mils the next depending on ore "fluffiness." Each met tech has their own technique — Keith would put his thumb at the 1,000ml mark and withdraw when it got wet to standardize volume. No two met techs do it the same way. New met techs don't have the experiential calibration that took veterans decades to develop.

Apr 7 context: Keith proposed photographing the beaker tests with a standardized camera setup — "we've got these things now with excellent video or photo, my head was going, do we standardize this test to be a photograph that gets loaded and now that gets tracked into this system and then the AI is actually looking at it." Bob Zadel immediately validated: "A lot of their AI usage previously was all based on cameras and actually optimizing just what you're saying. Using a camera to make that decision... the consistency of people goes away." Keith also noted the autocompo PSI sampling stations already provide a controlled environment for the test — "it's in a little telephone booth shack, already fed a representative slurry stream for that section."

Key quotes:

"Literally look at it and try to infer how much D-slime weight rejection they're creating at that moment in time." (Keith, Mar 28) "30% is tomorrow might be 110 mils rather than 90 mils today. So there's always this experiential piece." (Keith, Mar 28) "Do we standardize this test to be a photograph that gets loaded and now that gets tracked into this system and then the AI is actually looking at it." (Keith, Apr 7) "A lot of their AI usage previously was all based on cameras and actually optimizing just what you're saying." (Bob Zadel, Apr 7)

Proposed solution: Standardize beaker settling tests through camera-based measurement. Install standardized camera setup at autocompo PSI sampling stations (one per section). Met tech takes beaker sample as usual, places it in standardized position, camera captures image at timed intervals. Computer vision model measures bed height, settling rate, and visual characteristics objectively. Removes operator-to-operator variability in measurement. Feeds standardized measurements into the dispersant optimization model (TLD-23).
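A minimal sketch of the vision measurement, assuming a fixed camera so the beaker ROI, darkness threshold, and mils-per-pixel scale can be calibrated once. OpenCV thresholding finds the top of the settled bed:

```python
import cv2
import numpy as np

def bed_height_mils(image_path, beaker_roi, mils_per_px):
    """Estimate the settled iron-mineral bed height from a standardized
    beaker photo. ROI, threshold, and scale are calibration assumptions
    that the fixed camera setup at the PSI stations would make stable."""
    x, y, w, h = beaker_roi                       # beaker region in the frame
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    roi = gray[y:y + h, x:x + w]
    # dark pixels = settled mineral; threshold tuned during calibration
    _, dark = cv2.threshold(roi, 60, 255, cv2.THRESH_BINARY_INV)
    # a row belongs to the bed if most of its pixels are dark
    row_frac = dark.mean(axis=1) / 255.0
    bed_rows = np.nonzero(row_frac > 0.8)[0]
    if bed_rows.size == 0:
        return 0.0
    bed_top = bed_rows.min()                      # highest dark row in the ROI
    return (h - bed_top) * mils_per_px            # height above beaker bottom

# e.g. bed_height_mils("beaker_0630.jpg", beaker_roi=(410, 220, 180, 520), mils_per_px=2.1)
```

Settling rate falls out of the same function applied to the timed image sequence; labeled training data comes from running the camera in parallel with the met techs' current readings and the 6-hourly lab results.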

Current state: Subjective visual measurement, operator-dependent, no two met techs calibrated the same way, no historical image data.
Target state: Camera-standardized beaker test with AI vision analysis, objective measurement, historical tracking, feeds into dispersant model.
Value estimate: $0.5-2M/yr (enabling value — standardized measurement improves dispersant model accuracy, which drives the larger TLD-23/TLD-08 value)
Confidence: High — Keith conceived the approach, Bob validated feasibility, autocompo PSI stations provide controlled environment, Vooban has computer vision expertise
Data readiness: Low initially (need to start collecting images), but infrastructure exists (sampling stations, cameras available)
Systems: Camera hardware (to install), autocompo PSI stations (existing), DCS integration, image storage
Complexity: Quick Win — well-bounded scope, proven technology (industrial computer vision)
Dependencies: TLD-23 (this feeds into the dispersant optimization model)

Comparison with other sites:

Pattern: instrument the human judgment gap. At Middletown, Ametek cameras at 60% accuracy represent the same pattern — AI vision replacing subjective human assessment. At Tilden, the beaker settling test is the equivalent of Middletown's surface inspection problem. Different asset, same class of solution.

Open questions:
- [ ] What camera/lighting setup is needed at autocompo PSI stations?
- [ ] Can we collect a labeled training dataset in parallel with current visual assessments (photo + met tech's reading + lab result)?
- [ ] How many sampling stations? (4 sections × how many per section?)
- [ ] Could a cobot automate the full beaker test? (Keith mentioned a friend who builds custom industrial robots; Bob mentioned cobots)
- [ ] What about Kemira's robotic settling test device? (Keith mentioned prior conversations — "ship may have sailed" with supplier change, but concept validates demand)


New Project Candidates

Mining-specific initiatives that don't map to existing PRJ-01 through PRJ-08.

| Initiative | Proposed Project | Why It's New |
| --- | --- | --- |
| TLD-04 Fleet Dispatching | Mining Fleet Optimization | No parallel in steelmaking — mobile equipment dispatching |
| TLD-05 Drill & Blast | Drill & Blast Intelligence | Mining-specific — upstream of any steelmaking process |
| TLD-06 Blend Optimization | Ore Grade & Blend Control | Terry Fedor Tier 2 — unique dual ore body, no steel parallel |
| TLD-07, 08, 11 Concentrator | Concentrator Process Optimization | Mineral processing — no steel parallel |
| TLD-10 Kiln Optimization | Pellet Plant Optimization | Analogous to BF optimization but mining-specific |
| TLD-17 Pit Slope | Geotechnical Monitoring | Mining safety — no steel parallel |
| TLD-18 Environmental | Environmental Compliance | Mining-specific regulatory burden |
| TLD-20 Safety | Mining Safety Intelligence | MSHA-specific, proximity/fatigue — mining domain |
| TLD-21 Feed-Forward Control ★★★ | TLD-P17: Mine-to-Concentrator Ore Intelligence (deferred) | Bridges ore data to plant response — deferred per Ryan/Keith Apr 7 |
| TLD-22 Filter Monitoring | Quick Win — Filter Performance | 42 un-instrumented filters gating the bottleneck |
| TLD-23 Reagent Optimization ★★★ | TLD-P01 POV: Dispersant Standardization (Track 1) | $50M/yr value anchor — POV confirmed Apr 7 |
| TLD-24 Inspection Digitization | Quick Win — Workplace & Equipment Inspections | 50 paper cards/shift, paper vehicle inspections |
| TLD-25 Production Reporting | Quick Win — Automated Reporting | Multi-day paper → spreadsheet processing lag |
| TLD-26 Operator Analytics | Operator Performance & Payload | 15% overloading = $M engine/tire damage |
| TLD-27 Environmental KB | Environmental Compliance Knowledge | Dozens of compliance states in 2-3 heads |
| TLD-31 Stockpile Modeling ★★ | TLD-P17: Mine-to-Concentrator Ore Intelligence (deferred) | Zero hardware needed — deferred per Ryan/Keith Apr 7 |
| TLD-32 Operator Decision Support ★★★ | TLD-P01 POV: Live CRP | Augments operator decisions — POV confirmed Apr 7 |
| TLD-33 HPGR Investigation | HPGR Feed Rate Analysis | Data science investigation — "can't find the smoking gun" |
| TLD-34 Calcium Control | Quick Win — Pellet Calcium | "Easy test case" — data exists, 6hr human cycle → automated |
| TLD-45 Dispatch ↔ ELLIPS ★★ | Modular-ELLIPS Integration | Core infrastructure gap — dispatch data → CMMS. Enables TLD-46, TLD-02 |
| TLD-46 Duty-Cycle Maint ★★ | Tons vs Hours Paradigm | Paradigm shift — hours ≠ work. Mining-specific. Industry best practice. |
| TLD-47 Fleet Capital ★★ | Fleet Lifecycle Planning | $12M trucks, $30M shovels. Multi-year CAPEX optimization. |
| TLD-48 OEM Auto-Import | OEM Catalog & PM Auto-Import | Eliminate manual stock code creation. 5,000 parts per new truck. |
| TLD-35 ELLIPS Cleanup | Quick Win — Inventory Master Data | Same recipe as MDT-31, ELLIPS search "terrible," 5min-2hr per box |
| TLD-36 Maint Budget Forecast | Maintenance Parts Forecasting | Straight-line averages → production-correlated, 65 item categories |
| TLD-37 Railroad Analytics | Railroad Asset Maintenance | Geo car + X-ray car data, 950 car fleet, limited ELLIPS |
| TLD-38 HPGR Pilot ★★★ | HPGR Knowledge Base + PdM | Team-nominated pilot: 1200pg unread manuals + rich sensors on new equipment. Best-scoped pilot at any site. |
| TLD-39 Repair Scheduling | Major Repair Schedule Optimization | $1M/mill shutdown, manual ELLIPS→Excel→Project pipeline, daily supervisor updates |
| TLD-40 Workforce Scheduling | Maintenance Resource Scheduling | Shift juggling, absence patterns, crane conflicts, union constraints |
| TLD-41 Risk Quantification | Deferred Maintenance Risk | "Pay now or pay later" — years of cause-effect delay, CYA bias, mine vs plant allocation |
| TLD-42 Failure Pattern Search | Cross-Asset Failure Detection | Hours/week searching ELLIPS + 60K drawings. No cross-equipment pattern detection. |
| TLD-43 Training Content | Maintenance Training Generation | Visual/hands-on training from manuals. Caterpillar AR/VR model discussed. |
| TLD-44 Onboarding Automation | Employee Onboarding | Multi-ticket manual process across ServiceNow/IT/facilities/ELLIPS |
| TLD-49 Dispatch Auto-Correction | Quick Win — Data Quality | Molly's tedious button-press error correction. Straightforward anomaly detection. |
| TLD-50 Plan Deviation Alerting ★★ | Real-Time Ops Intelligence | Plan vs. actual comparison. Andrew Mullen-endorsed. Multi-leader consensus. |
| TLD-51 Shift Handover KB | Ops Knowledge Base | Shift email + dispatch PDF → searchable knowledge. Cross-shift learning. |
| TLD-52 BLA Contract Assistant | Quick Win — Virtual SME | USW contract chatbot. 24/7 supervisor support. Scalable to all USW sites. |
| TLD-53 Drill Consumable Ordering | Quick Win — Predictive Procurement | Bit life prediction from drill data + formation hardness. |

Daily Update Log

Day 2 — 2026-03-12 (Transcript 11: Mine Operations & Engineering)

Transcript 11: Mine Operations & Engineering Workshop

- Participants: Dan Kernan (facilitator), Brad Koski (ops mgr), Tyler Craig (mine eng), Lynn Casco (mine administrator), Kevin (mine eng/planner — Vulcan), Jeff Domann (pit supervisor, 28 yrs — blast crew/yard crew), Melinda (administrative coordinator, 400 people), Andrew Mullen (corporate), Erico (IE), Pascal, Sameen, Sanjeev
- ★★★ Drill-to-blast pipeline = #1 mine ops initiative. Jeff Domann described the complete data bridge: drill captures hardness data per hole (pull-down pressure, rotary speed, GPS), Dyno explosive trucks can auto-load correct density per hole. "They have that capability on their trucks now." Both ends exist — the data bridge is the missing piece. Currently blanket-loaded. ~15,000 holes/year = massive multiplier. → TLD-05 upgraded to ★★★.
- ★★ Mine planning chain fully detailed: Vulcan 3D model (Kevin, monthly) → Gwen (weekly, tighter Vulcan cuts) → Greg Kavlak (senior supervisor review) → daily PowerPoint map packet → 2:00 PM supervisor meeting → night shift → 7:00 AM catch-up. TLD-15 open question answered ★★.
- ★★ Real-time plan deviation alerting = consensus need. Andrew Mullen: "If we could have something saying real time, hey, you're getting way off plan." Brad's shovel-vs-drill example. Supervisor + dispatch supervisor = two hands in the pie. → TLD-50 created ★★.
- ★★ Shift handover knowledge gap confirmed. Supervisor shift emails (Excel template) vary wildly in detail. "A lot of it just lives in our memories." No cumulative learning system. → TLD-51 created ★★.
- ★★ Labor contract assistant requested. Lynn Casco: "It would be amazing if they could ask somewhere, a ChatGPT type situation." Supervisors tested by union employees on weekends/nights. → TLD-52 created ★★.
- ★★ ELLIPS pain validated from ops side. Mop bucket example (20-30 min), equipment reference number (2 hrs), headlamp duplicates, colleague order history as workaround. Brad: "Would encompass all of us. Huge win." TLD-35 enriched.
- ★★ Knowledge capture / Virtual SME consensus. Supervisor toolkit, training in-breeding problem, procedures on SharePoint effectively inaccessible, cable layout rework from untrained crew. TLD-14 upgraded to ★★★.
- ★ Dispatch status auto-correction. Kevin: Molly scans 12-hour shifts correcting button-press errors. "Easy thing to spot, but tedious." "Could easily train something to pick it up." → TLD-49 created.
- ★ Drill consumable predictive ordering. Jeff: "I go over there, look, and okay, we got five bits." Kevin: formation data from Vulcan → bit life prediction → refined planning model. → TLD-53 created.
- ★ Operator feedback loop missing. Dan Kernan: "I'm assuming I'm doing a great job. I don't hear from anybody ever." Wants scorecards/feedback for operators and supervisors. Links to TLD-26.
- ★ Vacation/scheduling data entry horror. Lynn: 400 people × 2+ weeks vacation, entered into ELLIPS, doesn't populate calendar, then manual paper calendar + crew sheet + schedule. "There goes your day." Links to TLD-29.
- Key cultural signals: Very open group. Jeff Domann is an exceptional champion — deep domain knowledge, 28 years, already conceptualized the drill-blast optimization solution, just couldn't execute it due to data volume. Brad Koski is pragmatic and receptive. Lynn Casco identified quick wins that affect daily quality of life. Erico (IE) effectively framed cross-department dependencies.
- 5 new initiatives: TLD-49..TLD-53
- 8 existing initiatives enriched: TLD-04, TLD-05 (★★★), TLD-14 (★★★), TLD-15, TLD-25, TLD-35, TLD-44, TLD-13

Day 2 — 2026-03-12 (Transcript 7: Process & DCS Engineering)

Transcript 7: Process & DCS Engineering Workshop

- ★★ Stockpile ore distribution modeling surfaced organically — Team described extreme section-to-section variation, proposed modeling stockpile from GPS/truck/tripper data. "We have the hardware we need." → TLD-31 created ★★.
- ★★ G2 fuzzy logic detail confirmed — Foxboro IA DCS + G2 fuzzy logic (SGS). G2 models best operator decision-making. G2 upgrade stuck ~1.5 years in approval. OCS parallel on pellet side.
- ★★ Operator variability quantified — Six-bearing events: 75% success, 25% lose tons. Decision tree charts "dumb down the jobs." Team asked "could software teach in the moment?" → TLD-32 created.
- ★★ Flotation instrumentation gap detailed — ONE silica reading every 14 minutes from Courier machine. 6 lines per unit, no per-line data. Tail samples every 6 hours — "problem might be over by the time you receive the information." TLD-08 enriched.
- ★★ Filter monitoring costs quantified — 42 filters, each ~$3K to instrument. Total ~$125K hardware. One pilot filter works great. Maintenance vs process disconnect: "they say it's green, we say it's running inefficiently." TLD-22 enriched.
- ★ HPGR feed rate mystery — Installed April 2023, feed rates dropped Nov 2025, 7-8 month struggle. "Can't find the smoking gun." → TLD-33 created.
- ★ Pellet calcium control = easy test case — "More than enough data." Human adjustment every 6 hours. → TLD-34 created.
- ★ 1.3 billion Pi historian entries — Searchable knowledge base concept strongly supported. TLD-14 enriched.
- ★ Lead time management — Suppliers give inaccurate lead times (5wk stated, 12wk actual). Risk aversion → overstocking. Prior minmax algorithms failed due to edge cases. TLD-13 enriched.
- ★ Proactive attempts fail — "We're really good at reacting. We are not proactive." Feed-forward: 8hr ore transit + 4hr system transit = 12+ hr lag. When they try proactive changes, "it never works out." Thesis confirmed ★★.
- New stakeholders: Kirk Williams (area mgr, transportation), Keith Holmgren (senior director, mining technology), Sean Halston (ECSM), Scott Hebert (ops specialist 30yr), Todd Davis (lead process engineer, grinding), Dan Collins (process engineer, flotation), Keith Grammer (process engineer, technical group — supports all US mine sites)
- Key cultural signal: No resistance to AI conversation. Technically engaged group. Multiple people asking "where does AI sit relative to G2/DCS?" Team wants practical augmentation, not replacement.

Day 2 — 2026-03-12 (Transcript 8: Purchasing & Warehouse Logistics)

Transcript 8: Purchasing, Warehouse & Logistics Session

- ★★★ Vessel/shipping logistics = "biggest business challenge this year" — daily replanning takes 3-4 hours. 3 contractors, 4-6 vessels, 30-day rolling schedule. BCS has years of data. Communication silos between site, corporate traffic, and vessel operators. → TLD-16 upgraded from seed to identified ★★★.
- ★★ ELLIPS inventory search is terrible — 5 minutes to 2 hours per incoming box to match. Descriptions have commas/semicolons in the wrong places. 12-20 duplicates of the same part. → TLD-35 created. (Duplicate-matching sketch after this list.)
- ★★ Cascading production optimization vision — senior ops leader articulated the full cascade: annual forecast → production → reagents → shipping → maintenance → what-if scenarios. $1M per mill shutdown. → TLD-15 enriched with evidence.
- ★ Maintenance budget forecasting gaps — straight-line averages for 65+ item categories. $300K/yr "Walmart effect" untracked. Pumps $50K, wheels $10K, flotation unknown. → TLD-36 created.
- ★ Railroad maintenance analytics — 950-car fleet, Geo car + X-ray car data exists (years), rail breaks ~1/week in spring/fall. Team self-assessed "data not there yet." → TLD-37 created.
- ★ Vendor-managed inventory untracked — safety supplies with no receiving verification, monthly spreadsheet only. → TLD-30 enriched.
- ★ No purchasing people showed up — session was warehouse + maintenance railroad + maintenance transportation. Purchasing pain validated indirectly.
- New stakeholders: Kevin (train scheduling/daily replanning), JR (senior ops/maintenance leader — articulated the cascading vision)
- Key cultural signal: logistics team is eager for help. Railroad team is honest about data limitations. "Intentionally unintelligent" communication silos — an infrastructure problem, not a people problem.
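A minimal sketch of attacking the 12-20× duplicates (TLD-35): normalize the punctuation noise out of the ELLIPS descriptions, then flag near-identical pairs. The 0.85 threshold and the catalog shape are assumptions to tune against known duplicate clusters:

```python
import re
from difflib import SequenceMatcher
from itertools import combinations

def normalize(desc: str) -> str:
    # Strip stray commas/semicolons and squash whitespace so
    # "BEARING,; PILLOW BLOCK" and "BEARING PILLOW BLOCK" compare equal.
    return re.sub(r"\s+", " ", re.sub(r"[,;/]+", " ", desc)).strip().upper()

def likely_duplicates(catalog: dict[str, str], threshold: float = 0.85):
    # `catalog` maps part number -> free-text description; yields pairs
    # whose normalized descriptions look like the same item.
    norm = {pn: normalize(d) for pn, d in catalog.items()}
    for (pn_a, a), (pn_b, b) in combinations(norm.items(), 2):
        if SequenceMatcher(None, a, b).ratio() >= threshold:
            yield pn_a, pn_b
```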

Day 2 — 2026-03-12 (Transcript 9: Plant Maintenance)

Transcript 9: Plant Maintenance Workshop

- ★★★ Andrew Mullen cross-site validation — corporate AI program manager: "Doesn't matter what CMMS it is... regardless if it's Teams at Middletown, if it's Tabware at Burns Harbor, if it's ELLIPS here, it's not getting done because it's just too cumbersome." Definitive corporate validation of the cross-site thesis. TLD-01 upgraded from seed to identified ★★.
- ★★★ HPGR pilot nominated by the maintenance team — new equipment, 1,200+ pages of unread manuals, European parts, covered in sensors. "Guys go up to it, don't know about it, solving a puzzle." Team consensus: best pilot scope for knowledge base + PdM. → TLD-38 created ★★★. (Retrieval sketch after this list.)
- ★★★ Adam Bingham = Tilden's champion — already using Copilot with electrical/hydraulic prints (PDF) for liner handler troubleshooting + Spanish translation. Exploring Copilot Studio. Wants to build a knowledge base for maintenance SOPs. Self-described "hybrid" (admin + hands-on). → TLD-12 champion confirmed.
- ★★ George Harmon's failure analysis pain quantified — hours/week searching ELLIPS + the 60K-drawing database. DCS shows signs 3 months before failure but "we're looking at it after the fact." Parallel systems (ELLIPS, DCS, drawings, relay, BO, Power BI), none talking to each other. → TLD-01, TLD-03, TLD-42 enriched.
- ★★ Gary's "pay now or pay later" framework — cause and effect years apart. "Maintenance amnesia." AI = emotionless risk quantification. Mine vs. plant resource allocation tug-of-war. Cost-per-ton game. → TLD-41 created ★★.
- ★★ Major repair scheduling pain — $1M per mill shutdown. Manual ELLIPS → Excel → MS Project. Senior supervisors updating daily. "If AI could do that, that would be a godsend." → TLD-39 created ★★.
- ★★ Gary's healthy skepticism — "Having AI tell me I need to get PMs done doesn't help me. I already know I'm not getting it done." Value is NOT more alerts — it's reducing friction so work gets done. Key framing for Tilden.
- ★ Parts delays = weekly — confirmed. Supplier changes cause disruption. Hidden off-books inventory. Ukraine war supply chain impact.
- ★ Maintenance resource scheduling — shift juggling, absence patterns, crane conflicts, union constraints. ELLIPS has the capability but is too cumbersome. → TLD-40 created.
- ★ Training content creation — visual/hands-on preferred. Caterpillar AR/VR model discussed. → TLD-43 created.
- ★ Employee onboarding automation — multi-ticket manual process, ServiceNow friction. → TLD-44 created.
- ★ ELLIPS validated as robust but cumbersome — MSTs (scheduled tasks) by calendar/hours/usage, links to job plans/docs/photos/prints, can auto-order parts. The system is capable — usage friction is the problem. Same pattern as Tabware and SWAMI.
- ★ George's "virtual plant status" vision — "Come in in the morning, ask my phone how's the plant running?" = maintenance copilot, independently articulated.
- New stakeholders: George Harmon (section mgr, maintenance management — reliability + infrastructure eng), Gary (area maintenance — experienced, healthy skeptic), Adam Bingham (hybrid maintenance — ★ AI early adopter, Copilot user, champion)
- Key cultural signals: Gary's skepticism is mature and valuable — he sees value in troubleshooting acceleration and risk quantification, NOT in more notification systems. Adam Bingham is a grassroots champion already building a proof-of-concept. The team nominating the HPGR as pilot = strong buy-in signal.
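For the HPGR pilot's knowledge-base half, the smallest useful shape is page-level retrieval over the 1,200+ pages of manuals. A self-contained TF-IDF sketch (a real pilot would likely use an embedding retriever; this only shows the index → query → ranked-pages loop):

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())

class ManualIndex:
    """Tiny TF-IDF index over manual pages — enough to demo
    'search the HPGR manuals', not a production retriever."""

    def __init__(self, pages: dict[str, str]):
        # `pages` maps a page/section ID to its extracted text.
        self.docs = {pid: Counter(tokenize(t)) for pid, t in pages.items()}
        n = len(self.docs)
        df = Counter(w for counts in self.docs.values() for w in counts)
        self.idf = {w: math.log(n / df[w]) for w in df}

    def search(self, query: str, k: int = 3) -> list[tuple[str, float]]:
        # Score each page by term frequency × inverse document frequency.
        q = tokenize(query)
        scores = {pid: sum(counts[w] * self.idf.get(w, 0.0) for w in q)
                  for pid, counts in self.docs.items()}
        return sorted(scores.items(), key=lambda kv: -kv[1])[:k]
```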

Day 2 Cumulative Summary (All 5 Transcripts — Process/DCS + Purchasing/Warehouse + Plant Maint + Mine Maint + Mine Ops):

- Total initiatives: 53 (4 seeds, 49 identified, 0 validated)
- 23 new initiatives added (Day 2 total): TLD-31..TLD-53
- 3 seeds upgraded: TLD-01 (seed → identified ★★★), TLD-15 (seed → identified ★★), TLD-16 (seed → identified ★★★)
- Key upgrades: TLD-05 (Drill & Blast → ★★★), TLD-14 (Knowledge Capture → ★★★), TLD-19 (Tires → ★★★)
- ★ THREE STANDOUT INITIATIVES from Day 2:
  1. TLD-38 HPGR Knowledge Base + PdM Pilot — team-nominated, well-scoped, Adam Bingham champion
  2. TLD-05 Drill & Blast Optimization — Jeff Domann champion, Dyno tech exists, data bridge is the gap
  3. TLD-50 Real-Time Plan Deviation Alerting — Andrew Mullen + Brad Koski endorsed, multi-leader consensus
- ★ Andrew Mullen's cross-site validation is the headline for consolidation — the corporate PM explicitly confirmed the same CMMS pattern at 3 sites
- Thesis reinforced from three angles: "We're really good at reacting, not proactive" (process eng) + "Having AI tell me doesn't help, reducing friction helps" (Gary, maintenance) + "A lot of it just lives in our memories" (Brad Koski, operations). Same information flow problem, every department.

Day 1 — 2026-03-09 (All 5 Transcripts)

Transcript 1: First Contact / Site Overview

- 9 initiatives moved from seed to identified: TLD-02, 03, 05, 06, 07, 08, 09, 11, 14
- 3 new initiatives added: TLD-21 (Feed-Forward Control ★★★), TLD-22 (Filter Monitoring ★★), TLD-23 (Reagent Optimization ★★★)
- Key finding: concentrator = #1 bottleneck. Ore variability + reactive reagent adjustment + poor filter instrumentation
- Key numbers: ~70% iron recovery vs. ~75% design benchmark + $50M/yr reagent spend
- CMMS confirmed: ELLIPS (3rd different CMMS across CLF)

Transcript 2: Concentrator & Pellet Plant Walkthrough (Visit 1)

- G2 fuzzy logic control system confirmed (SGS) for grinding control
- DCS from late 1990s — "central control operators"
- Pebble mill secondary capacity is a binding constraint — overgrinding wastes energy + causes filter problems
- Pelletizing detail: 14 balling drums, 7 lines, grate-kiln at 2,200°F, moisture explosion risk
- 2 dryers (one double capacity) — recently running both due to high-clay ore
- "The biggest challenge here is just the variability of the ore"
- Updated: TLD-07 (G2 confirmed), TLD-09 (pelletizing detail)

Transcript 3: Lunch Group Session

- ★ Richest transcript. 10+ participants: mining + dispatch + maintenance + safety + HR.
- Modular dispatch system confirmed — GPS, cycle times, shovel priorities. Molly = dispatch admin.
- PM vs reactive = ~70/30 — better than the steel sites!
- Real-time tire monitoring — temp, pressure, forces. TLD-19 moved to identified.
- Dispatcher bottleneck — weeks to learn, months to optimize. TLD-04 moved to identified ★★.
- Take-5 paper process — 50 cards/shift, supervisor manually enters. Strong group energy for digitization. → TLD-24 created.
- Paper production reporting — multi-day lag. → TLD-25 created.
- Operator scorecards — manual, from dispatch data. Overloading by 15%. → TLD-26 created. (Scorecard sketch after this list.)
- Onboard diagnostics fragmented — half a dozen OEM systems, fault codes not unified. TLD-02 evidence strengthened.
- Equipment history in ELLIPS — well-maintained, long history. Good foundation.
- Andrew Mullen confirmed as "program manager for AI projects across companies, position created Feb 1."
- New stakeholders: Dan Clarendon (safety/training, 25 yrs), Brad (production, 21 yrs), Molly (dispatch admin), Tyler Craig (mining eng), 30-yr maintenance planner
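The operator scorecard math is simple enough to automate straight off the Modular payload records — here as a sketch with an assumed 10% overload tolerance (the session's 15% overloads would flag under any reasonable rule):

```python
def overload_rate(payloads_t, rated_payload_t, tolerance=0.10):
    # Share of loads exceeding rated payload by more than `tolerance`.
    # The tolerance is an assumption; OEM policies vary by truck class.
    limit = rated_payload_t * (1 + tolerance)
    over = sum(1 for t in payloads_t if t > limit)
    return over / len(payloads_t) if payloads_t else 0.0

# e.g. overload_rate([310, 335, 372, 348], rated_payload_t=320) -> 0.25
```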

Transcript 4: Truck Shop Visit

- Fleet composition confirmed: 12-14 classes, 320-ton + 150-ton trucks, 4 shovels
- $70K/tire, $100K/chain set, $300K/wheel motor — massive consumable costs
- No barcode scanning in the warehouse — all manual ELLIPS entry. → TLD-30 created.
- Oil sampling program for wheel motors — existing condition monitoring
- Camera system on equipment — "doesn't give you much information"
- Fire suppression systems confirmed

Transcript 5: End of Day Wrap-Up

- Utilities/energy forecasting — power contract complexity, natural gas budget variance. → TLD-28 created.
- Environmental compliance — Brent: "dozens of compliance states in outdated spreadsheet or in my head." → TLD-27 created.
- HR overtime forecasting — manual, hard to predict. → TLD-29 created.
- Esri geospatial partnership — Thursday workshop planned with Esri for GIS/mining tools.
- Production planning model exists — "two clicks and run hours" tool for forecasting.
- Leadership confirmed the concentrator as #1 priority — "biggest spot in mine" for optimization.

Day 2 Mine Maintenance Summary (Transcript 10):

- Total initiatives: 48 (4 seeds, 44 identified, 0 validated)
- 4 new initiatives added: TLD-45 (Dispatch↔ELLIPS Integration), TLD-46 (Duty-Cycle Maintenance), TLD-47 (Fleet Capital Replacement), TLD-48 (OEM Auto-Import)
- 8 existing initiatives upgraded with mine maintenance evidence: TLD-01, TLD-02, TLD-13, TLD-19, TLD-35, TLD-36, TLD-39, TLD-40
- TLD-19 (Tire Management) upgraded to ★★★ — full lifecycle detail, 108 tires/yr × $70K ≈ $7.5M/yr, Bridgestone allotment, front→rear rotation
- TLD-02 (Heavy Mobile PdM) upgraded to ★★ — telemetry untapped, fuel burn as lifecycle metric, wheel motor on 4th rebuild
- TLD-01 (Ops-Maint Integration) upgraded to ★★★ — Modular ↔ ELLIPS disconnect confirmed, manual hours transcription = 4 hrs/week
- Key participants: Pete Austin (section mgr, 30 yrs), Chase Lincoln (reliability eng), John Lokowitz (pit electrical), George Beelon (scheduling)
- Two new opportunity themes:
  1. Fleet lifecycle intelligence (TLD-02, TLD-19, TLD-45, TLD-46, TLD-47) — tons vs hours, telemetry, capital planning (duty-cycle sketch below)
  2. Maintenance data automation (TLD-35, TLD-48, TLD-45) — eliminate manual duplication of OEM data into ELLIPS
- Next: remaining Day 2 sessions (if any), Day 3-5 workshops, validation, sizing
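The duty-cycle idea (TLD-46) in one function — trigger a PM when any usage meter hits its interval instead of calendar hours alone. All interval values are illustrative, not Tilden's actual PM intervals:

```python
def pm_due(engine_hours, tonnes_hauled, fuel_litres,
           hour_interval=500, tonne_interval=150_000, fuel_interval=60_000):
    # A lightly worked truck coasts past the hour trigger; a hard-cycled
    # one comes in early on tonnes hauled or fuel burn.
    usage = {
        "hours": engine_hours / hour_interval,
        "tonnes": tonnes_hauled / tonne_interval,
        "fuel": fuel_litres / fuel_interval,
    }
    meter, frac = max(usage.items(), key=lambda kv: kv[1])
    return frac >= 1.0, meter, round(frac, 2)

# e.g. pm_due(410, 162_000, 48_500) -> (True, 'tonnes', 1.08)
```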

Day 1 Summary (Pre-Mine Maintenance)

Day 1 Summary:

- Total initiatives: 30 (7 seeds, 23 identified, 0 validated)
- 7 new initiatives added from transcripts 2-5: TLD-24 through TLD-30
- 5 initiatives upgraded with significant new evidence: TLD-04, TLD-12, TLD-13, TLD-19, TLD-20
- Key systems confirmed: ELLIPS (CMMS), Modular (dispatch/GPS), G2/SGS (grinding control), DCS (late 1990s), real-time tire monitoring
- Two emerging opportunity bundles:
  1. Concentrator Optimization (TLD-06, 07, 08, 21, 22, 23) — $50M reagent anchor, ~70% recovery vs. ~75% design benchmark
  2. Mining Operations Quick Wins (TLD-24, 25, 26, 30) — paper digitization, immediate visibility
- Next: deep dives with mining ops/fleet maintenance, concentrator process engineers, lab/R&D, Esri workshop

Pre-Visit — 2026-03-09

  • Created initial seed registry with 20 initiatives from research + cross-site patterns
  • Adapted PRJ hypothesis table for mining context (removed PRJ-05, PRJ-08; reframed PRJ-02, PRJ-04, PRJ-07)
  • Added 8 new project candidates unique to mining
  • Key hypothesis: Haul truck fleet (TLD-02, 04, 19) and concentrator optimization (TLD-07, 08, 11) are the biggest mining-specific opportunities. Ore blend (TLD-06) is Terry Fedor's flagged priority.
  • Knowledge Capture / Virtual SME (TLD-14) aligns with leadership's #1 from Middletown readout

Site-Specific Context

Site profile:

- Products: iron ore pellets (fluxed "hemflux" — customized per customer BF specs)
- Key constraints: pit depth (increasing haul distance/cost), seasonal shipping (Jan-Mar dock closure), environmental (selenium, wetlands), 20-year mine plan expansion
- Staffing: ~990 employees, 24/7 operations, USW union
- Unique challenges: open-pit mining environment (weather, dust, mobile equipment safety); hematite operation using selective flocculation and amine flotation (historically dual hematite/magnetite 1990-2009; magnetite reserves now exhausted); remote Upper Peninsula location; MSHA-regulated (not OSHA)

Key stakeholders met:

| Name | Role | Relevance |
| --- | --- | --- |
| Site leader (name TBD) | Superintendent/Director | Open, technically candid, ran the Day 1 overview session |
| Ryan Korpela | Operations Manager | Confirmed — primary ops contact |
| Todd Davis | Mine Engineer | Led mine tour, deep mining knowledge. ~20 years experience. |
| Dan McGrath | Process Engineer | Metallurgical technician background, concentrator/pelletizing expertise. Key for TLD-07, 08, 21. |
| Dan Clarendon | Mine Ops / Safety-Training | 25 years mining + 4 years safety/training. Champion for TLD-24 (inspections), TLD-20 (safety). |
| Brad | Production | 21 years. Champion for TLD-25 (production reporting). Paper report pain. |
| Molly (Bradley?) | Dispatch Administrator | Manages Modular dispatch interface, network. Key for TLD-04 (fleet dispatch). |
| Tyler Craig | Mining Engineer | Since 2011. Engineering/mine planning. Key for TLD-26 (operator analytics). |
| 30-year maintenance planner (name TBD) | Maintenance Reliability & Planning | 30 years. Key for TLD-01, 02, 12 (maintenance workflows/PdM). |
| Brent | Environmental Manager | Key for TLD-27 (environmental compliance). "Dozens of compliance states in my head." |
| Steven Reed | Blue Band (vendor) sales account exec | External — Blue Band partnership. |
| Lab researcher (name TBD) | R&D / Process Design | "Helped design Tilden" — critical for TLD-21, 23 (concentrator optimization). |
| Kevin (last name TBD) | Train Scheduling / Railroad Operations | Spends 3-4 hours/day replanning train crews based on vessel schedule. Key for TLD-16. |
| JR (last name TBD) | Senior Ops/Maintenance Leader | Articulated cascading production optimization vision (forecast → production → reagents → shipping → maintenance). Key for TLD-15, TLD-36. |
| Andrew Mullen | Program Manager, AI Projects (corporate) | Position created Feb 1. Cross-site coordination. Confirmed at Tilden. |
| Brad Koski | Operations Manager (mine) | Key Day 2 Mine Ops participant. Pragmatic, receptive. Plans, coaching, soft skills focus. Champion for TLD-50 (plan deviation). |
| Jeff Domann | Pit Supervisor (blast crew + yard crew, 28 yrs) | ★ Exceptional domain expert. Conceptualized drill-blast optimization. Champion for TLD-05, TLD-53. |
| Lynn Casco | Mine Administrator | People + data entry focused. HR/contract questions, vacation entry, paper forms. Champion for TLD-52 (BLA contract assistant). |
| Melinda | Administrative Coordinator (mine ops, 400 people) | Communications, scheduling, data entry. Supports TLD-52 use case. |
| Gwen | Weekly Mine Planner | Creates weekly plans in Vulcan, daily map packets (PowerPoint). Key for TLD-15, TLD-50. |
| Greg Kavlak | Senior Mine Supervisor | Reviews plans with Gwen. Not present at session but referenced by Brad/Kevin. |