Confidential

Initiative Registry — Middletown Works

Purpose: Living tracker of all AI/digital initiatives identified during the Middletown site sprint.

Workflow:

- Day 1-2: Identify (status: identified)
- Day 2-3: Validate in deep conversations (status: validated or rejected)
- Day 4: Size — $ estimates, confidence, feasibility (status: sized)
- Day 5: Prioritize — matrix placement, sequence (status: prioritized)

Carry-forward from Cleveland: Cleveland identified 27 initiatives (CLV-01..CLV-27) across 8 projects. All 8 projects have validation questions for Middletown.

Last updated: 2026-04-16 (appendix preparation — no content changes, field evidence preserved as collected)


Hypotheses to Validate from Cleveland

These are projects already validated at Cleveland — test them here. See 03-consolidation/project-catalog.md for full detail.

| Project | Horizon | Cleveland Status | Key Question for Middletown |
| --- | --- | --- | --- |
| PRJ-01 Ops-Maint Integration | H1 | Validated — strongest signal, every stakeholder | Same ops-maint disconnect? Middletown uses "Teams" (homegrown CMMS), not Tabware — different integration surface than Cleveland. |
| PRJ-02 Scheduling / S&IOP | H3 | Validated — manual scheduling, $10-30M | What is Middletown's constraint? Finishing line scheduling may be the unique angle (multiple coating lines competing for substrate). |
| PRJ-03 PdM Platform | H1→H2 | Validated — multi-asset (bag house primary, scrubbing secondary, cranes tertiary) | What condition monitoring exists? What are the 5 most critical assets? Does Middletown have BOF bag house / scrubbing data like Cleveland? |
| PRJ-04 Quality & Yield | H2→H3 | Partial — slab cut already done ($3M/mo) | HIGHEST PRIORITY at Middletown. Full finishing chain = longest quality cascade. Through-process traceability degasser → caster → HSM → cold mill → coating. Automotive OEM surface quality. |
| PRJ-05 Cobble / Process Risk | H2 | Validated — cobbles + operator decision support | Does pair-cross HSM have different cobble dynamics? Same operator experience loss? L2 data available? |
| PRJ-06 Maint Workflow (copilot + procurement) | H1 | Validated — voice capture + procurement hell | Same corporate procurement policies. Test: does IAM workforce have same voice capture appetite? |
| PRJ-07 Logistics | H2 | Partial — coil handling 3-4x, rail car gaps | Different logistics with full finishing chain. Material flow from HSM → multiple finishing lines. |
| PRJ-08 Caster Chemistry | H2 | Validated — Prime Metals model unused | Dual-strand caster + vacuum degasser. Chemistry transitions with RH degasser may be different optimization problem. |

Master Summary Table

| ID | Initiative | Horizon | Project | Status | Value ($/yr) | Confidence | Complexity | Priority |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MDT-01 | Ops-Maintenance data integration | H1 | PRJ-01 | validated | $3-8M | High | Quick Win | |
| MDT-02 | Maintenance AI copilot (voice capture) | H1 | PRJ-06 | validated-w-caveats | $0.5-2M + data | Med | Quick Win | |
| MDT-03 | Procurement automation (conversational front-end) | H1 | PRJ-06 | validated | $1-3M | High | Quick Win | HIGH |
| MDT-04 | Through-process quality traceability (degasser → coating) | H2 | PRJ-04 | validated | $5-15M | High | Strategic | HIGH |
| MDT-05 | Coating line defect detection (Ametek cameras, electrogalv + galv) | H2 | PRJ-04 | validated | $2-8M | Med | Medium | |
| MDT-06 | BF 3 optimization (stove tenders + raw material — Palmer flagged) | H2 | PRJ-05 | identified | $3-10M | Med | Medium | HIGH |
| MDT-07 | Finishing line scheduling (multi-line competition for substrate) | H2 | PRJ-02 | seed | $3-10M | | Medium | |
| MDT-08 | PdM proof of value (fleet vehicles as proving ground — Dave preference) | H1→H2 | PRJ-03 | identified | $3-12M | Med | Quick Win→Expand | |
| MDT-09 | Cobble prediction (pair-cross HSM — R&D validated, IBA data available) | H2 | PRJ-05 | identified | $2-8M | Med | Medium | |
| MDT-10 | Caster chemistry optimization (with RH degasser) | H2 | PRJ-08 | seed | $2-8M | | Medium | |
| MDT-11 | Energy optimization (SunCoke integration, BF gas recovery) | H2 | new | seed | $2-5M | | Medium | |
| MDT-12 | Vacuum degasser process optimization | H2 | PRJ-04 | seed | $1-3M | | Medium | |
| MDT-13 | Safety incident trend analytics | H1 | new | validated | low-cost | High | Quick Win | HIGH |
| MDT-14 | HSM centerline tracking (vision AI) | H2 | PRJ-04 adjacent / new | identified | TBD | | Medium | |
| MDT-15 | Safety review database (LLM/RAG) | H1 | PRJ-06 adjacent | identified | $0.5-2M | | Quick Win | |
| MDT-16 | HSM rolling model — Siemens replacement | H2 | new | identified | $3-10M | | Strategic | |
| MDT-17 | X-ray QA for rolled steel | H2 | PRJ-04 | identified | $2-5M | | Medium | |
| MDT-18 | Root cause analysis platform | H1 | PRJ-01 adjacent | identified | $1-4M | | Quick Win | |
| MDT-19 | Computer vision QA — cold rolled line | H2 | PRJ-04 | identified | $2-6M | | Medium | |
| MDT-20 | ~~Part number cleanup~~ → absorbed into MDT-31 | | | absorbed | | | | |
| MDT-21 | QA investigation post-mortem knowledge base | H1 | PRJ-04 | validated | $1-4M | High | Quick Win | |
| MDT-22 | Fleet vehicle AI copilot (diagnostic + knowledge base) | H1 | PRJ-06 / new | validated | $0.5-2M | High | Quick Win | HIGH |
| MDT-23 | Pre-trip inspection digitization (mobile, scales to all equipment) | H1 | PRJ-06 adjacent | identified | low-cost | High | Quick Win | |
| MDT-24 | Surface inspection classifier enhancement (Ametek, cross-coil patterns) | H2 | PRJ-04 | validated | $3-8M | High | Medium | HIGH |
| MDT-25 | Through-process quality alerting (tundish/heat → downstream divert) | H2 | PRJ-04 | validated | $2-6M | Med | Medium | |
| MDT-26 | Mechanical property drift detection & quality SPC modernization | H1→H2 | PRJ-04 | validated | $1-3M | High | Quick Win | |
| MDT-27 | Customer complaint triage & tracking system | H1 | PRJ-04 adjacent | deprioritized | $0.5-1M | Med | Quick Win | |
| MDT-28 | Intra-plant coil logistics optimization (door status + route optimization) | H2 | PRJ-07 | validated | $2-5M | Med | Medium | HIGH |
| MDT-29 | ~~Oracle auto-reorder intelligence~~ → absorbed into MDT-31 | | | absorbed | | | | |
| MDT-30 | BF stove tender decision support (knowledge capture + raw material optimization) | H2 | PRJ-05 / new | identified | $3-10M | Med | Medium | HIGH |
| MDT-31 | Inventory intelligence & master data cleanup | H1 | PRJ-06 | validated | $2-5M | High | Quick Win | HIGH |
| MDT-32 | BOF endpoint prediction model (AI-assisted, R&D in-flight) | H2 | PRJ-08 / new | identified | $2-5M | Med | Medium | |
| MDT-33 | Cross-site caster reliability analytics & best practice sharing | H1 | PRJ-01 adjacent | identified | $3-8M | High | Quick Win | |
| MDT-34 | BF burden mix / raw material optimization (expert system) | H2 | PRJ-05 / MDT-30 adjacent | identified | $5-15M | Med | Medium | |
| MDT-35 | Turn Log intelligence — predictive pattern mining (20yr, 1.3M entries) | H1 | PRJ-01 / MDT-P16 | validated | $0.3-1M | High | Quick Win | |
| MDT-36 | Process Control Virtual SME (per-department knowledge agent) | H1→H2 | PRJ-01 / PRJ-06 / MDT-P16 | validated | $0.5-2M + risk | Med-High | Medium | HIGH |
| MDT-37 | Legacy system code documentation & modernization (Fortran/CRISP/PHP) | H1 | MDT-P16 | validated | $0.2-0.5M + risk | High | Quick Win | |

Status key: seed = from pre-visit research, not yet discussed | identified = emerged from field conversations | validated = confirmed by multiple stakeholders with data/evidence | sized = $ estimate attached with confidence | prioritized = placed on matrix, sequenced | absorbed = folded into another initiative | deprioritized = champion explicitly deferred | rejected = not viable or not relevant

Horizon key: H1 = 0-6 months "Bridge the Gap" | H2 = 6-18 months "Build the Foundation" | H3 = 18-36 months "Predict & Optimize"


Initiative Detail Cards

MDT-01: Ops-Maintenance Data Integration

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-01 |
| Status | validated |
| Source | Cleveland cross-pollination (CLV-01 — validated as #1 priority) |
| Field champion | Steve Longbottom (RIT) + John Houston (maintenance technology) — corrected at Day 5 readout. Brian Benning is champion for process control knowledge (MDT-P16), NOT maintenance data integration. |

Problem statement:

At Cleveland, operations tracks delays in SQL and maintenance tracks work in Tabware — different naming, no cross-reference. A 14-minute delay logged by operations cannot be matched to the maintenance work done to address it. At Middletown, the CMMS is a homegrown system called "Teams" (not Tabware) — the integration gap likely exists but through a different technical surface.

Proposed solution: Semantic matching layer linking operational delay reports to "Teams" work orders. Unified dashboard for delay attribution.
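
For illustration, a minimal sketch of the matching core, assuming free-text exports from both systems exist. The column names, example records, and the 72-hour pairing window are invented placeholders, not the actual "Teams" or delay-report schemas:

```python
# Hypothetical sketch: embed delay descriptions and work-order summaries,
# then pair each delay with the most similar work order in a time window.
import numpy as np
import pandas as pd
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

delays = pd.DataFrame({
    "delay_id": [101, 102],
    "ts": pd.to_datetime(["2026-04-10 06:14", "2026-04-11 14:02"]),
    "text": ["#2 coiler wrapper roll jammed, 14 min delay",
             "pickle line entry looper cable slack alarm"],
})
work_orders = pd.DataFrame({
    "wo_id": ["WO-88412", "WO-88431", "WO-88502"],
    "ts": pd.to_datetime(["2026-04-10 09:30", "2026-04-11 16:45", "2026-04-12 08:00"]),
    "text": ["Replaced wrapper roll bearing on coiler 2",
             "Adjusted looper tension, inspected slack cable",
             "Monthly PM on crane 14"],
})

d_emb = model.encode(delays["text"].tolist(), normalize_embeddings=True)
w_emb = model.encode(work_orders["text"].tolist(), normalize_embeddings=True)
sim = d_emb @ w_emb.T  # cosine similarity (embeddings are unit-normalized)

WINDOW = pd.Timedelta("72h")  # assume the fix follows the delay within 3 days
for i, d in delays.iterrows():
    in_window = (work_orders["ts"] >= d["ts"]) & (work_orders["ts"] <= d["ts"] + WINDOW)
    scores = np.where(in_window.to_numpy(), sim[i], -1.0)
    j = int(scores.argmax())
    if scores[j] > 0:
        print(d["delay_id"], "->", work_orders.loc[j, "wo_id"], f"score={scores[j]:.2f}")
```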

Day 3 validation — Brian Benning (Process Control):

Brian Benning (27 years, Section Mgr Process Control) independently confirmed the knowledge silo problem as "the biggest problem facing the plant." This is the third independent validation at Middletown (Dave Day 1, Dean Day 2, Brian Day 3). Brian and Chris Sizemore both landed on the same diagnosis: information siloed in single experts who get called at 2am and are at flight risk. Brian also confirmed Middletown has "the most inward-facing data structures" of CLF plants — each department owns its own data islands, and "you have to know how to get to the data, and there's only a few people" who can navigate them.

Day 3 validation — R&D team (cross-site perspective):

The R&D team confirmed the same pattern from a central support view: every time they support a new plant or department, they must discover the data landscape from scratch. Matt maintains an informal OneNote with data access cheat sheets per plant per department (20+ links). Even as a cross-site support agency, they find data discovery painful: "You have to dig, like put people in a torturing machine, and squeeze it out of them before you find out." Some departments are protective of data: "They have a paranoia that we're going to get them in trouble."

Day 5 readout validation:

Presented as one of 4 projects to Middletown leadership. Paul explicitly corrected the champion assignment: "Brian probably wouldn't be on there... That would probably be like Steve Longbottom from RIT, and then probably maintenance technology under John Houston." The team envisions extracting 20 years of Teams/SWAMI data into a queryable database with conversational LLM front-end. Dave validated the approach: "What I like is that anything you do is going to just deal with what we already have." Reception was positive but lower priority than Virtual SME and Procurement in Dave's view.

- Current state: Confirmed — information flows through people, not systems. Three independent validations + readout confirmation. Data exists but is fragmented across departmental silos with no plant-wide visibility. "Teams" CMMS is legacy but functional.
- Target state: Fact-based delay attribution, daily standup driven by data not anecdote.
- Value estimate: $3-8M/yr (Cleveland benchmark $2-5M + RCA scope)
- Confidence: High — three independent validations (Dave, Dean/Chris, Brian Benning)
- Data readiness: TBD — depends on "Teams" data export capability
- Systems: Teams (homegrown CMMS), operational delay reporting (confirm — same SQL system as Cleveland?)
- Complexity: Quick Win
- Dependencies: None

Comparison with other sites:

Cleveland: strongest signal across all 10 transcripts. Every stakeholder independently identified this gap. Middletown's AK Steel heritage may mean better or different integration. Key question: is the ops-maint disconnect as severe here?

Open questions:

- [ ] Does Middletown use the same operational delay reporting system as Cleveland?
- [ ] How mature is the asset hierarchy in "Teams"? Does it cover all critical assets?
- [ ] Are there any existing ops-maint integration tools from AK Steel era?


MDT-02: Maintenance AI Copilot

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-06 |
| Status | validated-with-caveats |
| Source | Cleveland cross-pollination (CLV-07) + Day 2 management session |
| Field champion | Dave Reinhold (sponsor for fleet version), West/Zach (truck section) |

Problem statement:

At Cleveland, technicians fix equipment and move on with no documentation. Paper PM sheets come back crumpled and grease-covered. At Middletown, management expressed skepticism about behavior change for plant maintenance — but STRONG enthusiasm for the fleet vehicle version (MDT-22).

Day 2 validation — management session:

Dave and management were polite but skeptical about copilot for plant maintenance: "The technology's solid. You're dealing with people... We give our maintenance techs a detail pre-plan with all the tools they need to do a job and they still walk out there, scope it out, drink their coffee, go back." Chris asked: "Is it going to change behavior?" The group acknowledged it helps with hydraulic/electrical diagnosis and when people "get stumped," but doubted daily adoption. Toledo's QR code scanning on assets was cited as a CLF precedent.

However: The fleet vehicle copilot (MDT-22) received unanimous enthusiasm — proving the concept on simpler equipment is the Middletown entry point.

Proposed solution: Voice-based work capture + AI diagnostic assistant. Start with fleet vehicles (MDT-22 as proving ground), then scale to plant assets. Technicians talk through repairs, AI structures into work order fields.

- Current state: Plant maintenance uses 30-page DMP packets (low engagement). Fleet section uses paper/whiteboard. No voice capture anywhere.
- Target state: Voice-first maintenance capture, no forms, AI-structured work orders. Fleet first, plant assets second.
- Value estimate: $0.5-2M direct + massive data quality uplift
- Confidence: Medium (plant), High (fleet)
- Data readiness: Low (plant — Teams/SWAMI is legacy), High (fleet — starting from zero = clean slate)
- Systems: Teams/SWAMI (plant CMMS), Oracle (purchasing), mobile devices
- Complexity: Quick Win (fleet), Medium (plant)
- Dependencies: IT policy on mobile devices, internet connectivity (shop has no internet, fiber ~1-2 months)

Open questions:

- [x] Is there appetite? → Yes for fleet, lukewarm for plant maintenance
- [ ] What is Wi-Fi coverage like at Middletown (especially finishing lines)?
- [ ] What is the IT policy on mobile devices for union workers?


MDT-03: Procurement Automation (Conversational Front-End)

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-06 |
| Status | validated |
| Source | Cleveland cross-pollination (CLV-08) + Day 2 management session — consensus |
| Field champion | Dave Reinhold, Chris, Chuck (management session consensus) |

Problem statement:

The buying experience at CLF is broken. When a maintenance technician needs a part, they face: (1) an unsearchable parts catalog where Oracle text search requires near-exact description match, (2) no visibility into whether other plants have the part in stock, (3) remote buyers who question routine purchases and loop approvals 2-3 times over 12+ months, (4) a $500 PO threshold that forces even trivial purchases through formal procurement, and (5) vendor markup workarounds (Napa at $25 for a $5 Amazon part) because searching the formal system is too painful. The result: people bypass the system entirely, buying off-catalog at markup, or simply taking parts from open stores without logging. ~$1B in parts inventory across the CLF footprint, and nobody can find what they need.

Day 2 evidence — management session:

"I need a sledgehammer. Boom — here's part numbers that are available. Do any of these work?" — Dave, describing the ideal conversational front-end. "Multiply that by 10 for the footprint... a billion dollars in stuff on the shelves." "It's fair to say there's consensus amongst pursuing a solution for purchasing and procurement." — Simee, confirming room alignment. Cross-site sharing example: "Cleveland reached out. They needed an oxygen hose for their BOF. I was like, hey dude, I don't have a 10 footer but I got a 16 footer."

Proposed solution: Conversational front-end for parts procurement — maintenance person says "I need a bearing, here's the info I have" → system searches across part numbers (fuzzy matching, tolerant of description inconsistency), finds matches, shows stock on shelf and at other CLF sites, places order. Voice-compatible. Replaces the painful Oracle text search with natural language. Works across Pilog master data and Oracle inventory.
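
A hedged sketch of the fuzzy-matching core behind such a front-end. The catalog, part numbers, and stock figures are invented; real data would come from the Pilog/Oracle sources named above:

```python
# Toy parts search: tolerant of word order, partial descriptions, and case,
# unlike an exact-text Oracle search. Data is illustrative only.
from rapidfuzz import fuzz, process, utils  # pip install rapidfuzz

catalog = {
    "P-004417": "HAMMER, SLEDGE, 10 LB, FIBERGLASS HANDLE",
    "P-019203": "HOSE, OXYGEN, 16 FT, BOF LANCE",
    "P-027781": "BEARING, SPHERICAL ROLLER, 120MM BORE",
}
stock = {
    "P-004417": {"Middletown": 3, "Cleveland": 0},
    "P-019203": {"Middletown": 1, "Cleveland": 0},
    "P-027781": {"Middletown": 0, "Cleveland": 6},
}

def search(query: str, limit: int = 3):
    # token_set_ratio + default_process handles casing and word order
    hits = process.extract(query, catalog, scorer=fuzz.token_set_ratio,
                           processor=utils.default_process, limit=limit)
    return [(key, desc, score, stock[key]) for desc, score, key in hits]

for part_no, desc, score, on_hand in search("I need a 10 lb sledge hammer"):
    print(f"{part_no}  {desc}  match={score:.0f}  stock={on_hand}")
```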

- Current state: Oracle procurement + manual buyer review. Cross-site visibility exists in Pilog but not inventory levels (phone calls for emergency parts). No conversational interface. Users bypass system entirely (Napa at markup, taking parts without logging).
- Target state: Voice-first parts search → intelligent matching → stock check across sites → streamlined ordering for known patterns.
- Value estimate: $1-3M/yr direct (reduced markup purchases, fewer emergency buys, buyer productivity) + velocity unlock
- Confidence: High — consensus from management session, same pain as Cleveland, concrete examples
- Data readiness: Medium — depends on MDT-31 (inventory cleanup) for reliable search results
- Systems: Oracle (ERP/purchasing), Pilog (master parts, cross-site), Teams/SWAMI (CMMS work order context), vendor catalogs
- Complexity: Quick Win (conversational front-end on existing data) → Medium (cross-site search + ordering)
- Dependencies: MDT-31 (Inventory Intelligence) is the prerequisite — procurement automation fails if the underlying inventory data is garbage (duplicates, wrong descriptions, wrong lead times). Corporate policy on approval thresholds ($500 limit).

Palmer filter note: Palmer explicitly warned MRO consolidation is "quicksand" / too long-term. The conversational front-end is NOT MRO consolidation — it's an interface layer that makes Oracle usable without replacing it. Position carefully: "we're not replacing Oracle, we're making it searchable."

Relationship to other initiatives:

Depends on MDT-31 (Inventory Intelligence & Master Data Cleanup) for clean underlying data. Complements MDT-02 (Maintenance AI Copilot) — voice-captured work orders can trigger parts search. The conversational search pattern is reusable across sites regardless of whether the backend is Oracle, Tabware, or a future unified system.

Open questions:

- [x] Same $500 threshold at Middletown? → Confirmed (truck guys)
- [x] Are purchasing agents on-site or remote? → Remote buyers, on-site stores manager (Sean)
- [x] Consensus? → YES — strongest room agreement of the session
- [ ] What APIs does Oracle expose for search/ordering? (IT question for Phil/Charles)
- [ ] Can Pilog be queried programmatically for cross-site parts lookup?

Day 5 readout validation:

Presented as one of 4 projects. Dave positioned this as the "quickest return" option: "Let's start thinking about that one over, look, give me some of this right now for sure. It's not as sexy, but give me this one now because I'm generating real, I'm returning money back." Paul was candid about the warehouse: "I would not take you to my warehouse because that's somewhat embarrassing." Multiple examples of duplicate inventory (Paul: "admittedly we have duplicate stuff") and unsanctioned satellite storage. Paul described the conversational demo and immediately saw value: "It would appear there's been some orders placed by Chris. You might want to go." Dave's self-funding cascade depends on this project generating quick ROI to create budget for the Virtual SME and other projects. Budget constraint critical: "I don't have 2 million for AI Innovation in my budget, so anything we go after has to pay for itself pretty quick."


MDT-04: Through-Process Quality Traceability

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-04 |
| Status | validated |
| Source | Middletown site profile + Day 2 management session (Chuck, Chris, Palmer) + Day 3 quality deep dive |
| Field champion | Chuck (Quality Mgr), Chris (Sr Div Mgr Finishing/Automation), Alex (Quality Engineer) |

Problem statement:

Middletown has the longest process chain: vacuum degasser → caster → pair-cross HSM → pickling → 5-stand cold mill → electrogalv/aluminize/galvanize/anneal. A chemistry issue at the degasser or a surface defect at the caster may not be detected until post-coating inspection — potentially 6+ processing steps later. Automotive OEMs have near-zero tolerance for surface defects on exposed panels. When a defect is found downstream, quality teams must "restart the investigation" — tracing backwards through 6+ process steps to find the root cause. This investigation restart problem is a massive time sink and often fails to attribute the defect accurately. Current plant-wide quality loss is ~1% — double the historical 0.5% from 5-6 years ago, driven by equipment aging, workforce turnover, and a union retirement wave that took out 30+ year veterans.

Day 2 validation evidence:

Chuck (Quality Mgr) confirmed the investigation restart problem: when a defect shows up at the coating line, they have to go back through every process step to find where it originated. The Ametek surface inspection cameras exist but accuracy on key defect types is ~60%. Chris confirmed the gap between process steps: data exists at each stage but is not connected. Palmer specifically flagged surface inspection as a cross-site opportunity he wants to see in the readout. The management consensus was strong — this is the single highest-value opportunity unique to Middletown.

Day 3 validation evidence — quality deep dive (Chuck, Alex, Eric):

Quality loss quantified: ~1% plant-wide (double historical 0.5%), ~5% rejection starting at the caster alone. Each process stage adds potential quality loss. Dearborn closure means capacity is tight — every remake is costly, no slack to absorb quality misses. GERS (Generative Expert Routing System) manages recipes but is a legacy black box (punch cards → mainframe → web-based, original developer gone). Cross-plant recipe sharing for unfamiliar grades is manual (phone calls, email). Chuck described two future sub-workstreams within through-process quality:

Smart disposition: ~50 hold types, ~100 holds/day, many trivially releasable (e.g., coil too wide → person manually checks if next line can trim → sends it on). Quinn Logic at Burns Harbor automates this. AI could release trivial holds and flag only real issues — freeing significant manual review time.

Mechanical property prediction: When unfamiliar grades or gauge ranges come in, engineers manually query Access databases, make charts in Minitab, and guess. Key variables: chemistry, coiling temperature, cooling pattern, transfer bar thickness. Each finishing operation multiplies the variable space. Forward-looking model ("given these inputs, what properties will we get?") would reduce trial-and-error remakes.
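
To make the forward-looking model concrete, a minimal sketch on synthetic data. The feature set mirrors the key variables Chuck named, but the columns, relationships, and numbers are invented; a real model would train on the tensile-test database:

```python
# Illustrative property model: given chemistry and processing inputs,
# predict a mechanical property (here, a synthetic yield strength in MPa).
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "carbon_pct": rng.uniform(0.02, 0.25, n),
    "mn_pct": rng.uniform(0.2, 1.8, n),
    "coiling_temp_c": rng.uniform(550, 750, n),
    "transfer_bar_mm": rng.uniform(28, 40, n),
})
# toy physics: strength rises with C and Mn, falls with coiling temperature
y = (300 + 900 * X["carbon_pct"] + 60 * X["mn_pct"]
     - 0.3 * (X["coiling_temp_c"] - 550) + rng.normal(0, 12, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = HistGradientBoostingRegressor().fit(X_tr, y_tr)
mae = np.abs(model.predict(X_te) - y_te).mean()
print(f"holdout MAE: {mae:.1f} MPa")
```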

Proposed solution: End-to-end traceability from heat chemistry (degasser settings) through every processing step to final coating quality. AI models correlating upstream process parameters to downstream defects. Phased approach:

1. Ametek classifier improvement (MDT-24) as concrete first deliverable
2. Through-process quality alerting (MDT-25) for tundish/heat-level defect propagation
3. Drift detection (MDT-26) for within-spec mechanical property monitoring
4. Smart disposition (future) — automated hold release for trivial holds
5. Property prediction (future) — recipe optimization for unfamiliar grades

- Current state: Ametek cameras exist but ~60% accuracy on key defect types. Quality data captured at each process step but NOT connected across steps. Investigation restart = manual forensic work. 1% quality loss (2x historical). ~50 hold types, 100/day manual disposition.
- Target state: Automated heat→slab→coil→finished product genealogy with defect attribution across process steps. Ametek classifier improved to >90% accuracy. Smart disposition for trivial holds. Property prediction for unfamiliar grades.
- Value estimate: $5-15M/yr (0.3-0.5% yield improvement + defect reduction + customer claim prevention + disposition automation + reduced remakes)
- Confidence: High — validated by Quality Mgr, Finishing/Automation leader, corporate (Palmer), and quality engineering team (Day 3)
- Data readiness: Medium — data exists at each step, but cross-step linkage is the gap. Ametek cameras produce data that could be immediately improved. 16-17k tensile tests/month. Thousands of research reports.
- Systems: L2, caster data, HSM data, cold mill data, coating line data, Ametek SIS cameras, quality systems, GERS (routing), mechanical testing database, Access/Minitab/JMP
- Complexity: Strategic (full vision), but decomposes into Quick Win → Medium → Strategic sub-initiatives
- Dependencies: Data integration across process steps. MDT-24 (Ametek classifier) → MDT-25 (alerting) → MDT-26 (drift detection) as sequenced deliverables.

Comparison with other sites:

Cleveland had partial quality/yield validation but slab cut already captured the biggest win. Middletown's finishing chain makes this site the best candidate for through-process quality. Burns Harbor has Quinn Logic — a smart disposition system that automates many of the hold releases Middletown does manually. Burns Harbor's plate quality may be a comparable opportunity. Palmer wants surface inspection as a cross-site play. Cross-plant recipe sharing is a universal pain (manual, slow).

Open questions:

- [x] Does a Surface Inspection System (SIS) exist? What vendor? YES — Ametek, ~60% accuracy on key defect types
- [x] What is quality loss? → ~1% plant-wide (double historical 0.5%), ~5% at caster exit
- [x] How many mechanical tests? → 16-17k tensile tests/month
- [ ] Can Middletown trace a coil back to a specific heat and degasser run?
- [ ] What quality data is captured at each processing step?
- [ ] What are the top 3 automotive customer quality complaints?
- [ ] What is the Ametek camera model and data output format? (needed for classifier retraining)
- [ ] What is the GERS code base? Language, size, accessibility? (for legacy code documentation)
- [ ] What is Quinn Logic at Burns Harbor exactly? Can it be adapted to Middletown?


MDT-05: Coating Line Defect Detection (Ametek Classifier Improvement)

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-04 |
| Status | validated |
| Source | Middletown site profile + Day 2 management session (Chuck confirmed Ametek cameras, ~60% accuracy) |
| Field champion | Chuck (Quality Mgr), Chris (Sr Div Mgr Finishing/Automation) |

Problem statement:

Middletown has Ametek surface inspection cameras on the coating lines. However, the cameras are only ~60% accurate — they generate so many false positives that operators lose trust in the system and tend to ignore alerts. This is a classic "boy who cried wolf" failure mode: the inspection technology exists, but its low accuracy makes it operationally useless. Automotive OEMs have near-zero tolerance for surface defects on exposed panels, making accurate detection critical.

Day 2 validation evidence:

Chuck confirmed Ametek as the vendor. The ~60% accuracy figure was discussed in the management session. Chris (Finishing/Automation) confirmed the trust erosion problem — operators have learned to discount the alerts. Palmer flagged surface inspection as one of the specific use cases he wants to see in the corporate readout, positioning this as a cross-site opportunity (other CLF sites likely have similar Ametek installations with similar accuracy problems).

Proposed solution: Retrain/replace the Ametek classifier using the existing camera hardware. Use historical images (correctly labeled by quality team review) to build a higher-accuracy defect detection model. Target: >90% accuracy to restore operator trust and enable automated quality gating. This is a concrete, bounded AI project with clear before/after metrics.
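
A minimal fine-tuning sketch of the retraining idea, assuming a folder of quality-team-labeled Ametek frames organized one directory per defect class. The path, class layout, and hyperparameters are hypothetical (the actual Ametek export format is an open question below):

```python
# Fine-tune a pretrained CNN head on labeled surface-defect images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# hypothetical layout: labeled_ametek/<defect_class>/<image>.png
ds = datasets.ImageFolder("labeled_ametek/", transform=tfm)
loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(ds.classes))
opt = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)  # train the new head only
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```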

- Current state: Ametek cameras installed, ~60% accuracy, operators ignore alerts.
- Target state: >90% defect classification accuracy. Operators trust the system. Automated quality holds on flagged coils.
- Value estimate: $2-8M/yr (defect reduction + customer claim prevention + reduced manual inspection labor)
- Confidence: High — cameras exist, problem is validated, Palmer wants it for cross-site
- Data readiness: Medium-High — Ametek cameras are generating images. Need to validate historical image archive and labeling.
- Systems: Ametek SIS cameras, coating line process data, quality management system
- Complexity: Medium — bounded ML problem (image classification) with existing hardware
- Dependencies: Through-process traceability (MDT-04) for root cause attribution. Access to Ametek image data/API.

Why this is a strong "laser strike" candidate (Palmer filter):

1. Bounded problem: Improve a specific classifier, not build a platform
2. Cross-site scalable: Other CLF sites likely have same Ametek cameras with same accuracy issues
3. Quick ROI: Existing hardware, no new sensors, measurable accuracy improvement
4. Palmer asked for it: Surface inspection was one of his specific examples

Open questions:

- [x] What inspection systems exist on each coating line? Ametek cameras, ~60% accuracy
- [ ] What is the Ametek camera model and image data format?
- [ ] How many labeled images exist (true positives vs. false positives)?
- [ ] What are the top defect types and their frequency?
- [ ] What is the annual cost of automotive quality claims?
- [ ] Do other CLF sites use the same Ametek system? (cross-site scaling)


MDT-06: BF 3 Optimization (Stove Tender Decision Support — Palmer Flagged)

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-05 |
| Status | identified — Palmer flagged as corporate priority |
| Source | CLF public statements + Day 2 management session (Palmer specifically requested BF stove optimization) |
| Field champion | TBD — needs BF operations leader |

Problem statement:

BF 3 (1953, relined 2021) is operating "pre-retirement." The $500M hydrogen/DRI replacement was cancelled. CLF has explicitly stated AI-driven optimization at Middletown as the alternative. Palmer specifically flagged "stove tenders" as a concrete AI opportunity — BF stove management involves complex thermal cycling decisions that are currently experience-dependent. With experienced stove tenders retiring, decision support for stove optimization is both a knowledge capture and efficiency opportunity. This is cross-site scalable (every CLF BF has stoves), which is why Palmer prioritized it.

Day 2 validation (Palmer):

In the management session, Palmer explicitly named BF stove optimization as one of the use cases he wants to see in the corporate readout. He framed it as: (1) proven AI use case in steel industry, (2) cross-site scalable to every BF in CLF, (3) knowledge capture for retiring stove tenders, (4) concrete enough for quick ROI demonstration. Dave Reinhold confirmed BF 3 has instrumentation from the hydrogen injection trial. See also MDT-30 (Stove Tender Decision Support) for the specific stove angle.

Proposed solution: Two-track approach: (1) Stove tender decision support (MDT-30) as the bounded "laser strike" — AI advisor for stove cycling decisions, targeting fuel efficiency and stove life. (2) Broader BF optimization as the H2 expansion — thermal state prediction, burden distribution, fuel rate optimization.

- Current state: TBD — validate instrumentation and historian data. Hydrogen injection trial (May 2023) infrastructure may still be in place.
- Target state: Predictive thermal model enabling proactive BF management + stove tender decision support tool.
- Value estimate: $3-10M/yr (fuel rate + hot metal quality + availability + stove life extension)
- Confidence: Medium — Palmer corporate backing, proven industry approach, pending data validation
- Data readiness: TBD — BF 3 had hydrogen injection trial, suggesting instrumentation exists. Need to validate historian depth.
- Systems: Historian (TBD), L2 BF control, stove control system
- Complexity: Medium
- Dependencies: Historian data access, identification of BF operations champion

Palmer filter alignment: This is one of Palmer's explicitly named priorities for the corporate readout. The stove tender angle specifically satisfies his criteria: cross-site scalable (every BF has stoves), quick ROI potential, and knowledge capture for workforce transition.

Open questions:

- [ ] What instrumentation exists on BF 3?
- [ ] What historian system captures BF data? How deep is the history?
- [ ] What fuel rate and hot metal quality KPIs are tracked?
- [ ] Is the hydrogen injection trial infrastructure still in place?
- [ ] Who are the experienced stove tenders? How many years to retirement?
- [ ] What stove cycling decisions are currently made manually?


MDT-07: Finishing Line Scheduling

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-02 |
| Status | seed |
| Source | Middletown site profile — multiple finishing lines competing for cold-rolled substrate |
| Field champion | TBD |

Problem statement:

Middletown has electrogalvanizing, hot-dip galvanizing, aluminizing, annealing, and temper mill lines, all consuming cold-rolled substrate from the same 5-stand mill. Scheduling which coils go to which finishing line, in what sequence, and when, is likely manual and complex. Automotive delivery deadlines add time pressure.

Proposed solution: AI-assisted finishing line scheduling that optimizes for customer delivery dates, line changeover minimization, and substrate availability.
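
As a toy illustration only: a real scheduler would be a constraint-programming or MIP model over multiple lines, substrate availability, and changeover matrices. The greedy batching below, and all of its data, are invented:

```python
# Greedy sequencing on one finishing line: batch coils by coating to cut
# changeovers, break ties by due date, then count resulting changeovers.
coils = [
    {"coil": "C1", "coating": "galvanize", "due_day": 3},
    {"coil": "C2", "coating": "aluminize", "due_day": 1},
    {"coil": "C3", "coating": "galvanize", "due_day": 2},
    {"coil": "C4", "coating": "aluminize", "due_day": 4},
]
coils.sort(key=lambda c: (c["coating"], c["due_day"]))
changeovers = sum(1 for a, b in zip(coils, coils[1:]) if a["coating"] != b["coating"])
print([c["coil"] for c in coils], "changeovers:", changeovers)
```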

- Current state: TBD — validate current scheduling process (Excel? ERP module? tribal knowledge?)
- Target state: Optimized daily/weekly finishing schedule with constraint-aware sequencing.
- Value estimate: $3-10M/yr (throughput + delivery reliability + changeover reduction)
- Confidence: — (pending validation)
- Complexity: Medium
- Dependencies: Understanding of current scheduling tools and constraints

Open questions:

- [ ] How is finishing line scheduling done today?
- [ ] What are the changeover times between product types on each line?
- [ ] How often are schedules disrupted by upstream delays?


MDT-08: PdM Proof of Value (Fleet Vehicles as Proving Ground)

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | PRJ-03 |
| Status | identified — Dave Reinhold preference |
| Source | Day 2 management session (Dave proposed fleet as PdM starting point) + Truck section validation |
| Field champion | Dave Reinhold (sponsor), West (Truck Section Mgr) |

Problem statement:

PdM on production assets (BOF, cranes, etc.) requires complex sensor integration, union coordination, and operational disruption risk. Dave Reinhold's preference is to start with fleet vehicles — trucks, cranes, mobile equipment — as a simpler proving ground where: (1) assets are standardized, (2) OBD/telematics data is readily available, (3) failure modes are well-understood, (4) there's no production disruption risk from sensor installation. The truck section currently runs on paper and whiteboard — Mitchell software license expired — so there's zero digital baseline to beat.

Day 2 validation evidence:

Dave explicitly suggested fleet as the PdM starting point in the management session. The truck section workshop confirmed the opportunity: all maintenance is reactive, no digital work order system, pre-trip inspections are pencil-whipped, and Mitchell (the former fleet software) license was allowed to expire. West (Truck Mgr) confirmed 15+ trucks in fleet. The gap between current state (paper/whiteboard) and basic digital is so large that even rudimentary telematics + digital work orders would be transformative.

Proposed solution: Start PdM proof of value with fleet vehicles: install telematics on trucks, digitize work orders, build basic predictive models on engine/transmission/brake data. Use fleet success to build organizational credibility for expanding PdM to production assets.
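
A sketch of the "basic predictive alerts" tier: simple threshold rules over telematics snapshots, before any ML. Fields and thresholds are illustrative, pending whatever OBD-II/telematics provider is chosen:

```python
# Rule-based alerts over hypothetical fleet telematics snapshots.
trucks = [
    {"id": "T-07", "engine_hours": 4120, "oil_life_pct": 12, "brake_pad_mm": 4.5},
    {"id": "T-11", "engine_hours": 1980, "oil_life_pct": 61, "brake_pad_mm": 9.0},
]
RULES = [
    ("oil change due", lambda t: t["oil_life_pct"] < 15),
    ("brake inspection", lambda t: t["brake_pad_mm"] < 5.0),
    ("major service interval", lambda t: t["engine_hours"] > 4000),
]
for truck in trucks:
    for label, rule in RULES:
        if rule(truck):
            print(f'{truck["id"]}: {label}')
```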

- Current state: Paper-based. Mitchell software expired. Pre-trip inspections pencil-whipped. Zero predictive capability.
- Target state: Telematics-equipped fleet with digital work orders and basic predictive alerts (engine hours, oil life, brake wear).
- Value estimate: $0.5-1M/yr direct (fleet), but strategic value is proving PdM concept for $3-12M/yr production asset expansion.
- Confidence: Medium — Dave wants it, truck section confirmed the gap, but fleet is not the highest $ opportunity.
- Data readiness: Low — no digital system today. Would need telematics installation + CMMS for fleet.
- Systems: None currently. Would need fleet telematics + basic CMMS or mobile app.
- Complexity: Quick Win (fleet) → Expand (production assets)
- Dependencies: Budget for telematics hardware, truck section buy-in (strong per Day 2)

Strategic note: Dave sees fleet as a "safe" PdM bet — no production risk, simpler assets, faster proof. This is a valid de-risking strategy but must be positioned carefully: fleet PdM alone doesn't demonstrate value at the scale Palmer wants to see. Frame as "Phase 0" that feeds into the broader PRJ-03 multi-asset PdM platform.

Open questions:

- [ ] How many vehicles in the fleet? What types?
- [ ] Is OBD-II port accessible on all trucks?
- [ ] What is the current maintenance cost per vehicle per year?
- [ ] What are the top 3 failure modes (engine, transmission, brakes, hydraulics)?
- [ ] Would West accept a telematics pilot on a subset of trucks?


MDT-09 through MDT-12: Additional Seeds

Detail cards to be populated during visit. Summary in master table above.

  • MDT-09: Cobble prediction — ENRICHED (Day 3 R&D): Cleveland HSM cobbles "by far worst in the company." R&D now starting Middletown cobble work. IBA data (millisecond-level) + L3 data available. Cleveland furnace data is a "black hole." Key insight from R&D: "the fix for the last piece is the hurt for the next piece" — each slab reacts differently to mill settings. Scalable: cobbles across all hot mills = enormous aggregate cost. R&D is the right champion team. Status upgraded from seed to identified.
  • MDT-10: Caster chemistry — RH vacuum degasser adds unique optimization dimension.
  • MDT-11: Energy optimization — SunCoke partnership, BF gas recovery.
  • MDT-12: Vacuum degasser optimization — Cycle time, alloy addition, vacuum control.

MDT-13: Safety Incident Trend Analytics

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | New (potential PRJ-09 or standalone quick win) |
| Status | validated — multi-stakeholder confirmation (Dave, Palmer, Eric Archer) |
| Source | Dave Reinhold — unprompted Day 1 proposal + Day 2 Palmer/management endorsement |
| Field champion | Dave Reinhold (committed to data access, already briefed Eric Archer corporate safety) |

Problem statement:

Middletown has 5 years of safety incident data in their reporting system. 80% of the data fields captured during incident reporting are not actively analyzed. Trend identification (day of week, demographics, seniority, consecutive hours, weather conditions, department) is done manually when a leader has a "gut feeling." Dave suspects Tuesday is the highest-risk day — possibly correlated with Hot Mill down day and/or first day back from long weekend for one crew rotation — but has no analytical confirmation.

Day 2 validation evidence:

Palmer endorsed the safety analytics idea in the management session, adding a specific variable: "consecutive days worked" — he wants to know if there's a correlation between how many consecutive days someone has worked and incident frequency. Palmer sees this as cross-site scalable (every CLF site has the same safety reporting system). Eric Archer (corporate safety) has been briefed by Dave and is supportive. Dave shared the scale context: when a safety pattern is identified, the response can be massive — he cited a recent 550-person safety training triggered by a pattern they spotted manually. If AI can identify these patterns earlier and with statistical confidence, it could reduce both incidents AND the cost of reactive training campaigns.

Proposed solution: AI-driven analysis of 5 years of incident data. Correlation analysis across temporal, demographic, environmental, and operational variables. Specifically include Palmer's "consecutive days worked" variable. Deliver actionable trend report with statistical validation of patterns.
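
For instance, the Tuesday hunch reduces to a one-line statistical test once the export exists. A sketch assuming a CSV export with an incident_date column (file name and schema are placeholders); a production analysis would normalize by hours worked per day rather than assume uniform exposure:

```python
# Test whether incidents are uniformly distributed across days of the week.
import pandas as pd
from scipy.stats import chisquare

incidents = pd.read_csv("incident_export.csv", parse_dates=["incident_date"])
days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
by_day = (incidents["incident_date"].dt.day_name()
          .value_counts()
          .reindex(days, fill_value=0))

stat, p = chisquare(by_day)  # H0: incidents uniform across days
print(by_day.to_string())
print(f"chi-square={stat:.1f}, p={p:.4f}")
# Palmer's variable works the same way: bucket by consecutive-days-worked
# (derived from schedule data if not captured directly) and test whether
# incident rate per exposure-hour rises with the bucket.
```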

- Current state: Rich data, zero analytics. Manual hypothesis testing only when a leader has an intuition. 550-person training campaigns triggered reactively.
- Target state: Quarterly trend reports with validated patterns. Leading indicators integrated into crew briefings. Proactive training targeting based on statistically validated risk factors.
- Value estimate: Low direct $ — but highest strategic value as trust-building exercise AND Palmer-endorsed cross-site play. Safety is KPI #1 for Dave. Delivering here opens every other door.
- Confidence: High — data exists, Dave owns it, Palmer endorsed it, Eric Archer (corporate) supportive.
- Data readiness: High — 5 years, plant-level data is "pretty solid" per Dave and corporate safety contact.
- Systems: Safety reporting system / incident tracker (access committed by Dave)
- Complexity: Quick Win — "relatively easy to do, relatively speaking" per Dave
- Dependencies: Data access (Dave committed), department number mapping (differs from other sites)

Why this matters strategically: This is NOT the highest-value project in dollar terms. But it is now validated at three levels:

1. Plant GM (Dave) — proposed it, owns it, will champion results
2. Corporate engineering (Palmer) — endorsed it, wants consecutive-days variable, sees cross-site potential
3. Corporate safety (Eric Archer) — briefed and supportive

The 550-person training example quantifies the operational impact: if AI can identify the pattern 3 months earlier, it prevents both incidents AND the massive reactive training cost. This positions safety analytics as a trust-building AND cost-saving initiative.

Comparison with other sites:

No equivalent at Cleveland. Palmer's endorsement positions this for cross-site rollout. Every CLF site has the same safety reporting system → same analytics can be applied.

Open questions:

- [x] Is Dave willing to share the data? YES — committed Day 1
- [x] Is corporate supportive? YES — Palmer endorsed, Eric Archer briefed
- [ ] What are the data fields? (date, time, day, department, age, seniority, consecutive days, hours, severity, type, weather, etc.)
- [ ] Can we get a data export by Day 3?
- [ ] Department numbers: unique to Middletown or shared nomenclature?
- [ ] Does the reporting system capture "consecutive days worked" directly, or must it be derived from schedule data?


MDT-14: HSM Centerline Tracking (Vision AI)

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-04 adjacent / new |
| Status | identified |
| Source | Day 1 field conversation — Middletown sprint |
| Field champion | TBD |

Problem statement:

On the hot strip mill, maintaining the strip centerline through the rolling stands is critical for dimensional consistency, edge quality, and cobble prevention. On the pair-cross HSM (Middletown-specific technology), centerline drift may have different dynamics than conventional HSMs. Current control is likely model-based (Siemens L2) with limited closed-loop feedback from direct visual observation.

Proposed solution: Camera-based vision AI tracking the strip centerline in real time through the HSM rolling stands. Provides ground-truth feedback for steering correction, reduces cobble risk from off-center threading, and generates data for process model improvement.

- Current state: TBD — validate what centerline monitoring exists today.
- Target state: Real-time vision-based centerline tracking with closed-loop or advisory feedback to operators.
- Value estimate: TBD — link to cobble reduction, yield, and edge trim savings.
- Confidence: — (pending validation)
- Data readiness: TBD — requires camera infrastructure assessment.
- Systems: HSM L2, camera installation (TBD), historian
- Complexity: Medium
- Dependencies: Camera installation feasibility, L2 integration pathway

Open questions:

- [ ] What centerline monitoring exists today on the pair-cross HSM?
- [ ] How often do threading / steering issues cause cobbles or edge defects?
- [ ] Is camera access feasible in the roughing and finishing stands environment?
- [ ] Does the Siemens steering model already use optical feedback or is it purely sensor-based?


MDT-15: Safety Review Database (LLM/RAG)

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-06 adjacent |
| Status | identified |
| Source | Day 1 field conversation — Middletown sprint |
| Field champion | TBD |

Problem statement:

Safety reviews, hazard assessments, lock-out/tag-out procedures, and incident reports accumulate over decades and exist as unstructured documents — PDFs, Word files, paper scans. When a technician faces an unfamiliar task or a near-miss occurs, finding relevant prior safety reviews is a manual, time-consuming process. Knowledge of past incidents is not systematically surfaced.

Proposed solution: LLM-powered RAG (Retrieval-Augmented Generation) system over the safety document corpus. Technicians query in plain language: "What are the safety requirements for working on BOF vessel tilt drive?" and get consolidated answers with source citations from the actual safety review documents.
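
A minimal sketch of the retrieval half of such a system. The generation step (an LLM answering over the retrieved chunks, with citations) is omitted, and the document snippets are invented:

```python
# Embed safety-document chunks and retrieve the best matches for a query.
import numpy as np
from sentence_transformers import SentenceTransformer

chunks = {
    "SR-2019-114.pdf": "BOF vessel tilt drive work requires lockout of the main drive ...",
    "LOTO-0087.pdf": "Lock-out/tag-out procedure for coating line entry section ...",
}
model = SentenceTransformer("all-MiniLM-L6-v2")
ids = list(chunks)
emb = model.encode([chunks[i] for i in ids], normalize_embeddings=True)

def retrieve(question: str, k: int = 2):
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = emb @ q
    top = np.argsort(scores)[::-1][:k]
    return [(ids[i], float(scores[i])) for i in top]

print(retrieve("safety requirements for working on BOF vessel tilt drive"))
```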

- Current state: TBD — validate where safety documents live and what access looks like.
- Target state: Searchable, queryable safety knowledge base accessible from mobile devices on the plant floor.
- Value estimate: $0.5-2M (incident reduction, compliance efficiency, onboarding speed for new technicians)
- Confidence: — (pending validation of document corpus quality)
- Data readiness: TBD — depends on whether safety docs are digitized
- Systems: Document management system (TBD), mobile access
- Complexity: Quick Win (if documents are digitized)
- Dependencies: Document digitization status, IT approval for LLM deployment

Open questions:

- [ ] Where are safety review documents stored today? SharePoint? Paper?
- [ ] How many safety review documents exist? How far back?
- [ ] Is there an existing permit-to-work or safety management system?
- [ ] What mobile device / connectivity situation exists on the plant floor?


MDT-16: HSM Rolling Model — Siemens Replacement

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | New — PRJ-10 candidate |
| Status | identified |
| Source | Day 1 field conversation — Middletown sprint |
| Field champion | TBD |

Problem statement:

The hot strip mill operates under a Siemens-supplied L2 process model that controls rolling schedules, pass reductions, temperature profiles, and finishing parameters. Siemens models are black-box, expensive to update, and not adaptable to site-specific process improvements or grade mix changes without vendor involvement. Any model update requires a Siemens engagement.

Proposed solution: Custom AI/ML rolling model to replace or augment the Siemens L2 model. Trained on Middletown's own historical rolling data (historian + L2 setpoints + quality outcomes). Gives Cleveland-Cliffs full ownership, adaptability, and the ability to tune the model for new automotive grades without vendor dependency.
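
The "validated against Siemens baseline before cutover" step could start as simple shadow-mode scoring: log the candidate model's prediction and the Siemens setpoint per coil, then compare each against actuals. A sketch with invented numbers and placeholder column names:

```python
# Shadow-mode comparison: which model tracks measured roll force better?
import pandas as pd

log = pd.DataFrame({  # hypothetical per-coil shadow log
    "siemens_pred_kn": [18200, 17650, 19010, 18500],
    "ml_pred_kn":      [18050, 17820, 18940, 18460],
    "actual_kn":       [18010, 17800, 18890, 18430],
})
for col in ("siemens_pred_kn", "ml_pred_kn"):
    mae = (log[col] - log["actual_kn"]).abs().mean()
    print(f"{col}: MAE = {mae:.0f} kN")
```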

- Current state: TBD — validate Siemens model version, contract status, and pain points with the current model.
- Target state: Custom rolling model with owned IP, continuously improved from operational data, validated against Siemens baseline before cutover.
- Value estimate: $3-10M/yr (yield improvement from better rolling schedules + reduced Siemens licensing/support costs + grade flexibility)
- Confidence: — (technically proven approach in industry; pending local validation)
- Data readiness: TBD — requires access to L2 setpoint data and rolling historian
- Systems: HSM L2 (Siemens), historian (Pi or Wonderware), quality systems
- Complexity: Strategic — requires deep process knowledge and L2 integration
- Dependencies: L2 data access, process engineer partnership, Siemens contract terms

Day 3 enrichment — Brian Benning (Process Control):

Brian confirmed the legacy code problem extends well beyond the Siemens HSM model. Multiple homegrown systems reside on isolated computers throughout the plant — even the original vendors cannot support some of them. "Nobody really knows how the process works exactly, so it's a bit of tinkering from both ends." Brian sees AI as a tool to reverse-engineer and document these codebases, making them modifiable and evolvable. This validates MDT-16 specifically AND positions a broader "legacy code modernization" theme applicable across CLF. Systems mentioned: TEAMS, a Siemens hot mill system, and other unnamed operational control systems.

R&D enrichment:

R&D confirmed the furnace data gaps feeding the HSM: "We think that hot mill data with lack of good information from the furnaces — we don't really have a good understanding of that initial slab coming out." Furnace soak profiles are poorly recorded, which limits any rolling model improvement — the input data is incomplete. Closing this data gap (furnace → HSM) is a prerequisite for meaningful model improvement.

Note: This is strategically aligned with CLF's stated goal of replacing vendor dependency (hydrogen project cancellation → AI self-sufficiency narrative). Positions Vooban as a long-term IP partner, not a one-time vendor.

Open questions:

- [ ] What version of Siemens L2 model is running? What is the contract / support structure?
- [ ] What are the known limitations or failure modes of the current Siemens model?
- [ ] Is there internal process engineering capability to validate a replacement model?
- [ ] Does Pi/Wonderware capture L2 setpoints and actual rolling parameters at sufficient resolution?


MDT-17: X-Ray QA for Rolled Steel

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-04 |
| Status | identified |
| Source | Day 1 field conversation — Middletown sprint |
| Field champion | TBD |

Problem statement:

Subsurface defects in rolled steel (internal voids, laminations, inclusions) are not detectable by surface visual inspection but can cause failures at the customer's stamping or forming process. X-ray inspection systems exist for this purpose but their output — large volumes of scan data — may not be systematically analyzed or integrated into the quality traceability chain. Automotive OEMs have zero tolerance for structural defects in exposed or structural panels.

Proposed solution: AI-driven analysis of x-ray QA camera output to automatically classify defects by type, severity, and location, with linkage to the upstream process parameters (heat chemistry, caster settings, rolling schedule) that caused them. Enables root cause attribution and process correction.

- Current state: TBD — validate what x-ray QA cameras exist, on which lines, and how results are currently used.
- Target state: Automated defect classification with traceability to upstream causes. AI model flags anomalous patterns before product ships.
- Value estimate: $2-5M/yr (defect escape prevention + customer claim reduction + yield recovery through targeted rejection vs. blanket downgrade)
- Confidence: — (pending validation of camera infrastructure)
- Data readiness: TBD
- Systems: X-ray inspection cameras (TBD vendor), quality management system, historian
- Complexity: Medium
- Dependencies: Camera output data accessibility, quality system integration

Open questions:

- [ ] What x-ray QA camera systems exist? On which lines (HSM exit? Cold mill? Coating?)
- [ ] Are x-ray scan results stored digitally and accessible?
- [ ] How are defect decisions made today — manual review by quality tech?
- [ ] What is the annual cost of defect-related downgrades and customer claims?


MDT-18: Root Cause Analysis Platform

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-01 adjacent |
| Status | identified |
| Source | Day 1 field conversation — Middletown sprint |
| Field champion | TBD |

Problem statement:

When equipment fails or a quality issue occurs, root cause analysis is informal, inconsistent, and undocumented. Findings exist in people's heads or in unstructured work order notes. The same failure modes recur because nobody systematically captures what caused them or what fixed them. At Cleveland, the 2001 restructuring removed the dedicated reliability engineers who used to own this process — validate whether the same gap exists at Middletown (AK Steel heritage may mean different org structure).

Proposed solution: Structured RCA platform with AI-assisted cause tree generation. When a failure or quality event is logged, the system suggests probable root causes based on similar historical events, prompts the technician to confirm or correct, and builds a living knowledge base of failure patterns. Over time, recurring causes surface automatically and drive proactive action.

- Current state: TBD — validate RCA maturity at Middletown vs. Cleveland.
- Target state: Structured RCA for all significant events, AI-suggested cause trees, searchable failure pattern library.
- Value estimate: $1-4M/yr (recurring failure reduction + downtime prevention + accelerated troubleshooting)
- Confidence: — (pending validation — Middletown org history may differ from Cleveland)
- Data readiness: TBD — depends on "Teams" work order data quality and export capability
- Systems: Teams (homegrown CMMS, primary), historian (for correlating sensor patterns to failure events)
- Complexity: Quick Win to Medium
- Dependencies: "Teams" data quality, maintenance team buy-in

Comparison with Cleveland:

Cleveland's 2001 restructuring removed dedicated RCA capability entirely. Middletown's AK Steel heritage may have preserved more structure. Key question: does the ops-maint disconnect look as severe here, or did the Japanese process engineering culture from the Kawasaki partnership create better institutional habits?

Open questions:

- [ ] Is there a formal RCA process at Middletown today?
- [ ] Are failure modes documented in "Teams" work orders?
- [ ] Are there dedicated reliability engineers (unlike Cleveland where roles were merged)?
- [ ] What are the top 3 recurring failure modes at Middletown?


MDT-19: Computer Vision QA — Cold Rolled Line

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-04 |
| Status | identified |
| Source | Day 1 field conversation — Middletown sprint |
| Field champion | TBD |

Problem statement:

Surface defects on cold rolled steel (scratches, slivers, roll marks, scale pits, edge cracks) are difficult to catch consistently with manual visual inspection. At Middletown, where cold rolled substrate feeds directly into automotive-grade coating lines, a surface defect that passes cold mill inspection becomes an expensive customer claim after coating. Inspection is likely intermittent or operator-dependent, with no systematic defect classification or traceability to upstream cause.

Proposed solution: Computer vision system installed at the cold rolled line exit performing 100% surface inspection at line speed. AI classifies defects by type, severity, and location. Results are linked to coil identity and upstream process parameters (roll condition, reduction schedule, entry material) for root cause attribution. Defective coils can be diverted before entering the coating lines.

- Current state: TBD — validate what surface inspection exists on the cold mill today.
- Target state: Automated 100% surface inspection at cold mill exit, defect classification, upstream traceability, and divert logic before coating entry.
- Value estimate: $2-6M/yr (automotive claim reduction + yield recovery from targeted downgrade vs. blanket rejection + coating line protection)
- Confidence: — (well-proven technology in steel; pending site-specific data)
- Data readiness: TBD — requires camera infrastructure and integration with coil tracking
- Systems: Cold mill L2, coil tracking system, historian, quality management system
- Complexity: Medium
- Dependencies: Camera installation feasibility, coil ID linkage across process steps

Relationship to other initiatives:

Complements MDT-04 (through-process quality traceability) and MDT-17 (x-ray QA). Vision at cold mill exit catches surface defects; x-ray catches subsurface; together they provide full quality gate before coating. MDT-05 (coating line defect detection) is the downstream equivalent — this initiative is the upstream catch.

Open questions:

- [ ] Does a surface inspection system (SIS) exist on the cold mill today? What vendor?
- [ ] What are the top surface defect types on cold rolled product?
- [ ] What percentage of cold rolled coils are downgraded or returned due to surface issues?
- [ ] Is coil identity tracked continuously through cold mill and into coating lines?


MDT-20: ~~Part Number and Description Cleanup~~ → Absorbed into MDT-31

Status: absorbed — All scope, evidence, and Vooban reference implementation details moved to MDT-31 (Inventory Intelligence & Master Data Cleanup), Phase 1. Part dedup and description standardization is now the first phase of the broader inventory initiative.


MDT-21: QA Investigation Post-Mortem Knowledge Base

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-04 |
| Status | validated |
| Source | Day 2 field conversation + Day 3 quality deep dive (Chuck + Alex + Eric) |
| Field champion | Chuck (Quality Mgr), Alex (Quality Engineer) |

Problem statement:

When a quality issue occurs — a customer complaint, a failed coil, a surface defect escape — the investigation process is labor-intensive and the findings are poorly captured. Thousands of research reports have accumulated over years in a standard scientific format (objective → summary → supporting evidence), stored as PDFs on local computers and possibly a file server. Institutional knowledge about failure signatures is trapped in individuals — the 6-person product metallurgy group relies on personal memory to connect current problems to past investigations. Querying past work ("have we seen this defect pattern before?") requires manually digging through files or asking someone who remembers. The corrective measure system captures the identified root cause and fix, but does NOT capture the investigation path — all the variables checked, hypotheses eliminated, and data reviewed before arriving at the root cause. Each new investigation starts from scratch.

Day 3 validation evidence — Chuck + Alex:

Chuck confirmed thousands of research reports exist in standard scientific format (objective, summary, supporting evidence — "pretty much like a science report"). Stored as PDFs, likely on local computers or a file server ("I believe it would be a local computer somewhere"). Chuck: "There's a just boatload of research years and years of research reports that I don't think are organized to a level that would be easily accessible." Alex confirmed the corrective measure system captures root cause and fix but NOT the investigation process. Chuck explicitly prioritized this initiative OVER customer complaint triage (MDT-27): "I would not prioritize that [claims investigation]... the one I would put in its place is trying to look at our quality data from a quality reporting standpoint." Micrograph comparison capability would be particularly valuable — "if you could look at micrographs within those reports and say, hey, the features, this feature looks like this feature, that would be a pretty useful tool."

QMS context: UDMS (web-based) exists for procedures, FMEAs, control plans. Archives of obsolete documents available. This is separate from the research reports.

Proposed solution: LLM-powered knowledge base over the full research report and post-mortem corpus. The system:

* Ingests all QA research reports and investigation documents (PDFs, likely thousands)
* Structures key fields for search: defect type, product grade, line, heat/coil ID, root cause, corrective action, recurrence flag
* Enables plain-language queries: "Show all investigations involving zinc adhesion on EG line since 2022" or "What corrective actions were taken for roll mark defects on grade X?"
* Micrograph matching: Compares visual features in micrograph images across reports to identify similar defect signatures
* Flags new investigations that match patterns from past events, prompting engineers to check prior findings before starting from scratch
* Links to MDT-04 (through-process traceability) — post-mortems become richer when upstream process data is automatically attached to each investigation
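A minimal ingestion sketch, assuming the reports are text-extractable PDFs in the objective/summary/evidence format Chuck described; pypdf and the regex section markers are illustrative choices, not a confirmed pipeline:

```python
import re
from pathlib import Path
from pypdf import PdfReader

# Reports follow objective / summary / supporting evidence per Chuck; split on those.
SECTIONS = re.compile(r"(objective|summary|supporting evidence)\s*:?", re.IGNORECASE)

def ingest_report(pdf_path: Path) -> dict:
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    parts = SECTIONS.split(text)  # [preamble, header, body, header, body, ...]
    record = {"source": pdf_path.name, "raw": text}
    for header, body in zip(parts[1::2], parts[2::2]):
        record[header.strip().lower()] = body.strip()
    return record

# Batch over the corpus; each record then feeds the search index / LLM layer.
corpus = [ingest_report(p) for p in Path("research_reports").glob("*.pdf")]
```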

Current state: Thousands of research reports in scientific format (PDF, local storage). Corrective measure system captures root cause but not investigation path. UDMS for procedures/FMEAs. 6-person metallurgy group relies on personal memory.
Target state: Queryable knowledge base accessible to quality engineers and metallurgists. New investigations automatically surface related historical cases. Micrograph comparison across reports.
Value estimate: $1-4M/yr (reduced investigation time, fewer repeated escapes, faster corrective action, lower automotive claim exposure)
Confidence: High — Chuck explicitly prioritized this, standard document format, large corpus, clear champion
Data readiness: High — standard scientific format, thousands of documents, likely all digital (PDF). Need to confirm storage location and access.
Systems: Local file storage / file server (research reports), UDMS (QMS — procedures), corrective measure system, quality databases
Complexity: Quick Win (if documents are accessible and consistently formatted — Chuck confirmed they are)
Dependencies: Access to research report file storage, IT cooperation for document ingestion

Relationship to other initiatives:

Complements MDT-04 (through-process quality traceability) — richer post-mortems when process data is automatically attached. Complements MDT-26 (drift detection) — research reports provide context for what historically caused drift patterns. Feeds into the quality intelligence package that Chuck and Alex want: drift detection + knowledge base + quality reporting = the three initiatives they explicitly prioritized. Palmer alignment: Knowledge capture is one of Palmer's explicit themes for the readout.

Open questions:

- [x] Where are QA investigation post-mortems stored? → PDFs on local computers / file server (Chuck confirmed)
- [x] How consistently are post-mortems documented? → Standard scientific format (objective, summary, evidence)
- [x] How many? → Thousands, years of accumulation
- [x] Is there a QMS? → UDMS (web-based) for procedures/FMEAs/control plans — separate from research reports
- [ ] Can we get access to the research report file storage? Who owns it?
- [ ] Are micrograph images embedded in the PDFs or stored separately?
- [ ] How far back do digital reports go? Is there a cutoff before which it's paper only?

Day 5 readout validation:

Presented as one of 4 projects under the "QoE Knowledge and Investigation" umbrella. Paul confirmed the pain: "Seldom do we remember all the particulars... Paul remembers. We had an issue with the Zinker furnace but yeah, something we did something and something we did." Alex was "pretty excited about this one" per Paul — and the knowledge sharing theme resonated especially for new hires: Alex came from Indiana Harbor and lacks Middletown-specific tribal knowledge. Paul and Dave both validated the cross-site angle: sharing knowledge between sites so "Barnes Harbor has [solved it], here's their journey." Estimated at $1.5-5M/yr. Directly aligned with Palmer's knowledge capture theme.


MDT-22: Fleet Vehicle AI Copilot (Diagnostic + Knowledge Base)

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-06 / new PRJ-09 candidate |
| Status | validated |
| Source | Day 2 — management session (Dave pitched) + truck guys workshop |
| Field champion | Dave Reinhold (sponsor), West (truck section mgr), Zach (mechanic) |

Problem statement:

Middletown recently absorbed ~150+ plant vehicles (light-duty trucks, SUVs, vans, buses, zero turns, semi trucks, class 9 trailers) plus growing mobile equipment (loaders, dozers). The truck section has been running for only 2.5 years with no CMMS — maintenance is tracked on a whiteboard and paper work orders. The section manager (West) has no mechanical background; Zach brings expertise but had a Mitchell software trial that expired. Diagnostic tools (JoeTest/Diesel Laptops) are not user-friendly and lack internet access. When diagnosing a vehicle, variant confusion (e.g., HP2 vs non-HP2 Escalade) sends technicians down wrong troubleshooting paths. Seniority-based union bidding means unqualified people can bump into mechanic positions — knowledge accessibility is critical.

Proposed solution: AI diagnostic copilot connected to vehicle identity. Scan/identify a vehicle → get its full history, PM schedule, parts list, troubleshooting guides, and OEM specs. Voice-based interaction. Captures repair records automatically, builds institutional knowledge base. Connects to parts ordering for seamless procurement.
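A toy sketch of the "scan it, get everything" lookup; the VIN, record fields, and fleet dictionary are invented for illustration, and a real build would resolve against OEM data and the future CMMS:

```python
from dataclasses import dataclass, field

@dataclass
class VehicleRecord:
    vin: str
    model: str
    variant: str  # disambiguates trims like HP2 vs non-HP2 up front
    repair_history: list = field(default_factory=list)
    pm_schedule: list = field(default_factory=list)

FLEET = {
    "1GC4YLE70MF000001": VehicleRecord(
        vin="1GC4YLE70MF000001", model="Chevrolet Silverado", variant="3500HD",
        pm_schedule=["oil/filter @ 7,500 mi", "brake inspection @ 15,000 mi"],
    ),
}

def scan(vin: str) -> VehicleRecord:
    """Resolve a scanned VIN to its full record so the technician starts on
    the right variant's troubleshooting path."""
    return FLEET[vin]

print(scan("1GC4YLE70MF000001").variant)  # 3500HD
```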

Why Dave wants this first:

"Build me something — scan that Chevy Silverado and I get [everything]. I know you can do that on a gearbox somewhere. Anything you deliver, even if it's not optimal, is a person gain." Dave explicitly positioned this as a low-risk proving ground: familiar equipment, data readily available (vs industrial assets), zero baseline to beat. Success here builds credibility to scale to plant assets.

Current state: Paper work orders, whiteboard PM tracking, no CMMS, expired Mitchell trial, no internet in shop (1920s building, fiber coming ~1-2 months).
Target state: Voice-first diagnostic + knowledge base on smartphone/tablet. Maintenance history retained. PM scheduling automated. Connects to procurement for parts ordering.
Value estimate: $0.5-2M direct (downtime reduction, faster diagnosis, reduced misdiagnosis, fleet optimization)
Confidence: High — Dave proposed it, data available, zero baseline, Vooban has reference experience (Delamon/Cat mining rigs, Ford vehicle diagnostics)
Data readiness: High for light-duty (OEM info available online), medium for class 9 (bodybuilder info harder)
Systems: None currently. OEM data (web), Teams CMMS (potential future integration), Oracle (purchasing)
Complexity: Quick Win — "anything is better than nothing" per Dave
Dependencies: Internet connectivity (fiber ~1-2 months), IT policy on devices

Palmer filter assessment: Palmer was lukewarm — fleet management is a local problem. BUT: the platform built here (scan asset → knowledge base → diagnostic → procurement) is the SAME platform needed for plant assets. Position as proving ground, not destination.

Relationship to other initiatives:

Directly feeds MDT-02 (maintenance copilot) by proving the platform on simpler assets. Feeds MDT-20 (part cleanup) by starting clean with fleet parts. The diagnostic + knowledge base pattern is identical to what plant maintenance needs — just different equipment.

Open questions:

- [x] Is there appetite for this? YES — Dave's #1 local excitement
- [ ] When will fiber internet be installed? (~1-2 months per West)
- [ ] What is the fleet size and composition? (~150+ vehicles, growing)
- [ ] Can West/Zach provide a vehicle inventory list?
- [ ] What OBD/diagnostic interface do the vehicles support?


MDT-23: Pre-Trip Inspection Digitization

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-06 adjacent |
| Status | identified |
| Source | Day 2 — truck guys workshop |
| Field champion | West (truck section mgr) |

Problem statement:

All pre-trip vehicle inspections are paper-based. Pencil whipping is rampant — West can walk outside and find 3-4 items checked "good" that aren't good. No accountability trail. No way to compare consecutive inspections. Audit retention is a "huge pain in the ass." West came from an over-the-road trucking company where every truck had a tablet — truck wouldn't start until pre-trip was completed digitally. Dave noted this scales to all mobile equipment across the plant, including cranes.

Proposed solution: Mobile pre-trip inspection app (tablet or smartphone). Digital checklist with photo capture, timestamp, operator ID. Truck doesn't release until inspection completed. Anomalies flagged automatically. Audit trail built-in.

Current state: Paper forms, pencil whipping, no accountability
Target state: Digital pre-trip on mobile device, automated anomaly detection, audit trail, scales to cranes and all mobile equipment
Value estimate: Low direct $ — high compliance and safety value. Audit cost reduction.
Confidence: High — proven technology, West has seen it work at prior employer
Complexity: Quick Win
Dependencies: Mobile device availability, IT policy, connectivity

Scalability: Dave explicitly noted this can scale to cranes and all mobile equipment across the company. "That's a big deal. We have to manage all that stuff."


MDT-24: Surface Inspection Classifier Enhancement (Ametek)

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-04 |
| Status | validated |
| Source | Day 2 — Chuck (quality manager), management session |
| Field champion | Chuck (quality mgr), Alex (quality engineer) |

Problem statement:

Ametek surface inspection cameras are installed on the electrogalvanizing line, hot-dip galvanizing line, hot mill (~3 years old), and pickling line (2009 vintage). The classifier identifies defects but with ~60% accuracy on specific defect types — notably the critical lamination-vs-gouge distinction (confirmed by a lab analysis study). Note: 60% is NOT the overall accuracy across all defect types — confidence intervals exist per defect class but need to be retrieved. This means downstream quality decisions on key defect types are built on unreliable data. Wrong classification has direct operational consequences: a coating line scratch classified as "lamination" sends the investigation to the steelmaking team instead of the coating line — wasted effort chasing the wrong root cause while the actual problem continues. The corporate SIS support group has been dissolved (cutbacks), leaving plants to maintain their own classifier tuning with fewer resources. Detection (did the camera see something?) and classification (what is it?) are separate problems — the cameras may detect well but classify poorly.

Key evidence from Chuck + Alex (Day 2-3):

"We're about accurate about 60% of the time [on lamination vs gouge]." "The classifier may tag it as one thing, and it's not doing anything wrong, but there could be multiple root causes to what it's seen." "We've had a couple of kind of bigger containments where it was actually a scratch being generated on the coating line, but because of the way it was classified... the coating line's not reacting." Day 3 clarification: The 60% figure is on specific defect types, not overall camera accuracy. Confidence intervals per defect class have been studied historically but "it's been a minute since I looked at one of those." Two engineers work on the cameras (one for hot mill, one for downstream units) — both report to Chuck. Corporate group that used to support SIS across all sites has been dissolved.

Proposed solution: ML-enhanced defect classifier trained on Middletown's own confirmed lab samples. Adds cross-coil pattern recognition: flags when multiple coils from different heats show same defect at same location (= line defect, not steelmaking). Integrates tundish/heat identity to assess probability of true steelmaking defect.
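A minimal sketch of the cross-coil rule, assuming SIS events can be joined to coil/heat identity (column names and the 25 mm position band are assumptions):

```python
import pandas as pd

# Recent SIS events joined to coil/heat identity (synthetic rows for illustration).
events = pd.DataFrame({
    "coil_id": ["C1", "C2", "C3", "C4"],
    "heat_id": ["H10", "H11", "H12", "H10"],
    "defect": ["scratch", "scratch", "scratch", "lamination"],
    "cross_pos_mm": [410, 408, 412, 900],
})

events["pos_band"] = (events["cross_pos_mm"] // 25) * 25  # 25 mm bands (assumed)
flags = (
    events.groupby(["defect", "pos_band"])
    .agg(coils=("coil_id", "nunique"), heats=("heat_id", "nunique"))
    .query("coils >= 3 and heats >= 2")  # repeats across heats => line-side cause
)
print(flags)  # the scratch at ~400 mm flags as a probable line defect, not steelmaking
```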

Current state: Ametek cameras on 4+ lines, ~60% classifier accuracy on key defect types, one dedicated engineer improving classifier at hot mill, another for downstream units.
Target state: AI-enhanced classifier with >90% accuracy on lamination vs gouge, cross-coil pattern detection, real-time line defect alerts.
Value estimate: $3-8M/yr (reduced containments, faster corrective action, fewer customer claims, avoided mill wrecks at cold mill)
Confidence: High — data exists (images + lab confirmations), well-proven ML approach in steel
Data readiness: High — Ametek stores images, lab results exist for training labels, hot mill camera is only 3 years old
Systems: Ametek SIS cameras, lab analysis database, coil tracking system, heat/tundish identity
Complexity: Medium — requires ML model training + integration with existing Ametek system
Dependencies: Access to Ametek image archive, lab-confirmed sample labels for training data

Palmer filter: Palmer explicitly flagged surface inspection as cross-site scalable — "All of our coating lines and oat mills have that equipment." This is a top candidate for the 3-4 final recommendations.

Open questions:

- [x] What vendor? → Ametek (confirmed by Chuck)
- [x] Are images stored? → Yes, retrievable
- [x] Is there lab-confirmed training data? → Yes, studies done historically
- [ ] How many confirmed lab samples exist? (need to ask Alex)
- [ ] What Ametek system version on each line?
- [ ] What is the annual cost of misclassification-related containments?


MDT-25: Through-Process Quality Alerting (Tundish/Heat Level)

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-04 |
| Status | validated |
| Source | Day 2 — Chuck (quality manager) |
| Field champion | Chuck, Alex |

Problem statement:

When the first few pieces from a tundish or heat run through the hot mill and show defects on the surface inspection system, there's no alert to flag the remaining slabs from that same heat. Coils from a single heat are NOT processed sequentially — they may be weeks apart at the hot mill. If the first 2 pieces show laminations, the remaining slabs could be proactively diverted to less critical orders (e.g., door inners instead of BMW exposed panels) or scarfed before rolling. Today, this connection isn't made until a metallurgist manually pieces the data together — often after a containment.

Key quote — Chuck:

"If you can see the first couple of pieces that come through from this particular heat are showing up with defects... you could give feedback to the steelmaking guys. You still got some material in slab form — maybe we'd do something different."

Proposed solution: Real-time alerting system that monitors defect rates by tundish/heat ID. When a threshold is exceeded on early pieces, alerts downstream operations and recommends diversion of remaining slabs from that heat to less critical orders or scarfing.
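A minimal sketch of the early-piece alerting rule; the piece count and defect-rate threshold are illustrative tuning values:

```python
from collections import defaultdict

EARLY_PIECES = 3            # how many leading coils count as the "early" sample
ALERT_DEFECTS_PER_COIL = 5  # hypothetical tuning value

heat_counts = defaultdict(lambda: {"coils": 0, "defects": 0})

def on_coil_inspected(heat_id: str, defect_count: int) -> bool:
    """Returns True when the remaining slabs from this heat should be reviewed
    for diversion to less critical orders or scarfing."""
    h = heat_counts[heat_id]
    h["coils"] += 1
    h["defects"] += defect_count
    if h["coils"] <= EARLY_PIECES:
        return h["defects"] / h["coils"] >= ALERT_DEFECTS_PER_COIL
    return False

print(on_coil_inspected("H42", 7))  # True: first piece already above the threshold
```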

Current state: No real-time tundish/heat-level defect monitoring. Connections made manually days/weeks later.
Target state: Automated heat-level defect monitoring with proactive slab diversion recommendations.
Value estimate: $2-6M/yr (avoided claims on exposed automotive, yield recovery through targeted downgrade vs blanket rejection)
Confidence: Medium-High — requires integration of SIS data with heat/slab genealogy (both exist)
Systems: Ametek SIS, caster data, slab tracking, heat genealogy
Complexity: Medium
Dependencies: MDT-24 (better classification → better alerting), slab-to-heat traceability data quality


MDT-26: Mechanical Property Drift Detection & Quality SPC Modernization

| Field | Value |
| --- | --- |
| Horizon | H1→H2 |
| Project | PRJ-04 |
| Status | validated |
| Source | Day 2 — Chuck (quality manager) + Day 3 quality deep dive (Chuck, Alex, Eric) |
| Field champion | Chuck (Quality Mgr), Alex (Quality Engineer) |

Problem statement:

Mechanical property testing (yield strength, tensile strength, elongation) produces 16-17,000 tensile tests per month — substantial data volume — but the reporting tools are antiquated. Quality reports are generated by a SAS-based system as PDFs — not interactive, not queryable. The main SAS programmer retired and only 1-2 people across the corporation know the code. Data analysis uses Microsoft Access for queries and Minitab/JMP for statistics — all manual. An experienced metallurgist in the 6-person product metallurgy group notices "this product's yield strength has been drifting up for two months." A new metallurgist sees each coil in isolation: it's within spec, pass. By the time the drift hits the goalpost, it's a problem requiring investigation. Data retention is inconsistent — some systems hold 6 months, some 3 years.

Day 3 validation evidence — Chuck + Alex:

Chuck used the "field goal" analogy extensively: "We're just kicking field goals here — clean through the uprights, it's good. But if you were always going to the left upright and all of a sudden now you're going to the right upright, something shifted." The current system has no way to detect this within-spec drift — only binary pass/fail against spec limits. The cold mill already has roll force alarms for mixed steel detection — a precedent for the same pattern applied to properties. Mixed steel (wrong steel on wrong order) is a secondary use case: zero external mixes last year at Middletown, ~10 across CLF, but catastrophic when it happens ($M liability). Chuck described the ideal: "AI should probably do this — put on the hold, AI would disposition it and say you're fine, you're fine, you're fine. Okay, now you're too far out — somebody look at this."

Quality reporting infrastructure is broken: SAS-based reporting generates static PDFs. Main programmer forced into retirement. Chuck wants real-time trending with anomaly detection: "I could envision these systems saying this particular product is performing poorly for this defect. It's out of the norm." He gave a concrete example: one item performing 10x worse than it should on blisters — the setup wasn't optimized, fixable by "flipping a couple switches," but nobody spotted it because the reporting doesn't surface it proactively.

Key quote — Chuck:

"If I'd have known a month ago or two months ago that it was drifting up, I could start investigating. That Spidey sense kind of stuff — the guy who's been doing it for years knows it."

Proposed solution: Two layers:

1. Drift detection: Automated statistical trend monitoring on mechanical properties by product/grade/gauge. Alert when drift exceeds process-normal thresholds (tighter than spec limits). Captures the "Spidey sense" of experienced metallurgists. Mixed steel flag as secondary use case (anomalous properties on a coil that doesn't match its heat profile). (A minimal drift-detection sketch follows this list.)
2. Quality SPC modernization: Replace the SAS-based PDF reporting with interactive trending dashboards. Reject/retreat/claim metrics by unit, defect type, product — week over week, month over month. Proactive anomaly alerts: "Item X had 10% reject rate last week — investigate." Replaces the retiring SAS programmer with a sustainable, modern system.
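A minimal sketch of layer 1 as a one-sided CUSUM on synthetic yield-strength data, the statistical version of the left-upright/right-upright shift; the k/h tunings and the data are illustrative:

```python
import numpy as np

def cusum_upward(values, target, k=0.5, h=5.0):
    """One-sided CUSUM: accumulate excursions above target (in sigma units,
    minus the slack k) and alert when the running sum crosses h."""
    sigma = np.std(values, ddof=1)  # sketch only; production would use a baseline sigma
    s, alerts = 0.0, []
    for x in values:
        s = max(0.0, s + (x - target) / sigma - k)
        alerts.append(s > h)
    return alerts

# Synthetic yield strengths: always within spec, but drifting up near the end.
rng = np.random.default_rng(0)
ys = np.concatenate([rng.normal(350, 4, 60), rng.normal(356, 4, 40)])
alerts = cusum_upward(ys, target=350)
print(f"first alert at test #{alerts.index(True)}")  # fires inside the drifted stretch
```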

Current state: 16-17k tensile tests/month. SAS-based PDF reports (main programmer retired). Manual analysis in Access/Minitab/JMP. 6-person metallurgy group. Data retention varies (6mo to 3yr). No within-spec drift detection. Binary pass/fail only.
Target state: Dynamic trending dashboard with automated drift detection + quality SPC reporting. Configurable by product/grade/gauge. Proactive anomaly alerts. Replaces SAS pipeline.
Value estimate: $1-3M/yr (proactive intervention, reduced customer complaints from drift-related quality changes, SAS replacement savings, faster root cause identification)
Confidence: High — Chuck explicitly prioritized quality reporting, data exists (16-17k tests/month), SAS replacement is urgent (programmer gone)
Data readiness: High — mechanical testing database has years of data, Access queries already set up, 16-17k tests/month = rich dataset
Systems: Mechanical testing database, SAS reporting system (to replace), Access (data queries), Minitab/JMP (statistics), production data
Complexity: Quick Win (drift detection on existing data) → Medium (full SPC modernization replacing SAS)
Dependencies: Access to mechanical testing database, SAS code base access (for understanding current reports)

Relationship to other initiatives:

Forms a package with MDT-21 (knowledge base) and MDT-04 (through-process quality): drift detection catches problems early, knowledge base provides context from past investigations, through-process traceability connects upstream causes. Chuck explicitly prioritized this package over MDT-27 (claims investigation). SAS code analysis is a secondary opportunity — AI can document the legacy SAS code to prevent further knowledge loss (aligns with Palmer's knowledge capture theme).

Open questions:

- [x] How much testing data? → 16-17k tensile tests/month
- [x] What tools for analysis? → Access (queries), Minitab/JMP (statistics), SAS (reporting → PDFs)
- [x] How many people do this work? → 6-person product metallurgy group
- [x] Is there precedent for anomaly detection? → Yes — cold mill roll force alarms for mixed steel
- [ ] Can we get access to the SAS code base? (for documentation + replacement planning)
- [ ] What is the mechanical testing database system? Direct access or via Access queries only?
- [ ] What data retention policies apply to mechanical property data?


MDT-27: Customer Complaint Triage & Tracking System

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-04 adjacent |
| Status | identified → deprioritized by Chuck (Day 3) |
| Source | Day 2 — Chuck (quality manager) |
| Field champion | Chuck, Alex |

Problem statement:

Customer complaints arrive from multiple directions (tech services, field reps, customer contacts) and are manually tracked by one person (Alex). There is no ticket system. Complaints that escalate to claims have Power BI tracking, but the pre-claim complaint stage — where corrective action could prevent escalation — has no systematic tracking. Chuck can get "4 different GM facilities complaining about 4 different topics all at the same time." Complaint volume ebbs and flows with pricing cycles and seasons. No way to correlate complaints with internal quality data or across plants.

Day 3 — Chuck deprioritized this initiative:

In the quality deep dive, Chuck explicitly said he would NOT prioritize claims investigation: "I would not prioritize that... there's so much manual [work]... there's no clean way to get into the files we have... all of my claims are on my computer or on Teams. Or it's just a bunch of people's responses back to me." He said the data is not in a good place to offer up for AI analysis. Instead, he redirected priority to quality reporting / SPC modernization (folded into MDT-26) and post-mortem knowledge base (MDT-21). This initiative remains valid but is not a near-term candidate.

Proposed solution: Lightweight complaint intake and triage system. AI categorization of incoming complaints. Auto-correlation with internal quality data (SIS defects, heat chemistry, production parameters). Prioritization based on likelihood of claim escalation.

Current state: One person (Alex) manually tracking complaints on personal computer and Teams. No ticket system. Claims tracked in Power BI but complaints are not. Data scattered across email, Teams, individual files — not ready for AI.
Target state: Systematic complaint tracking with auto-correlation to production data. Priority scoring. Cross-plant visibility.
Value estimate: $0.5-1M/yr (faster triage, reduced claim escalation, better resource allocation)
Confidence: Medium — but champion (Chuck) deprioritized in favor of MDT-21 and MDT-26
Systems: Power BI (claims), email/manual (complaints), quality systems
Complexity: Quick Win (technically), but data readiness is LOW
Dependencies: Data access to complaint sources, data cleanup/structuring before AI can be applied


MDT-28: Intra-Plant Coil Logistics Optimization

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-07 |
| Status | validated |
| Source | Day 2 — Chris (Sr Div Mgr), West (truck section), truck guys |
| Field champion | Chris (owns shipping doors), West (manages coil movement planning) |

Problem statement:

Middletown moves ~40 coil loads/day on 8-10 internal trucks, point-to-point between buildings. The truck master coordinates via radio and text — no real-time visibility of truck locations or door status. West spends ~2 hours/day manually planning coil movements using the IBM Mainframe shop floor system → Excel. Door availability (crane down, lunch breaks, shift changes) is discovered only when a truck arrives and sits. Empty return trips waste fuel and capacity. "Rush" coils are often not truly urgent — experienced people know "your 'now' means next day or two" but new people don't. GPS is being installed (MobileCom, ~3rd week of March) but provides visibility only, not optimization.

Key evidence:

Chris: "In a perfect world, each door could indicate open/closed/down... you optimize, boom." West: "It takes me a couple hours a day [to plan moves]. I go through every door in the mill." Dave: "Likely there's a better solution to that through technology."

Proposed solution:

Phase 1: Door status system — each department enters availability windows (shift schedule, planned maintenance, lunch). Truck master gets real-time view. (A minimal Phase 1 sketch follows this list.)
Phase 2: Route optimization — GPS data + door status + coil priority → optimized move sequences. Reduce empty trips, avoid closed doors.
Phase 3: Rush priority intelligence — integrate commercial data to assess true urgency ("80% likelihood this is truly a rush coil").
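A minimal Phase 1 sketch; door names, closure windows, and the lookup are illustrative, not the mainframe schema:

```python
from datetime import datetime

# door -> (closed_from, closed_to, reason) windows entered by each department
DOOR_CLOSURES = {
    "HSM-D4": [(datetime(2026, 4, 16, 11, 30), datetime(2026, 4, 16, 12, 0), "lunch")],
    "EG-D2": [(datetime(2026, 4, 16, 6, 0), datetime(2026, 4, 16, 18, 0), "crane down")],
}

def door_open(door: str, at: datetime) -> bool:
    return all(not (start <= at < end) for start, end, _ in DOOR_CLOSURES.get(door, []))

# Truck master checks before releasing a move: no more arriving at a dead door.
print(door_open("EG-D2", datetime(2026, 4, 16, 9, 0)))   # False: crane down
print(door_open("HSM-D4", datetime(2026, 4, 16, 9, 0)))  # True
```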

Current state: Radio + text coordination, manual Excel planning from IBM Mainframe, no door status visibility, GPS being installed.
Target state: Real-time logistics dashboard with door status, route optimization, priority intelligence.
Value estimate: $2-5M/yr (reduced trucks needed, fuel savings, throughput improvement from fewer delays, less overtime)
Confidence: Medium — GPS provides data foundation, optimization is well-proven
Data readiness: Medium — GPS coming March, IBM Mainframe has coil inventory, door status requires new data input
Systems: IBM Mainframe shop floor system, GPS (MobileCom, incoming), radio system
Complexity: Medium — phased approach, Phase 1 is quick
Dependencies: GPS installation (March), door status input mechanism

Palmer filter: Palmer explicitly flagged coil movements as cross-site scalable. "To me, I see that as potential at other plants." This is a top candidate for the 3-4 final recommendations.


MDT-29: ~~Oracle Auto-Reorder Intelligence~~ → Absorbed into MDT-31

Status: absorbed — All scope moved to MDT-31 (Inventory Intelligence & Master Data Cleanup), Phase 2. Lead time inference, dynamic min/max recommendations, and auto-reorder governance are now part of the broader inventory initiative. Day 3 Sean transcript massively validated and deepened this: zero-approval auto-reorder on all stock items (including million-dollar parts), all 32k lead times defaulted to 15 days, roll ordered that was never used in the plant.


MDT-30: BF Stove Tender Decision Support

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-05 / new PRJ-10 candidate |
| Status | identified |
| Source | Day 2 — Steve Palmer (corporate, in-person) |
| Field champion | Steve Palmer (corporate sponsor) |

Problem statement:

Blast furnace stoves are managed by a single "stove tender" — one person making all decisions about firing sequences, blast temperature management, and thermal cycling. These decisions directly impact BF fuel rate (coke, natural gas) and hot metal quality. The raw materials cost is enormous: saving even 5 lbs of coke per net ton of hot metal is a major cost impact. When a stove tender retires, the replacement must learn through years of apprenticeship. Six blast furnaces operate across CLF (2 Cleveland, 2 Burns Harbor, 1 Middletown, 1 Indiana Harbor) — each with this single-expert dependency.

Key quote — Palmer:

"We run our stoves with a stove tender. It's a person who makes all these decisions. In my world, that would be great to have a learning model to follow that gentleman." "What may be a minuscule gain becomes huge because the overall cost of raw materials."

Proposed solution: AI learning model that captures the stove tender's decision patterns: what parameters they watch, how they sequence stove changes, what signals trigger adjustments. Builds an advisory system that recommends actions to less experienced operators. Over time, optimizes beyond human capability using historical data.

Current state: One person per BF making all stove decisions based on experience and instrument readings.
Target state: AI advisory system for stove management with captured expert knowledge, applicable across all 6 BFs.
Value estimate: $3-10M/yr across footprint (fuel rate optimization, hot metal quality improvement, succession risk mitigation)
Confidence: Medium — requires access to BF data, stove tender cooperation, and understanding of decision variables
Data readiness: TBD — historians at each BF site need to be assessed
Systems: BF historian, L2 BF control, stove control systems
Complexity: Medium — technically proven, but requires deep process knowledge and operator trust

Palmer filter: Palmer raised this HIMSELF. It is cross-site by definition (6 BFs), high raw material value, and addresses his knowledge-capture theme. This is a top candidate for the final 3-4.

Day 3 enrichment — R&D team cross-site perspective:

Burns Harbor precedent: Lucas Melton at Burns Harbor implemented semi-automated wind rate control using pressure drop across the furnace as a control variable. "He gradually, after 'proving it to the operators,' moved it more into an automatic system." This is the closest existing example of BF process automation at CLF — a model for progressive trust-building.

IH7 = best starting point: Indiana Harbor BF 7 is the most instrumented, most productive BF in CLF. R&D: "Most productive, most if you want to call it technology, sensors, models, instrumentation... that could give us good ground to start from, and then you reuse what you learn."

Closest to automation today: Stove firing sequences (automatic changes based on temperature), charging system weight balancing (auto-adjusts overweight/underweight within batch). But the burden mix decisions and stove cycling strategy remain manual.

Knowledge loss is active: R&D confirmed coke making knowledge already lost (Joan Edterov). Iron making down to 1-2 experts per furnace. "Maybe all the maintenance, I put on [two deep]. It's hard to go more than two."

Open questions:

- [ ] What instrumentation exists on Middletown BF 3 stoves?
- [ ] Who is the stove tender at Middletown? Can we observe?
- [ ] What historian captures stove data?
- [ ] Are stove operations similar enough across sites for one model?
- [ ] Can we get details on Lucas Melton's pressure-drop automation at Burns Harbor?
- [ ] What is IH7's sensor/instrumentation setup? (R&D can provide)


MDT-31: Inventory Intelligence & Master Data Cleanup

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-06 |
| Status | validated |
| Source | Day 3 — Sean (stores/spares manager) deep dive + Day 2 management session ($102M quote, sledgehammer dedup) |
| Field champion | Sean (stores/spares manager) |
| Absorbs | MDT-20 (Part Number Cleanup) + MDT-29 (Oracle Auto-Reorder Intelligence) |
| Reference project | Vooban internal POC — production-ready part standardization tool, available for reuse |

Problem statement:

Middletown sits on $104M in on-site spare parts inventory (~$150M including external warehouses at Hiko and Applied/Eco) with no reliable way to know what it has, where it is, or when to reorder. The inventory operation is broken at every layer:

- Master data: 32,000 parts with ~10% duplicates, descriptions written however departments want, Pilog fields allow free text
- Locations: no naming convention, multiple locations per part, 900 ft buildings with no signage, open buildings where anyone can walk in
- Lead times: all 32,000 defaulted to 15 days when migrated from Teams to Oracle — actual lead times range from days to 54 weeks
- Reorder logic: auto-reorder with ZERO approval — including million-dollar parts — triggered by min/max parameters rarely updated
- Visibility: monthly reconciliation takes 2 people x 2 days across Oracle/Noetix, ~500 move orders stuck up to 2 years, ~400-500 open POs never received due to suspected iSupplier transmission failures

Coke plant parts still sit in inventory 2-3 years after demolition. Some parts valued at $0 in the system. Departments bypass stores entirely — pull parts without logging, use IOU cards that often don't get filled out.

Day 3 validation evidence — Sean (stores/spares manager):

Sean confirmed $104M in on-site inventory across 250,000 sq ft of floor space, plus ~4,000 parts at Hiko (external motor/brake warehouse — Middletown has no access to their inventory system). ~$5M/month spend through stores, ~24,000 internal orders/year. Duplicate example: 4 different part numbers for identical contactor — 300 units sitting under one number, another number shows zero stock, triggers reorder of a part they already have. All lead times are 15 days ("I know that's not true") — but historical order date / receive date exists in Oracle to compute real lead times. Auto-reorder recently purchased a roll that has NEVER been used at this plant — min/max of 1/1, cycle count found zero, system auto-ordered. Zero approval required regardless of cost. Sean estimated ~10% duplication rate. Cycle counting: 22,000 of 32,000 parts counted in 2025 (their best year ever). Buildings are open — anyone can walk in and take parts. IOU card system for after-hours. Sean runs an inventory report at 4 AM every morning (30+ min) because of overnight part movements. Monthly reconciliation: 2 people, 2 days, triple-checking across Oracle, Noetix, and accounting reports. Sean is enthusiastic about technology adoption — explicitly said employees would use handheld devices, barcodes would be "fantastic."

Day 2 validation evidence — management session:

Dave: "At this plant alone, I've got 102 million in inventory." Sledgehammer dedup example. Refrigerator example. $1B estimated across CLF footprint. Consensus on conversational front-end for parts search.

Proposed solution: Multi-layer inventory cleanup and intelligence, phased:

Phase 1 — Master Data Cleanup (weeks 1-6):

* AI-powered part dedup and description standardization using Vooban's existing reference implementation (Streamlit app, per-category conventions, batch processing)
* Process Oracle + Pilog exports, identify duplicates, standardize descriptions, output clean data for re-import
* Obsolescence identification: flag zero-movement items (coke plant parts, etc.), zero-dollar items, items with no usage in 2+ years
* Human review workflow for low-confidence items
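A minimal sketch of the Phase 1 duplicate-candidate pass, with stdlib string similarity standing in for the Vooban tool's matching; the part descriptions and the 0.85 cutoff are illustrative:

```python
from difflib import SequenceMatcher
from itertools import combinations

parts = {
    "P-100234": "CONTACTOR, 3 POLE, 40A, 120V COIL",
    "P-118892": "CONTACTOR 3-POLE 40A 120V COIL",
    "P-220741": "BEARING, PILLOW BLOCK, 2-7/16 IN",
}

def norm(desc: str) -> str:
    # Token-sort normalization so word order and punctuation stop hiding duplicates.
    return " ".join(sorted(desc.upper().replace(",", " ").replace("-", " ").split()))

candidates = []
for (a, da), (b, db) in combinations(parts.items(), 2):
    score = SequenceMatcher(None, norm(da), norm(db)).ratio()
    if score >= 0.85:  # illustrative cutoff; borderline pairs go to human review
        candidates.append((a, b, round(score, 2)))
print(candidates)  # [('P-100234', 'P-118892', 1.0)] pairs up for review
```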

Phase 2 — Lead Time & Reorder Intelligence (weeks 4-10):

* Infer realistic lead times from historical order date / receive date pairs in Oracle
* Dynamic min/max recommendations based on consumption trends (replace static parameters rarely reviewed)
* Auto-reorder governance: flag anomalous reorders for human confirmation before PO generation (the "million-dollar part with zero approval" problem)
* Stuck order cleanup: identify and surface the ~500 open move orders and ~400-500 open POs for resolution
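A minimal sketch of the Phase 2 lead-time inference; column names are assumptions about the Oracle export:

```python
import pandas as pd

po = pd.DataFrame({
    "part_no": ["P-100234", "P-100234", "P-337810", "P-337810"],
    "order_date": pd.to_datetime(["2025-01-10", "2025-06-02", "2024-11-01", "2025-03-15"]),
    "receive_date": pd.to_datetime(["2025-01-21", "2025-06-18", "2025-11-20", "2026-03-27"]),
})

po["lead_days"] = (po["receive_date"] - po["order_date"]).dt.days
inferred = po.groupby("part_no")["lead_days"].agg(["median", "max", "count"])
print(inferred)  # P-337810's ~54-week reality replaces the 15-day default
```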

Phase 3 — Operational Intelligence (weeks 8-16):

* Location rationalization: consolidate multiple locations per part, propose logical grouping by department/building
* Automated reconciliation: replace the 2-person x 2-day monthly process with system-driven matching
* Cross-site visibility: surface Middletown inventory levels to other plants via Pilog (today they can see parts but not quantities — still phone calls for emergency parts)
* Consumption forecasting: basic demand prediction for high-value / high-turnover items

Current state: $104M on-site, ~$150M total. 32k parts, ~10% duplicates, all lead times at 15 days, zero-approval auto-reorder, no barcodes/RFID, open buildings, IOU cards, 2-person x 2-day monthly reconciliation.
Target state: Clean master data, realistic lead times, intelligent reorder with human-in-loop for anomalies, location rationalization, automated reconciliation, cross-site inventory visibility.
Value estimate: $2-5M/yr (inventory carrying cost reduction from $150M base + eliminated duplicate purchases + fewer stockouts + fewer unnecessary auto-reorders + reduced emergency buys + reconciliation labor savings)
Confidence: High — stores manager confirmed all data points, management consensus, Vooban has reusable tooling
Data readiness: High — Oracle has consumption history (years), Pilog has master parts, order/receive dates available for lead time inference. All exportable.
Systems: Oracle (procurement/inventory), Pilog (master part numbers, corporate), Noetix (Oracle reporting layer), Hiko/Applied/Eco (external warehouses), Teams/SWAMI (CMMS work orders reference parts)
Complexity: Quick Win (Phase 1 — Vooban has reusable tool) → Medium (Phases 2-3)
Dependencies: Oracle export access, Pilog export access, stores team cooperation (strong — Sean is enthusiastic)

Why this is a priority H1 quick win:

1. Simpler than procurement: Own data, no cross-site vendor integration, no approval workflow redesign
2. Prerequisite for MDT-03: Procurement automation fails if the underlying inventory data is garbage
3. Immediate ROI: Stop buying parts you already have, stop auto-ordering parts you don't need
4. Enthusiastic champion: Sean is "kid in a candy store" about modernization
5. Vooban has the tool: Part standardization reference implementation is production-ready
6. Cross-site scalable: Every CLF site has the same inventory chaos (Palmer filter)

Comparison with other sites:

Cleveland has the same problem through Tabware ($26-69M addressable). Burns Harbor likely worse (largest site). The Vooban part standardization tool works on both Teams/SWAMI and Tabware exports — same tool, different input format. Cross-site inventory visibility (seeing what other plants have in stock) is a frequent pain point: Sean confirmed phone calls to find emergency parts, other plants calling Middletown for parts they can't find elsewhere.

Open questions:

- [x] What is total inventory value? → $104M on-site, ~$150M total
- [x] What systems manage inventory? → Oracle (inventory/procurement), Pilog (master parts), Noetix (reporting)
- [x] Are there duplicates? → Yes, ~10%, contactor example (4 part numbers, 300 units invisible)
- [x] What is the lead time situation? → All 32k at 15 days default, actual ranges from days to 54 weeks
- [x] Is there data to compute real lead times? → Yes, order date / receive date in Oracle
- [x] Is there appetite for technology? → Yes — Sean enthusiastic about barcodes, handhelds
- [ ] Does Oracle support automated export/API for the cleanup pipeline?
- [ ] Can Pilog exports be obtained for the part standardization tool?
- [ ] What is the Wi-Fi/connectivity situation in stores buildings? (barcodes/handhelds for Phase 3)
- [ ] Is there corporate appetite for cross-site inventory visibility through Pilog?


MDT-14: Fleet/Mobile Equipment Maintenance Integration (Mitchell1)

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-01 |
| Status | identified |
| Source | Day 2 — internal logistics and maintenance team |
| Field champion | TBD |

Problem statement:

The internal logistics and maintenance team uses Mitchell1 (automotive repair/diagnostics platform) for mobile equipment diagnostics with some success. This creates a third maintenance data silo alongside Teams/SWAMI (plant floor CMMS) and condition monitoring tools. Fleet and mobile equipment health data does not flow into the broader asset management picture.

Proposed solution: Integrate Mitchell1 diagnostic and repair history data into a unified asset management layer. Pull mobile equipment health, repair trends, and parts consumption into the same visibility as fixed plant assets.

Current state: Mitchell1 used for mobile equipment diagnostics (logistics fleet). Data lives in its own system, disconnected from plant-floor maintenance.
Target state: Single pane of glass for all asset health — fixed and mobile — with repair history, parts, and condition data flowing into one system.
Value estimate: $0.5-2M/yr (fleet availability + parts consolidation + unified planning)
Confidence: — (pending: fleet size, current downtime costs, integration complexity)
Data readiness: Partial — Mitchell1 has structured data and APIs, but integration to Teams/SWAMI or a future enterprise CMMS is undefined.
Systems: Mitchell1, Teams/SWAMI, future enterprise asset management platform
Complexity: Quick Win (if API-based integration) to Medium (if requires CMMS migration first)
Dependencies: Enterprise CMMS direction (Teams/SWAMI migration to Tabware or other?)

Comparison with other sites:

No Mitchell1 usage identified at Cleveland. Cleveland logistics challenges (CLV-19, CLV-20) focused on coil handling and rail cars but did not surface a separate fleet maintenance system. If Mitchell1 is used at other sites, this becomes a cross-site integration opportunity.

Open questions:

- [ ] How many mobile assets are tracked in Mitchell1?
- [ ] What is annual mobile equipment maintenance spend?
- [ ] Does Mitchell1 have API/export capability for integration?
- [ ] Is Mitchell1 used at any other CLF site?
- [ ] What is the CMMS consolidation roadmap (Teams/SWAMI → Tabware? or something else?)


MDT-15: Custom CMMS Rewrite (Replace Teams/SWAMI)

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-01 |
| Status | identified |
| Source | Day 2 — Vooban recommendation based on field findings |
| Field champion | TBD |

Problem statement:

Middletown runs Teams/SWAMI, an Armco-era CMMS with no vendor support and 20+ years of accumulated technical debt. Cleveland runs Tabware (~95% hierarchy) but satisfaction is low across sites. Neither system is designed for the data-driven maintenance workflows that every other initiative in this registry depends on. Migrating from one unloved legacy system to another is not a path forward.

Proposed solution: Purpose-built CMMS designed around actual maintenance workflows at CLF. Built to integrate natively with the asset management layer, mobile equipment diagnostics (Mitchell1), condition monitoring, and the AI copilot (MDT-02). Designed for voice-first capture, structured work orders, and bidirectional data flow with operations systems.

Current state: Two legacy CMMS platforms across CLF sites (Teams/SWAMI at Middletown, Tabware at Cleveland and likely others). Neither well-loved. Critical maintenance knowledge lives in spreadsheets and people's heads because the systems do not support how work actually gets done.
Target state: Single modern CMMS across CLF sites, designed for the workflows practitioners actually use. Native integration points for AI copilot, condition monitoring, operational delay reporting, and mobile equipment.
Value estimate: Enabler — direct value is hard to isolate, but this unlocks or amplifies MDT-01, MDT-02, MDT-03, MDT-08, MDT-14, and cross-site standardization.
Confidence: — (concept validated by field frustration with both systems, but scope and appetite TBD)
Data readiness: N/A — this is a platform initiative, not an analytics initiative.
Systems: Teams/SWAMI (replace), Tabware (potentially replace), Mitchell1 (integrate), Axiom (ERP interface), operational delay systems
Complexity: Strategic — multi-site rollout, change management, data migration
Dependencies: Executive buy-in (corporate CMMS standardization decision), enterprise data platform direction (Databricks/Snowflake/Fabric evaluation in progress)

Why this matters: This is the "build the foundation" initiative. Every H1 quick win (copilot, procurement, ops-maint integration) delivers value faster and scales better if the underlying CMMS is designed for it. Without this, each quick win is a point solution bolted onto a legacy system.

Comparison with other sites:

Cleveland uses Tabware with low satisfaction. If neither Teams/SWAMI nor Tabware is the answer, a custom CMMS becomes a cross-site enterprise play. This positions Vooban as a long-term platform partner, not just an analytics overlay.

Open questions:

- [ ] Is there corporate appetite for CMMS replacement or is Tabware the mandated direction?
- [ ] What is the total maintenance workforce across sites that would use a new CMMS?
- [ ] What are the top 5 workflow gaps in both Teams/SWAMI and Tabware?
- [ ] Has CLF evaluated any modern CMMS platforms (Fiix, UpKeep, Limble, etc.)?
- [ ] What is the enterprise data platform decision timeline? (CMMS should align)


MDT-32: BOF Endpoint Prediction Model (AI-Assisted)

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-08 / new |
| Status | identified — R&D already building this |
| Source | Day 3 — R&D team (Matt, Eric Bridge, Eric Welty) |
| Field champion | Matt (R&D, primary process), Matt Blakely + Patrick (engineers building the model) |

Problem statement:

BOF endpoint prediction — knowing the final carbon content and temperature before tapping — is a critical control point in steelmaking. Middletown has a "fairly robust" existing endpoint prediction model, but R&D believes AI-based approaches (neural operators, transformers) can improve accuracy. The R&D team has already started building an AI-based model using GitHub Copilot to assist with code development. Engineers are iterating with Copilot — uploading data, building models, conversing about scientific/thermodynamic physics constraints. Currently answering ~30 questions from the AI to better inform the model. Chose Middletown specifically because the existing model provides a strong baseline to beat.

Key evidence — R&D transcript:

Matt: "They're using the Microsoft get environment copilot to build that. Doing it through natural language by talking to the co-pilot and giving it uploading some data files." "We chose Middletown works because Middletown Works has got a fairly robust endpoint prediction model now, so if it can perform that model, that's something saying something." "I think it's scalable also for some of the other shops."

Proposed solution: AI-based BOF endpoint prediction model trained on Middletown historical data. Physics-informed ML (thermodynamic models embedded). Deployable to other steel shops with sensor adaptation. R&D is already on this path — Vooban could accelerate with ML engineering expertise and production-grade deployment.
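A minimal sketch of a data-driven endpoint baseline on synthetic heats; the feature names and data are invented, and the real R&D work layers thermodynamic constraints on top of this kind of model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 500  # synthetic heats
X = rng.normal(size=(n, 4))  # stand-ins for hot metal wt, scrap wt, O2 blown, blow time
y = 0.04 + 0.01 * X[:, 2] - 0.005 * X[:, 3] + rng.normal(0, 0.003, n)  # tap carbon, %

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"holdout R^2: {model.score(X_te, y_te):.2f}")  # the bar: beat the existing model
```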

Current state: R&D engineers building with Copilot assistance. Neural operator / transformer approaches being explored. Early iteration — AI asking questions, engineers answering. Existing deterministic model as baseline.
Target state: AI-based endpoint prediction deployed at Middletown, validated against existing model, then rolled to other shops.
Value estimate: $2-5M/yr (improved hit rate on carbon/temp targets → fewer reblows → more heats/day → less alloy waste)
Confidence: Medium — R&D actively working on it, technically proven approach, but maturity is early
Data readiness: High — Middletown has the best existing model, implying good historical data
Systems: L2 BOF control, historian, existing endpoint prediction model
Complexity: Medium — technically proven, but requires scientific domain expertise + production deployment
Dependencies: R&D cooperation (strong — they initiated this), BOF process data access

Strategic note: This is an organic AI adoption story within CLF — R&D took initiative without external help. Vooban could offer to accelerate and productionize what R&D has started, framing it as "we're supporting your engineers, not replacing them." Scalable to 6+ BOFs across CLF. Palmer may find this interesting as a cross-site play, though he hasn't flagged it specifically.

Open questions:

- [ ] What is the current model accuracy (baseline to beat)?
- [ ] What sensors feed the existing endpoint model? (sublance, off-gas, charge weights?)
- [ ] How far along is the AI model? First iteration or multiple?
- [ ] Is Matt Blakely available for a deep technical conversation?
- [ ] What data format and volume? (heats/day, variables per heat)


MDT-33: Cross-Site Caster Reliability Analytics & Best Practice Sharing

| Field | Value |
| --- | --- |
| Horizon | H1: Bridge the Gap |
| Project | PRJ-01 adjacent |
| Status | identified |
| Source | Day 3 — R&D team (Matt runs weekly cross-site turnaround meetings) |
| Field champion | Matt (R&D, primary process — runs weekly meetings) |

Problem statement:

CLF's biggest bottleneck is slab production — "the orders are there, if we can make enough slabs." Caster reliability varies enormously across sites: Middletown has nearly zero unplanned turnarounds per week, while other shops have 5-12 per caster. Matt runs weekly cross-site caster reliability meetings tracking turnarounds, delay time, tons vs plan, and working ratio. Data is collected via questionnaires and compiled into Excel/PowerPoint. Best practices are shared through annual round tables and ad-hoc information requests. But the process is entirely manual — assembling questionnaire responses into tables, presenting in PowerPoint, no systematic analysis of what makes Middletown different from Indiana Harbor.

Key evidence — R&D transcript:

Matt: "Middletown is almost zero each week, sometimes one, whereas other steel shops have like anywhere between 5 and 12 for each caster." "Try to understand some of the differences. Really a challenge for us — to find out what they're doing and apply it to everyone else." "They don't make orders... it's really a bottleneck in the company." On best practice questionnaires: "Each shop fills out that questionnaire... at the end of that meeting, you have everybody's response to that technical issue." Then compiled manually into Excel → PowerPoint. On peer-to-peer learning: "It works a lot better when it's coming from one of their peers."

The Middletown benchmark insight: Middletown's near-zero unplanned turnarounds vs. 5-12 at other shops is an "order of magnitude" difference. The culture traces back to Dave Reinhold's mid-90s accountability philosophy. Scrap is <1% (vs. >10% at Cleveland). The R&D team actively wants to understand WHY and export it — but lacks the analytical tools to systematically compare across sites.

Proposed solution:

1. Digitize the weekly turnaround meeting data — structured database of turnaround events by site, cause area, equipment, delay time, tons lost
2. Cross-site benchmarking dashboard — working ratio, turnaround frequency, cause categories. Week-over-week trending. (A minimal benchmarking sketch follows this list.)
3. Best practice knowledge base — AI-structured questionnaire responses (ladle gates, rotary joints, etc.) searchable by topic, equipment type, site
4. Pattern analytics — what distinguishes Middletown from Indiana Harbor? Equipment? Practices? Culture? Staffing? Surface the variables.
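A minimal sketch of item 2, assuming Matt's questionnaire data lands in a structured table; the columns and numbers are illustrative:

```python
import pandas as pd

events = pd.DataFrame({
    "week": ["W14", "W14", "W14", "W15", "W15", "W15"],
    "site": ["Middletown", "Indiana Harbor", "Burns Harbor"] * 2,
    "unplanned_turnarounds": [0, 9, 6, 1, 11, 5],
})

benchmark = events.pivot_table(
    index="site", columns="week", values="unplanned_turnarounds", aggfunc="sum"
)
print(benchmark)  # the order-of-magnitude Middletown gap, trended week over week
```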

Current state: Weekly meetings with manual questionnaire → Excel → PowerPoint. Annual round tables for peer sharing. "Loss of containment" format as model. No structured database. No systematic cross-site comparison analytics.
Target state: Structured turnaround database, automated benchmarking, searchable best practice library, pattern analytics surfacing actionable differences.
Value estimate: $3-8M/yr (if bottom-performing sites improve by even 2-3 turnarounds/week → more heats → more slabs → more revenue in a supply-constrained market)
Confidence: High — data exists (Matt has been collecting since Jan 2025), clear champion, massive value if bottom sites improve
Data readiness: Medium — data exists in spreadsheets/PowerPoints but not structured. Would need to build the database from existing records.
Systems: Excel/PowerPoint (current), future: structured database + dashboard + knowledge base
Complexity: Quick Win (dashboard on existing data) → Medium (AI-powered best practice matching)
Dependencies: Matt's cooperation (strong), site willingness to share data (variable — "some departments have a paranoia that we're going to get them in trouble")

Cross-site scalability: This IS a cross-site initiative by definition — it compares and connects all integrated sites. Palmer would appreciate the scalability framing. The benchmarking data could feed directly into the corporate readout.

Relationship to other initiatives:

Complements PRJ-01 (ops-maint integration) — turnaround data is the operational symptom of maintenance effectiveness. Feeds MDT-01 (delay attribution). The "loss of containment" questionnaire model is a proven format that could be AI-enhanced across all operational topics.

Open questions:

- [ ] Can Matt share his 2025 turnaround tracking spreadsheets?
- [ ] How many sites/casters are in the weekly meeting? (sounds like all integrated sites)
- [ ] What cause categories are tracked? (equipment, process, material, staffing?)
- [ ] Are the round table questionnaires archived? How far back?
- [ ] What data does Matt's OneNote contain? (he offered to share)
- [ ] Can we observe a weekly turnaround meeting?


MDT-34: BF Burden Mix / Raw Material Optimization (Expert System)

| Field | Value |
| --- | --- |
| Horizon | H2: Build the Foundation |
| Project | PRJ-05 / MDT-30 adjacent |
| Status | identified |
| Source | Day 3 — R&D team (Matt, Eric Bridge) |
| Field champion | Matt (R&D, primary process), Eric Bridge (R&D, iron making background) |

Problem statement:

Blast furnace burden mix — the ratio of coke, pellets, sinter, and other raw materials charged into the furnace — is adjusted manually based on operator judgment. Temperature indicators, hot metal quality, and production targets drive adjustments, but there is no automated recommendation system. The R&D team described this as an "expert system" opportunity: gather the collective wisdom of experienced operators and embed it in a decision support tool. The stakes are enormous: "5 lbs of coke per net ton of hot metal is a major cost impact" (Palmer). Small changes in burden optimization across 6 BFs = massive savings.

Key evidence — R&D transcript:

Matt: "Right now, there's a lot of manual adjustment to the burden mix. You evaluate the temperature and a lot of indicators, and then you see how it goes. And then you add some more." "You could have some system — you should call them expert systems. Knowledge based — you gather everybody's [wisdom]. And it says the recommendation is to increase your coke rate by X." "Everybody always worries it's a bad piece of information out there, makes a bad decision, a costly bad decision." → Human-in-the-loop is essential. Pellet chemistry debate: Flux pellets vs. acid pellets is a live corporate tension. Mining side wants cheaper acid pellets; iron making insists on flux pellets for BF stability, efficiency, and energy consumption. Eric Bridge used Copilot to analyze the tradeoff in 35 minutes — "citations, cost-benefit, spreadsheet model." AI can objectively quantify what's currently a political argument.

Adjacent opportunity — Burns Harbor precedent:

Lucas Melton at Burns Harbor implemented semi-automated wind rate control using pressure drop as a control variable. Gradually proved to operators it could be automated. This is a precedent for progressive automation at the BF.

IH7 as starting point:

R&D identified Indiana Harbor BF 7 as the most instrumented, most productive blast furnace — best sensor coverage, best data. If starting a BF optimization initiative, IH7 is the ideal proving ground, with Burns Harbor as a close second.

Proposed solution: AI-driven burden mix recommendation engine. Ingests furnace operating data (thermal state, wind rate, pressure drop, slag chemistry, hot metal quality) and recommends coke rate, burden composition, and wind adjustments. Human-in-the-loop — recommendations vetted by operators before execution. Progressive automation: advisory → semi-automated → closed-loop (years).
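A minimal sketch of the advisory loop's human-in-the-loop shape; the rules, thresholds, and units are invented placeholders, not real burden practice:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str
    approved: bool = False  # nothing executes until an operator signs off

def advise(hot_metal_temp_c: float, pressure_drop_kpa: float) -> Recommendation:
    if hot_metal_temp_c < 1480:  # illustrative threshold
        return Recommendation("increase coke rate +2 kg/t",
                              f"thermal state low ({hot_metal_temp_c} C)")
    if pressure_drop_kpa > 160:  # illustrative threshold
        return Recommendation("reduce wind rate 2%",
                              f"pressure drop high ({pressure_drop_kpa} kPa)")
    return Recommendation("hold current burden", "indicators in band")

rec = advise(1472.0, 150.0)
rec.approved = True  # operator vets every recommendation before anything changes
print(rec)
```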

Current state: Manual burden adjustments based on experience. No automated recommendation system. Closest automation: stove firing sequences, charging system weight balancing.
Target state: AI advisory system for burden management. Optimizes coke rate, pellet/sinter ratio, wind rate. Validated by operators before execution.
Value estimate: $5-15M/yr across footprint (coke rate reduction + hot metal quality improvement + BF stability + energy savings). Small per-unit savings × massive tonnage = large aggregate value.
Confidence: Medium — technically proven (expert systems in BF have existed for decades), but requires deep process data and operator trust.
Data readiness: Variable by site — IH7 has the best instrumentation. Middletown BF 3 had hydrogen injection trial infrastructure. Cleveland BF data has gaps ("black holes").
Systems: BF historian, L2 BF control, charging system, stove control, slag/hot metal sampling
Complexity: Medium to Strategic — depends on starting scope (single furnace advisory vs. cross-site platform)
Dependencies: BF historian data access, process engineer partnership, operator buy-in ("they always worry about bad decisions")

Relationship to MDT-30 (Stove Tender Decision Support):

MDT-30 focuses specifically on stove cycling decisions (Palmer's explicit request). MDT-34 is the broader burden/raw material optimization that surrounds the stove. They are complementary: stove management is one input to overall BF thermal management. Could be sequenced as MDT-30 (stove, H1-ish quick win) → MDT-34 (burden, H2 expansion).

Relationship to pellet chemistry debate:

The flux vs. acid pellet question (Tilden mine relevance) could be partially answered by this initiative — quantifying the actual BF cost impact of pellet chemistry changes, rather than relying on corporate politics.

Open questions:
- [ ] What instrumentation exists on each BF across CLF? (IH7 = best, others?)
- [ ] What historian depth at each BF?
- [ ] How are burden recipes currently documented and shared?
- [ ] Who are the experienced burden managers at each site? Retirement timeline?
- [ ] Can R&D provide the "cheat sheet" OneNote for BF data access?
- [ ] What is the annual coke spend across CLF? (to size the savings)


MDT-35: Turn Log Intelligence — Predictive Pattern Mining

Field Value
Horizon H1: Bridge the Gap
Project MDT-P16 (Process Control Knowledge & Virtual SME) / PRJ-01 adjacent
Status validated
Source Day 3 — Brian Benning (Section Mgr, Process Control, 27 yrs)
Field champion Brian Benning

Problem statement:

Middletown has a unique, homegrown system called the Turn Log — a MySQL database running for 20+ years with 1.3 million entries. Approximately 100 technicians across 12-13 plant departments log what they worked on at the end of every turn (8-12 hour shift), yielding ~30 entries per day. Each entry has structured metadata via dropdown menus (technician name, department, equipment area, subsystem) plus free-form text describing the work performed and any associated delays. Brian Benning currently attempts manual pattern analysis: "I go in there and look at it and try to find a pattern of: well, these guys worked on it a couple weeks before it broke, two days before it broke and acted up, and then it finally broke." At 1.3 million entries, this is impossible to do systematically by hand. No one has ever applied analytics or machine learning to this data.

Key evidence — Brian Benning transcript:

"One of the first ones — we have a system called the Turn Log. This is where our hourly technicians go in, and they document what they did over eight hours. It's been running for 20 years now. There's 20 years worth of data in the database." "There's, I think, last I looked about 1.3 million entries in that. So there's a big amount of data to go through." "That's something — as a predictive tool. What could it figure out?" On the UI: structured dropdowns for name, department, equipment area, subsystem + free-form text. "Some of it is free flow, some of it — you do have some guardrails where you guys pour down. That's some metadata, but then it has free form with it."

Proposed solution: AI-powered pattern mining over the Turn Log database.
- NLP extraction of maintenance activities from free-form text.
- Equipment-level timeline reconstruction: what maintenance sequences precede failures?
- Correlation with delay records.
- Anomaly detection on recent entries vs. historical patterns.
- Department-level activity dashboards.
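The core pattern-mining step is well understood; a toy sketch below (field names and sample entries are invented — the real pipeline would read the MySQL export Brian controls):

```python
from collections import Counter
from datetime import date, timedelta

# Toy stand-in for a Turn Log export — schema and entries are invented
# for illustration; the real source is the 1.3M-row MySQL database.
entries = [
    {"date": date(2026, 3, 1), "equipment": "temper_mill_1", "text": "adjusted oiler flow meter"},
    {"date": date(2026, 3, 9), "equipment": "temper_mill_1", "text": "replaced oiler solenoid"},
    {"date": date(2026, 3, 12), "equipment": "temper_mill_1", "text": "oiler failed, line down 4 hours"},
]

def precursor_terms(failure: dict, window_days: int = 21) -> Counter:
    """Count terms logged against the same equipment in the weeks before a
    failure — the systematic version of Brian's manual 'what was touched
    before it broke' scan."""
    start = failure["date"] - timedelta(days=window_days)
    terms: Counter = Counter()
    for e in entries:
        if e["equipment"] == failure["equipment"] and start <= e["date"] < failure["date"]:
            terms.update(e["text"].split())
    return terms

for f in (e for e in entries if "failed" in e["text"]):
    print(f["equipment"], precursor_terms(f).most_common(3))
```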

Current state: 1.3M entries in MySQL. No analytics applied. One person (Brian) occasionally scans manually.
Target state: Automated pattern extraction, equipment failure predictors, real-time anomaly flagging on new entries.
Value estimate: $0.3-1M/yr (improved predictive detection → reduced unplanned downtime, earlier intervention). Value amplified when fed into the Virtual SME (MDT-36).
Confidence: High — data exists, MySQL is directly exportable, 20 years of history provides strong statistical power, structured metadata enables clean segmentation.
Data readiness: Excellent — MySQL database, Brian has direct access, ~100 active users ensure ongoing data flow.
Systems: Turn Log (MySQL, PHP front-end, homegrown)
Complexity: Quick Win — data export + NLP pipeline + dashboard. Well-understood AI/ML task.
Dependencies: MySQL export access (Brian controls). Understanding of dropdown taxonomy (Brian can provide).

Relationship to other initiatives:

Directly feeds MDT-36 (Virtual SME) as a continuous learning source. Enriches MDT-01/MDT-P09 (Ops-Maint Data Integration) by providing the maintenance activity context that makes delay attribution richer. The Turn Log is a Middletown-specific asset, but the pattern (mining maintenance activity logs for predictive signals) applies to any CMMS at any site.

Open questions:
- [ ] Can Brian provide a Turn Log MySQL export (or read-only access) under NDA?
- [ ] What is the dropdown taxonomy? (full list of departments, equipment areas, subsystems)
- [ ] Is there a delay/incident reporting system that can be correlated with Turn Log entries?
- [ ] Are there department-specific conventions in the free-form text that affect NLP accuracy?


MDT-36: Process Control Virtual SME

Field Value
Horizon H1→H2
Project MDT-P16 (Process Control Knowledge & Virtual SME) / PRJ-01 / PRJ-06 adjacent
Status validated
Source Day 3 — Brian Benning (Section Mgr, Process Control, 27 yrs) — his stated #1 priority
Field champion Brian Benning, Chris Sizemore

Problem statement:

Process control knowledge at Middletown is locked inside the heads of a handful of experts who get called at 2am. Junior engineers act as "secretaries" — they receive the operator's call, relay it to the SME, relay the answer back, and learn nothing. Brian: "The young guy, all he is now is a secretary. The operations people call him, say I've got a problem. Okay, I'm gonna call the SME old guy, and then he tells them what to do. And then they do it, and he sits back down and waits for the next call." Brian's teams get "numerous calls after hours — you fix this, why is this behaving this way." The problem is structural: easy phone access to SMEs prevents learning, and when the SME retires, there's no fallback. "The biggest problem is experts leaving, and then you're scrambling. The way you build a new expert is usually a lot of blood."

Key evidence — Brian Benning transcript:

"The last one's real interesting to me. Is what I call a Subject Matter Expert Creator. We've got all these processes out here. The experts age out. What I'd like to see is some sort of AI agent that we could park an old timer in front of a chair and have it talk about the process and how it's supposed to run." "What voltages on certain drives should be when things are running. Just let it brain dump into it. And then we have a problem — the junior engineer sits down and talks to the virtual SME and says, we're having this problem. What should I go check?" On temper mill example: "We put oil on the end of the strip... The oiler is set up based on a message that comes from the level 2 system... It's a Yokogawa flow meter, this model... Then if we have an oiling problem, it can say, did you check the flow meter? Did you check the setup message?" On convergence troubleshooting: Described 3 engineers working 3 different ends of the same problem (display, PLC, wiring) that turned out to be a coffee spill on a surge strip. The Virtual SME could guide this decision tree. On scope: "You'd basically be building an expert that knows level zero, level one, level two. Then you bring the operations people in and explain why certain things are done a certain way. And then you have this expert that knows how the thing runs front to back." On suggested starting point: temper mills — "a rather simple process... maybe I'm biased, that's my area." On Turn Log integration: "We can actually build in that Turn Log analysis as well. What are all the things that have been worked on there? What was done to fix the various — hey, two weeks ago you had to adjust a rear style after you replaced this."

Proposed solution: Per-department AI knowledge agents built from:
1. Expert knowledge capture sessions (structured interviews with experienced engineers)
2. Turn Log history (MDT-35)
3. Teams/SWAMI CMMS work order history
4. Control system documentation (including output from MDT-37 legacy code documentation)
Queryable troubleshooting assistant — describe a problem, get suggested diagnostic steps, relevant past incidents, and equipment documentation. Continuously learns from new Turn Log entries and resolved incidents.
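The retrieval layer is the proven part. A minimal sketch, with invented knowledge snippets standing in for real chunks from interviews, Turn Log, CMMS, and code docs — a production build would use embeddings and feed the retrieved chunks to an LLM to synthesize a cited answer (classic RAG):

```python
import math
import re
from collections import Counter

# Invented example chunks; real ones would come from expert capture sessions,
# Turn Log history, Teams/SWAMI work orders, and legacy code documentation.
kb = [
    "Temper mill oiler: setup message comes from L2; flow is read by a Yokogawa flow meter.",
    "If oiling is inconsistent, check the L2 setup message first, then the flow meter calibration.",
    "Pickle line tension faults often trace back to drive voltage drift.",
]

def bow(text: str) -> Counter:
    """Bag-of-words vector — a toy stand-in for embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def ask(question: str, k: int = 2) -> list[str]:
    """Return the k most relevant chunks for a troubleshooting question."""
    q = bow(question)
    return sorted(kb, key=lambda doc: cosine(q, bow(doc)), reverse=True)[:k]

print(ask("oiling problem at the temper mill — what should I check?"))
```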

Current state: Knowledge lives in people's heads. Wiki attempts exist but "one sits down, types it — it's very time-intensive." After-hours callouts are the norm. No structured knowledge capture process.
Target state: AI agent per department that junior engineers and operators can query 24/7. Continuously updated from maintenance activity logs. Reduces SME dependency.
Value estimate: $0.5-2M/yr (MTTR reduction + after-hours callout reduction + faster onboarding) + significant risk avoidance value (catastrophic knowledge loss when key experts leave). Brian estimated "75% reduction in SME phone calls at night."
Confidence: Medium-High — the technology is mature (RAG + knowledge bases are well-proven). Main uncertainty is depth of knowledge capture per department — requires expert time commitment. Brian is an enthusiastic champion.
Data readiness: Good for Turn Log and CMMS (existing databases). Knowledge capture sessions must be planned and executed.
Systems: Turn Log (MySQL), Teams/SWAMI (CMMS), control system documentation (various), expert interviews (new)
Complexity: Medium — technically proven, but per-department knowledge capture and validation requires sustained effort
Dependencies: Expert availability for knowledge capture sessions (Brian can coordinate). Turn Log data (MDT-35). Legacy code documentation (MDT-37). IT policy on LLM deployment (Brian already proposed an Enterprise endpoint internally).

Palmer readout alignment:

Palmer explicitly named "knowledge capture" as a priority theme. This is the most concrete, implementation-ready knowledge capture proposal in the Sprint. Cross-site scalable: Andrew confirmed Burns Harbor has the same expert flight risk; Cleveland's copilot concept (CLV-P03/Dan Hartman) is the same pattern. Brian and Andrew were both enthusiastic: "I think it's cool shit. I've been here 27 years. The biggest problem is experts leaving."

Cross-site evidence:

Andrew Mullen (Day 3, in meeting): "That's a big deal at Burns Harbor. We've lost a lot of our key subject matter experts." Cleveland: Dan Hartman strongest copilot champion (CLV-P03). Same "call the expert" pattern with Tabware data instead of Turn Log.

Relationship to other initiatives:

Feeds from MDT-35 (Turn Log Intelligence) and MDT-37 (Legacy Code Docs) as data sources. Complements MDT-02 (Maintenance AI Copilot) — MDT-02 is focused on voice capture for work order generation; MDT-36 is focused on knowledge retrieval for troubleshooting. They converge in a mature state. Complements MDT-P09 (Ops-Maint Data Integration) — the Virtual SME is the "last mile" that turns data integration into actionable troubleshooting intelligence.

Open questions:
- [ ] Which department goes first? Brian suggested temper mills. Validate with Chris Sizemore.
- [ ] Who are the 2-3 old-timers for the first knowledge capture sessions? Brian mentioned Bruce (70+, pickle line Fortran), Phil Levins (scheduling), Offinger and Braun, James Worley (retired — willing to come back for a month?).
- [ ] What is Brian's proposed Enterprise LLM endpoint? Has it been approved?
- [ ] How structured are the Teams/SWAMI work orders? (free-form text vs. coded fields)

Day 5 readout validation — LEADERSHIP'S #1 PREFERENCE:

Presented as the 4th of 4 projects ("Process Control Knowledge and Virtual SME"). This was the standout of the readout. Paul took ownership and expanded the vision in real time, far beyond what was presented:
- On the oiler at the temper mill: "The oiler is set up by a message that comes from the PLC. If we have an oiling problem, it can say: did you check the flow meter? Did you check the setup message?"
- On safety integration: "You could expand it into safety. You're doing something at the temper mills, and then you say, we're going to do some sort of lift here. What safety concerns should I have?"
- On training: "Use it as a training tool. Someone new to this department — what should they know?"
- On lockout procedures: "What's the lockout procedure for this piece of equipment? Was there an incident doing this job in 2017?"
- On cross-department knowledge: "As people understand the overall process, how things are cross-connected better, they make better decisions."

Paul explicitly prioritized Virtual SME as the starting point: "Say the fourth one there. You start on a small scale. You pick up area, department... start with process control, then you grow to another subject matter experts."

Dave validated with the "self-funding" lens: this is the project they WANT most, but it may not be the first one they DO (procurement is the self-funding starter). Brian Benning acknowledged the legacy systems as an asset: "There's nothing wrong with stuff made in the seventies."

Key strategic insight: The Virtual SME vision expressed at the readout goes well beyond Brian Benning's Day 3 scope. Paul envisions it covering L0/L1/L2 automation, operations, quality, AND safety — a universal knowledge layer per department. This is MDT-P16 + elements of MDT-P03 + MDT-P07 safety component. The scope needs careful management to avoid overcommitment.


MDT-37: Legacy System Code Documentation & Modernization

Field Value
Horizon H1: Bridge the Gap
Project MDT-P16 (Process Control Knowledge & Virtual SME)
Status validated
Source Day 3 — Brian Benning (Section Mgr, Process Control, 27 yrs)
Field champion Brian Benning

Problem statement:

Middletown's control systems run on legacy languages that predate the current workforce: Fortran 77 (pickle line mod comp system), CRISP (caster control — "I think maybe a half dozen people in the United States know"), PHP 3 (Turn Log and other web apps). One engineer (Bruce, age 70+) has spent 8 months (~80% of his time) manually building flowcharts from Fortran 77 code to document the pickle line system for eventual vendor replacement. The output is required for any integrator to build a replacement system. Brian: "We've got a lot of code that has been orphaned because the engineers that designed it and built it had aged out." Multiple homegrown systems sit on isolated computers with no documentation and no one who understands them.

Key evidence — Brian Benning transcript:

"We've got a lot of code that has been orphaned because the engineers that designed it and built it had aged out." "Right now, we're working on our pickling lines, and I have an engineer spending his entire days going through old Fortran 77 code building flowcharts for it." "Our Caster control system — it runs on a language called CRISP, which I think maybe a half dozen people in the United States know." On code rebasing: "The Turn Log system was written 20 years ago. It was PHP version 3 or whatever. We've got to put it on a new Windows 2025 box, you install the newest version of PHP — half your function calls don't work." On the value proposition: "That code base thing we talked about — the 70 year old going through the Fortran code, he spent 80 percent of his last six months doing that. What's his salary? You cut that to a few hours."

Proposed solution: AI-assisted reverse engineering and documentation of legacy control system code.
- For each codebase: ingest source code → produce structured documentation (flowcharts, decision trees, function-by-function annotation, data flow diagrams) → validate against engineer knowledge → deliver to integrators or internal teams.
- For code rebasing: identify deprecated function calls, generate migration guides, produce equivalent code in modern languages.
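A sketch of the documentation pass, assuming an approved endpoint — `llm` is a hypothetical callable (prompt in, text out), and file extensions / chunking would vary per system:

```python
from pathlib import Path

PROMPT = """You are documenting legacy Fortran 77 control code for handoff to an
integrator. For the source below, produce: purpose, inputs/outputs, side effects,
and a step-by-step decision flow (flowchart-ready).

{code}"""

def document_codebase(root: str, llm, out_dir: str = "docs") -> None:
    """Walk a legacy codebase and emit per-file documentation.
    `llm` is a hypothetical callable (prompt str -> text str) — wire it to
    whatever Enterprise endpoint IT approves. Engineer review (Bruce for
    Fortran) remains the validation step."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    for src in Path(root).rglob("*.f"):              # extension conventions vary by system
        code = src.read_text(errors="replace")        # ancient encodings are common
        doc = llm(PROMPT.format(code=code[:20_000]))  # naive truncation; real chunking needed
        (out / f"{src.stem}.md").write_text(doc)
```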

Current state: Manual documentation — months per system. Bruce (70+) on Fortran, others on CRISP and proprietary languages. No AI tools allowed under current IT policy.
Target state: AI-documented codebases. Structured flowcharts and decision trees delivered in hours instead of months. Migration guides for vendor replacement projects.
Value estimate: $0.2-0.5M/yr (labor savings: 8 months of senior engineer time → hours, repeated across multiple systems). Risk avoidance: if Bruce leaves before documentation is complete, the pickle line control system becomes a black box.
Confidence: High — AI code analysis is well-proven (GitHub Copilot, Claude, etc.). Brian confirmed source code is accessible: "We have the source code for it and stuff, and we can pull it out of the ancient system."
Data readiness: Source code is available for all systems. Brian controls access.
Systems: Pickle line mod comp (Fortran 77), Caster control (CRISP), Turn Log (PHP), various HMI/SCADA systems, Siemens models
Complexity: Quick Win for documentation. Medium for code rebasing/modernization.
Dependencies: IT policy change to allow code analysis in LLM (Brian already advocating). Source code export from isolated systems. Engineer time for validation.
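The rebasing side ("half your function calls don't work") starts with a mechanical pass: flag calls that no longer exist on a modern runtime. A minimal scanner sketch — the function list covers well-known removals (mysql_* and ereg/split went away in PHP 7; each/create_function in PHP 8); the Turn Log codebase itself has not been inspected:

```python
import re
from pathlib import Path

# Calls common in PHP 3/4-era code that no longer exist on a modern runtime
# (mysql_* and ereg/split removed in PHP 7; each/create_function in PHP 8).
REMOVED = re.compile(r"\b(mysql_\w+|ereg|eregi|split|each|create_function)\s*\(")

def scan_php(root: str) -> None:
    """Flag legacy calls that will break on a current PHP box — the first
    mechanical pass of the Turn Log rebasing exercise."""
    for src in Path(root).rglob("*.php"):
        for lineno, line in enumerate(src.read_text(errors="replace").splitlines(), 1):
            for match in REMOVED.finditer(line):
                print(f"{src}:{lineno}: {match.group(1)}")
```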

Relationship to other initiatives:

Directly feeds MDT-36 (Virtual SME) — code documentation becomes part of the knowledge base that the Virtual SME can reference. Supports MDT-P13 (HSM Rolling Model Replacement) — the Siemens model documentation follows the same AI-assisted reverse engineering pattern. Supports any vendor replacement project (pickle line, caster) by accelerating the specification phase.

Cross-site applicability:

Brian: "There are similar problems around the company footprint. At Burns Harbor, a lot of the rolling mills — there's only a couple people that even know how they're put together." This is a cross-site pattern, not a Middletown-specific problem.

Open questions:
- [ ] Priority order for code documentation? (pickle line Fortran, caster CRISP, PHP apps, Siemens models)
- [ ] Can source code be exported to a secure environment for AI analysis? Or must it be analyzed on-prem?
- [ ] Has Brian's proposal for an Enterprise LLM endpoint been submitted? What's the approval timeline?
- [ ] Who validates the AI-generated documentation? Bruce for Fortran, who for CRISP?


New Project Candidates

If an initiative doesn't map to any existing project (PRJ-01 through PRJ-08), it may be a new project candidate.

Initiative Proposed Project Why It's New
MDT-11 Energy optimization PRJ-09 candidate? SunCoke partnership + BF gas recovery may be Middletown-specific enough to warrant separate project
MDT-13 Safety incident trend analytics Standalone quick win No Cleveland equivalent. Dave Reinhold-proposed. Safety analytics is a distinct workstream with unique data source.
MDT-14 HSM centerline tracking (Vision AI) PRJ-10 candidate? Vision AI for strip steering on pair-cross HSM — no PRJ-01 through PRJ-08 maps cleanly to real-time mill vision.
MDT-16 HSM rolling model — Siemens replacement PRJ-10 candidate? Replacing a vendor L2 model with custom AI is a new capability outside the existing project taxonomy.
MDT-22 Fleet vehicle AI copilot PRJ-09 candidate (Fleet Maintenance Platform) Dave's preferred proving ground for copilot/knowledge base. Not a direct map to PRJ-06 (which is plant maintenance workflow). Palmer lukewarm — local problem — but the platform pattern scales.
MDT-30 BF stove tender decision support PRJ-10 candidate (BF Decision Support) Palmer raised explicitly. Scales to 6 BFs across CLF. Raw materials $$$ = massive value. Knowledge capture for critical one-person roles.
MDT-32 BOF endpoint prediction model PRJ-08 extension or standalone R&D already building with Copilot. Scalable to all steel shops. Organic AI adoption proof point.
MDT-33 Cross-site caster reliability analytics PRJ-01 adjacent / standalone Matt's weekly meetings = existing data stream. Middletown benchmark. Company's biggest bottleneck (slab production).
MDT-34 BF burden mix optimization PRJ-10 candidate (BF Decision Support) alongside MDT-30 Expert system for burden recommendations. IH7 as starting point. Pellet chemistry debate resolution. Massive raw material $$.

Daily Update Log

Day 1 — Mar 2

New initiatives identified (field conversations):
- MDT-13: Safety incident trend analytics — Dave Reinhold unprompted proposal, committed data access, called corporate safety contact
- MDT-14: HSM centerline tracking via vision AI — camera-based strip tracking on the pair-cross HSM
- MDT-15: Safety review database — LLM/RAG over safety documents, permit-to-work history, incident reports
- MDT-16: HSM rolling model Siemens replacement — custom AI model to replace vendor black-box L2 model
- MDT-17: X-ray QA for rolled steel — AI classification of x-ray scan output with upstream process traceability
- MDT-18: Root cause analysis platform — structured RCA with AI-assisted cause trees and failure pattern library
- MDT-19: Computer vision QA on the cold rolled line — 100% surface inspection at cold mill exit, defect classification, upstream traceability before coating entry

Seed validations:
- MDT-10 (Caster chemistry) and MDT-12 (Vacuum degasser): casting recipe modeling identified as an explicit opportunity — both seeds move toward validated

Running totals: 19 initiatives (12 seeds + 7 identified today). Status: Day 1 of 5.

Day 2 — Mar 3

New initiatives identified:
- MDT-20: Part number and description cleanup — AI-powered CMMS standardization using a Vooban reference implementation. Foundational data quality that enables procurement automation (MDT-03) and the maintenance copilot (MDT-02).
- MDT-21: QA investigation post-mortem knowledge base — LLM/RAG over past quality investigations (SharePoint/database TBD). Queryable by defect type, line, grade, root cause. Flags recurring patterns before new investigations start from scratch.

Day 2 — Mar 3 (Afternoon)

Transcripts analyzed: - "David and management Tuesday" — Dave Reinhold + Steve Palmer (corporate) + Chris (Sr Div Mgr Finishing/Automation) + Chuck (Quality Mgr) + Dean + Andrew Mullen + full Vooban/IE team. ~90 min. - "Truck guys" — West (truck section mgr) + Zach (mechanic) + Troy (fleet/purchasing) + truck section workers. ~60 min.

New initiatives identified:
- MDT-22: Fleet vehicle AI copilot — Dave's preferred proving ground. Paper/whiteboard → AI diagnostic + knowledge base on Chevy Silverado, then scale.
- MDT-23: Pre-trip inspection digitization — paper forms pencil-whipped → mobile device. Scales to cranes.
- MDT-24: Surface inspection classifier enhancement — Ametek cameras, ~60% accuracy on lamination vs. gouge. Cross-coil pattern recognition gap.
- MDT-25: Through-process quality alerting — tundish/heat-level defect monitoring → proactive slab diversion.
- MDT-26: Mechanical property drift detection — trending before goalposts. Captures "Spidey sense."
- MDT-27: Customer complaint triage — no ticket system, one person manual tracking.
- MDT-28: Intra-plant coil logistics — 40 trips/day, 2 hr/day manual planning, door status problem, GPS coming.
- MDT-29: Oracle auto-reorder intelligence — dynamic max/min recommendations.
- MDT-30: BF stove tender decision support — Palmer raised explicitly, scales to 6 BFs.

Major validations:
- MDT-03 (Procurement): Consensus — $102M local inventory, conversational front-end is the play. Sledgehammer dedup example.
- MDT-04 (Through-process quality): Strong validation from Chuck. Ametek cameras, investigation restart problem, lab accuracy vs. classifier accuracy.
- MDT-20 (Part cleanup): Validated with concrete examples (sledgehammer/refrigerator duplicates).
- MDT-21 (QA knowledge base): Validated — "we've solved this about four times and it keeps coming back."

Palmer strategic guidance (in-person):
- "Laser strikes" — 3-4 scalable, quick-ROI projects only
- Flagged: coil logistics, surface inspection, BF stove optimization, knowledge capture
- Warning: MRO consolidation is "quicksand" / too long-term
- Readout: Friday → Palmer filters → Nicholas/Dale Rupp

Teams CMMS live demo: Confirmed IBM mainframe, 50 years old, messages from 2008, requires department codes and equipment keys, no back button. Terrible UX.

Running totals: 30 initiatives (12 seeds + 18 identified/validated). Status: Day 2 of 5 — in progress.

Day 3 — Mar 4

Transcripts analyzed: - "Inventory Guy Middletown" — Sean (stores/spares manager). ~45 min deep dive on inventory operations, Oracle/Pilog systems, lead times, auto-reorder, locations, reconciliation. - "Quality guys Middletown" — Chuck (Quality Mgr) + Alex (Quality Engineer) + Eric (Sr Quality Engineer). ~60 min on mechanical property prediction, drift detection, mixed steel, smart disposition, quality reporting, research knowledge base, GERS routing, Ametek cameras, customer complaints.

New initiative created:
- MDT-31: Inventory Intelligence & Master Data Cleanup — absorbs MDT-20 (part cleanup) and MDT-29 (auto-reorder intelligence). $104M on-site inventory, $150M total. Sean as champion. Phased: master data cleanup → lead time/reorder intelligence → operational intelligence. H1 quick win with Vooban reusable tooling.

Initiatives absorbed:
- MDT-20 (Part Number Cleanup) → absorbed into MDT-31 Phase 1
- MDT-29 (Oracle Auto-Reorder Intelligence) → absorbed into MDT-31 Phase 2

Initiative refinements:
- MDT-03 narrowed: Scoped to procurement/buying experience only (conversational front-end). Inventory management split out to MDT-31. Depends on MDT-31 for clean data.
- MDT-04 deepened: Added 1% quality loss (2x historical), smart disposition sub-workstream (~50 hold types, 100/day), mechanical property prediction sub-workstream, GERS routing context, Dearborn capacity pressure, Quinn Logic reference at Burns Harbor.
- MDT-21 upgraded to validated: Thousands of research reports in standard scientific format (PDF), micrograph comparison need, Chuck explicitly prioritized over MDT-27.
- MDT-24 refined: 60% accuracy is on specific defect types (not overall). Corporate SIS support group dissolved. Detection vs. classification distinction clarified.
- MDT-26 upgraded to validated + expanded: Now includes quality SPC modernization (SAS replacement). 16-17k tests/month. Main SAS programmer retired. Renamed to "Drift Detection & Quality SPC Modernization."
- MDT-27 deprioritized: Chuck explicitly said claims data is not ready for AI. Redirected priority to MDT-21 + MDT-26.

Key new stakeholders met:
- Sean — Stores/Spares Manager. Champion for MDT-31. 2 years in role, 9 years at the mill. Enthusiastic about technology.
- Alex — Quality Engineer under Chuck. Complaint tracker, quality forensics, statistical analysis. Champion for MDT-21/MDT-26.
- Eric (Quality) — Sr Quality Engineer. Ametek cameras, GERS routing, 20+ years experience. Legacy code/knowledge capture.

New systems identified:
- Pilog — Master part number system (corporate, cross-plant). Departments create part numbers here → transferred to Oracle.
- Noetix — Oracle reporting extraction layer. Used for inventory reports.
- Hiko — External motor/brake warehouse. Separate inventory system. Middletown has no access.
- GERS — Generative Expert Routing System. Legacy recipe management (punch cards → mainframe → web). Black box.
- SAS — Quality reporting. Generates PDFs. Main programmer retired. 1-2 people know the code.
- UDMS — Web-based QMS for procedures, FMEAs, control plans.
- JMP — Statistical analysis (used alongside Minitab).

Running totals: 31 initiatives (12 seeds + 19 identified/validated, 2 absorbed). Status: Day 3 of 5 — in progress.

Day 3 continued — IT Guy + R&D transcripts:

Transcripts analyzed: - "IT Guy" — Brian Benning (Section Mgr, Process Control, 27 years). ~45 min. - "R&D Guys" — Matt (primary process research, 30 yrs), Eric Bridge (steelmaking R&D, 39 yrs), Eric Welty (Sr Director Research, 26 yrs), Rich (melt support). ~90 min.

New initiatives identified:
- MDT-32: BOF Endpoint Prediction Model — R&D already building with GitHub Copilot. Neural operator / transformer approaches. Chose Middletown because it has the best existing model. Scalable to all steel shops.
- MDT-33: Cross-Site Caster Reliability Analytics — Matt runs weekly turnaround meetings across all integrated plants. Middletown near-zero unplanned turnarounds vs. 5-12 at other shops. Company bottleneck = slab production. Data exists in spreadsheets.
- MDT-34: BF Burden Mix Optimization — Expert system for coke rate, pellet ratio, wind rate. Manual adjustments today. IH7 (Indiana Harbor) most instrumented BF = best starting point. Burns Harbor has Lucas Melton's pressure-drop automation precedent.

Major enrichments:
- MDT-09 (Cobble prediction): Massively enriched. Cleveland HSM cobbles "by far worst in the company." IBA real-time data available (millisecond). Cleveland furnace data is a "black hole." Key insight: "fix for last piece is hurt for next piece." R&D actively involved. Scalable across all hot mills.
- MDT-01 (Ops-maint integration): Brian Benning is the third independent validation of the knowledge silo problem (after Dave and Dean). "Biggest problem facing the plant."
- MDT-21 (QA Knowledge Base): R&D confirms from their side — decades of research reports, engineers unknowingly re-research solved problems years later. Eric Welty "not sure about appetite" but acknowledges universal need.
- MDT-30 (BF stove optimization): Burns Harbor precedent (Lucas Melton, pressure-drop control). IH7 as most instrumented starting point.

Key new insights:
- Middletown steel shop = best in CLF. Near-zero unplanned turnarounds, <1% scrap (vs. >10% Cleveland). Cultural root: Dave's "figure it out" philosophy from the mid-90s. R&D wants to understand WHY and export it. Strategic framing for readout: Middletown is the benchmark, not the problem child.
- AI maturity at CLF: Copilot was disabled until late 2025. Stelco was 2 years ahead. Now encouraged but early. Eric's pellet analysis anecdote (35 min on a Saturday, full cost-benefit with citations) = powerful adoption story.
- Matt's OneNote = closest thing to a data landscape map in the company. 20+ links per plant organized by department. Offered to share. Must obtain.
- R&D as cross-site resource: Central support for all plants and mills. Annual round tables. Weekly turnaround meetings. Best practice questionnaire model. "Loss of containment" meeting format.
- Fear of failure culture: Risk of trials rejected when capacity is tight — "afraid to make a change because of the risk." Negative feedback loop.
- Legacy systems broader than Siemens: Brian Benning confirmed multiple homegrown systems on isolated computers. Even vendors can't support them. AI can reverse-engineer and document.

New systems identified:
- IBA — Real-time data system at hot mills (forces, loads, millisecond-level). Used by R&D for cobble investigation.
- L3 systems — Site-level aggregated data (avg temps, forces per bar). Separate from L2.

New stakeholders met:
- Brian Benning — Section Mgr Process Control, 27 years. 15+ years as section manager. Champion for legacy system modernization, knowledge capture.
- Matt — R&D, primary process, 30 years (former Middletown operations). Runs cross-site turnaround meetings. OneNote data map. Champion for cobbles, reliability analytics.
- Eric Bridge — R&D, 39 years. BOF/LMF/iron making/hot rolling background. Deep cross-site knowledge.
- Eric Welty — Sr Director Research, 26 years. All research (product + process). Cautious on investment appetite for knowledge base.
- Rich — R&D, melt support. All plants, everything through casting.

Running totals: 34 initiatives (12 seeds + 22 identified/validated, 2 absorbed). Status: Day 3 of 5 — 8 transcripts processed.

Pre-Visit (Mar 1)

  • Initiative registry created with 12 seeds (3 from Cleveland H1, 9 Middletown-specific or cross-site)
  • Site profile completed with production, historical, and competitive context
  • Key hypotheses documented: through-process quality is likely the Middletown differentiator
  • Workshop approach updated: deep-dive conversations, not formal cluster workshops

Day 1 (Mar 2)

  • 2 transcripts with Dave Reinhold (plant manager): initial conversation + extended operational debrief post-tour
  • CMMS discovery: Middletown uses Teams/SWAMI (Armco-era), NOT Tabware. Major cross-site integration implication.
  • MDT-13 identified: Safety incident analytics — Dave proposed unprompted, committed data access, called corporate safety contact
  • PRJ-04 strengthened: 50% lamination false positive rate confirmed. Vision system at HSM generates corrupt data. Prior non-AI correlation attempts failed.
  • PRJ-01 reframed: Ops-maint gap is NOT the same as Cleveland. Better human practices, but same data fragmentation. Risk is succession, not current operations.
  • PRJ-02/07 confirmed: Slab scheduling in Excel. Planning doesn't know physical slab locations. Someone else may already be working on scheduling optimization.
  • Political dynamic: Dave performing competence, externalizing problems to slab constraint (corporate) and personnel (industry-wide). Real pains visible in unguarded moments.
  • Site tour completed: BF, steelmaking, caster, HSM, pickling. Coating lines not running.
  • Day 2 plan: 8 AM all-hands with Dave + 3 division managers. Priority: ops delay system, historian landscape, quality/metallurgy person, Hot Mill model specifics, identify the scheduling optimization competitor.

Site-Specific Context

Site profile:
- Products: Hot-rolled, cold-rolled, electrogalvanized, galvanized, aluminized, enameling steels
- Key constraints: TBD — likely finishing schedule / BF 3 life / automotive delivery
- Staffing: ~2,363 employees, IAM union
- Unique challenges: AK Steel cultural integration, hydrogen project cancellation morale, Dave Reinhold's dual-site GM role, pair-cross HSM (different technology from Cleveland/Burns Harbor)

Key stakeholders met:

Name Role Relevance Day Met
Dave Reinhold Site GM (also GM Mining) Executive sponsor — skeptical but engaged. Champion for safety analytics + fleet copilot. "Laser strikes" aligned with Palmer. Day 1-2
Steve Palmer Director of Engineering (corporate, steering committee) READOUT GATEKEEPER. Defines what reaches Nicholas/Dale Rupp. Wants cross-site scalable, quick-ROI projects. Flagged: coil logistics, surface inspection, BF stoves, knowledge capture. Day 2 (in-person)
Dean Sr Div Mgr — Primary (BF, steelmaking, railroad) Controls primary side. Key target for BF optimization, caster, BOF details. Day 1 (tour, brief)
Chris Sr Div Mgr — Finishing + Automation + Shipping doors Controls coating lines, temper mills, annealing, all Level 1/Level 2 control engineers, shipping doors. IT/OT gatekeeper + logistics stakeholder. Raised coil logistics optimization and door status. Day 2
Chuck Quality Manager (site-wide) Owns surface inspection (Ametek cameras), complaint process, metallurgy group. Confirmed 60% classifier accuracy, investigation restart problem, drift detection gap, no complaint ticket system. KEY champion for PRJ-04. Day 2
West Truck Section Manager Running fleet maintenance shop for 2.5 yrs. No mechanical background. Built from scratch (whiteboard + paper). 150+ vehicles. Enthusiasm for AI diagnostic. Day 2
Zach Fleet Mechanic (expertise) Mechanical expert, used Mitchell software. Key knowledge holder in truck section. Day 2
Troy Fleet Purchasing / Logistics Manages coil movement logistics. 21 years experience. Understands rush priority inflation. Procurement pain examples (Napa markup, buyer pushback). Day 2
Paul Area Mgr — Maintenance? Cost-saving champion. Scrap processing project (Q3). Real $ projects. Referenced Day 1
Alex Quality Engineer under Chuck Customer complaint tracker (sole person). Participated in Day 3 quality deep dive. Described forensic investigation process, JMP statistical analysis, smart disposition opportunity. Champion for MDT-21, MDT-26. Day 3
Eric (Quality) Sr Quality Engineer under Chuck Participated in Day 3 quality deep dive. Manages Ametek SIS cameras and GERS routing system. 20+ years experience. Described recipe management, cross-plant comparison challenges, SAS reporting system, legacy code documentation need. Day 3
Brian Benning Section Mgr — Process Control Controls all automation/computer systems at Middletown. 27 years with CLF/AK Steel. Confirmed "biggest problem = info siloed in single experts." Champion for legacy system modernization. Day 3
Matt (R&D) Mgr — Primary Process Research 30 years. Runs weekly cross-site caster turnaround meetings. Maintains OneNote with data access maps per plant/dept. Champion for cobble reduction, reliability analytics. Offered to share OneNote. Day 3
Eric Bridge (R&D) Process Research — Steelmaking 39 years. BOF/LMF/ironmaking/hot rolling background. BOF endpoint prediction model context. Deep cross-site knowledge. Day 3
Eric Welty (R&D) Sr Director — All Research 26 years. Leads product + process research. Acknowledged knowledge base need but unsure about investment appetite. Day 3
Rich (R&D) Process Research — Melt Support Covers everything through continuous casting. Supports all plants. Day 3
Sean Stores/Spares Manager 2 years in role, 9 years at mill, formerly maintenance. Manages $104M inventory, 32k parts, 250k sq ft stores. Enthusiastic about technology adoption. Champion for MDT-31. "Kid in a candy store" about AI. Day 3
Eric Archer (remote) Corporate Safety Owns safety reporting data. Dave called him Day 1. Access to incident data for MDT-13. Referenced Day 1-2
Logan Safety Manager (local) Recently left the role. Can describe data fields in safety reporting system. Referenced Day 2