# Site Project Catalog — Middletown Works
Purpose: Groups Middletown's 37 initiatives (34 active + 1 deprioritized + 2 absorbed) into actionable projects for the plant manager. Each site project bundles related initiatives, sizes the local opportunity, and references the corporate project it feeds into.
Audience: Dave Reinhold (GM), Middletown site leadership, internal team
Last updated: 2026-04-16 (consistency pass: corporate project mappings verified against ch5/ch8. MDT-P16 → PRJ-09)
## Project Summary
| ID | Project | Horizon | Corporate | Bundled Initiatives | Value ($/yr) | Champion | Status |
|---|---|---|---|---|---|---|---|
| MDT-P01 | Surface Inspection Enhancement | H2 | PRJ-04 | MDT-05, MDT-24, MDT-14, MDT-19 | $9-27M | Chuck | validated |
| MDT-P02 | Through-Process Quality & Traceability | H2 | PRJ-04 | MDT-04, MDT-25, MDT-26, MDT-12, MDT-17 | $11-29M | Chuck, Chris | validated |
| MDT-P03 | QA Knowledge & Investigation | H1 | PRJ-04 | MDT-21, MDT-27 | $1.5-5M | Chuck, Alex | validated |
| MDT-P04 | Procurement & Inventory Intelligence | H1 | PRJ-06 | MDT-03, MDT-31 | $3-8M | Dave, Sean | validated |
| MDT-P05 | Fleet Maintenance & Copilot | H1→H2 | PRJ-03/PRJ-06 | MDT-22, MDT-23, MDT-08, MDT-02 | $1-6M | Dave, West | validated |
| MDT-P06 | Intra-Plant Coil Logistics | H2 | PRJ-07 | MDT-28 | $2-5M | Chris, West | validated |
| MDT-P07 | Safety Incident Analytics | H1 | new | MDT-13, MDT-15 | low direct $ | Dave | validated |
| MDT-P08 | BF Optimization & Raw Material Intelligence | H2 | PRJ-05 | MDT-06, MDT-30, MDT-34 | $8-25M | Palmer, Matt, Eric Bridge | identified |
| MDT-P09 | Ops-Maintenance Data Integration | H1 | PRJ-01 | MDT-01, MDT-18 | $3-8M | Steve Longbottom (RIT), John Houston | validated |
| MDT-P10 | Finishing Line Scheduling | H2 | PRJ-02 | MDT-07 | $3-10M | TBD | seed |
| MDT-P11 | Steelmaking Process Optimization | H2 | PRJ-08 | MDT-10, MDT-32 | $4-13M | Matt, R&D team | identified |
| MDT-P12 | Energy & Utility Optimization | H2 | new | MDT-11 | $2-5M | TBD | seed |
| MDT-P13 | HSM Rolling Model Replacement | H2 | new | MDT-16 | $3-10M | Brian Benning | identified |
| MDT-P14 | Cobble Prediction & HSM Process Risk | H2 | PRJ-05 | MDT-09 | $2-8M | Matt, R&D team | identified |
| MDT-P15 | Cross-Site Caster Reliability Analytics | H1 | PRJ-01 adjacent | MDT-33 | $3-8M | Matt | identified |
| MDT-P16 | Process Control Knowledge & Virtual SME | H1→H2 | PRJ-09 | MDT-35, MDT-36, MDT-37 | $1-4M + risk | Brian Benning | validated |
Note: MDT-20 and MDT-29 were absorbed into MDT-31. MDT-27 deprioritized by Chuck (Day 3).
## Project Cards
### MDT-P01: Surface Inspection Enhancement
| Field | Value |
|---|---|
| Horizon | H2: Build the Foundation |
| Corporate project | PRJ-04 — Through-Process Quality & Yield |
| Status | validated |
| Champion(s) | Chuck (Quality Mgr), Alex (Quality Engineer), Eric (Sr Quality Eng) |
Local problem statement:
Ametek surface inspection cameras are installed on 4+ lines (electrogalv, hot-dip galv, hot mill, pickling) but the classifier is only ~60% accurate on critical defect types like lamination vs. gouge. Wrong classification sends investigations to the wrong team — a coating line scratch classified as "lamination" wastes steelmaking's time while the actual problem continues. The corporate SIS support group has been dissolved. Two engineers maintain the classifiers locally. Automotive OEMs have near-zero tolerance for surface defects on exposed panels.
Bundled initiatives:
- MDT-05: Coating line defect detection — Ametek classifier improvement, operator trust restoration
- MDT-24: Surface inspection classifier enhancement — ML retraining on lab-confirmed samples, cross-coil pattern recognition
- MDT-14: HSM centerline tracking (vision AI) — camera-based strip steering, cobble prevention
- MDT-19: Computer vision QA — cold rolled line — 100% surface inspection at cold mill exit before coating entry
Systems involved: Ametek SIS cameras (4+ lines), lab analysis database, coil tracking, HSM L2 (Siemens)
Value estimate: $9-27M/yr (classifier improvement $3-8M + coating detection $2-8M + cold mill vision $2-6M + centerline TBD)
Confidence: High (Ametek classifier), Medium (cold mill/centerline — need infrastructure assessment)
Implementation approach: Ametek classifier retraining is the concrete "laser strike" — bounded ML problem, existing hardware, measurable accuracy improvement. Cross-coil pattern detection second. Cold mill and centerline vision systems require camera infrastructure assessment.
Dependencies: MDT-P02 (through-process traceability enriches root cause attribution). Lab-confirmed training samples from historical studies.
Palmer readout alignment:
- Scalability: 4 sites (all coating lines have Ametek or similar equipment)
- Quick-ROI: yes (classifier retraining)
- Palmer named: YES — surface inspection is his explicit priority
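The retraining "laser strike" is, at its core, supervised learning on lab-confirmed samples. A minimal sketch of that loop, assuming the samples arrive as labeled feature vectors — the two features, their distributions, and the lamination-vs-gouge framing below are invented for illustration, not the actual Ametek feature set:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for lab-confirmed samples: laminations tend to be
# elongated and low-contrast, gouges shorter and high-contrast.
n = 200
lamination = rng.normal([8.0, 0.2], [1.5, 0.05], size=(n, 2))
gouge = rng.normal([3.0, 0.7], [1.0, 0.10], size=(n, 2))
X = np.vstack([lamination, gouge])          # [length_mm, contrast]
y = np.array([0] * n + [1] * n)             # 0 = lamination, 1 = gouge

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

# Held-out accuracy is the number the retraining effort would report
# against the ~60% baseline.
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the evaluation discipline: a held-out set of lab-confirmed coils, not operator impressions, is what demonstrates improvement over the current ~60%.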
### MDT-P02: Through-Process Quality & Traceability
| Field | Value |
|---|---|
| Horizon | H2: Build the Foundation |
| Corporate project | PRJ-04 — Through-Process Quality & Yield |
| Status | validated |
| Champion(s) | Chuck (Quality Mgr), Chris (Sr Div Mgr Finishing/Automation), Alex (Quality Eng) |
Local problem statement:
Middletown has the longest finishing chain in CLF: vacuum degasser → caster → pair-cross HSM → pickling → 5-stand cold mill → electrogalv/aluminize/galvanize/anneal. A chemistry issue at the degasser may not be detected until post-coating inspection — 6+ process steps later. When defects are found, quality teams "restart the investigation" — tracing backward through every step manually. Plant-wide quality loss is ~1% (double the historical 0.5%), driven by equipment aging and workforce turnover. 16-17k tensile tests/month but analysis is manual (Access + Minitab + JMP). SAS-based quality reporting generates static PDFs — the main programmer retired. Experienced metallurgists see within-spec drift ("Spidey sense"); new ones see binary pass/fail.
Bundled initiatives:
- MDT-04: Through-process quality traceability — end-to-end genealogy, defect attribution across process steps
- MDT-25: Through-process quality alerting — tundish/heat-level defect monitoring, proactive slab diversion
- MDT-26: Mechanical property drift detection & quality SPC modernization — replace SAS pipeline, automated trending
- MDT-12: Vacuum degasser process optimization — cycle time, alloy addition, vacuum control (seed)
- MDT-17: X-ray QA for rolled steel — AI classification of subsurface defects with upstream traceability
Systems involved: L2 across all stages, caster data, HSM data, cold mill data, coating line data, Ametek SIS, quality systems, GERS (legacy routing), mechanical testing database, SAS (to replace), Access/Minitab/JMP
Value estimate: $11-29M/yr (traceability $5-15M + alerting $2-6M + drift detection $1-3M + degasser $1-3M + x-ray QA $2-5M)
Confidence: High — validated by Quality Mgr, Finishing/Automation leader, quality engineering team, and Palmer
Implementation approach: Phased — (1) SPC modernization / drift detection as H1 quick win (SAS replacement is urgent, data exists). (2) Ametek classifier feeds alerting. (3) Through-process genealogy as H2 foundation. (4) Smart disposition (automated hold release for trivial holds — 50 types, 100/day) as downstream benefit. (5) Property prediction for unfamiliar grades as H3 capability.
Dependencies: MDT-P01 (better classification → better alerting). Cross-step data integration is the core challenge. Drift detection (MDT-26) can proceed independently with existing data.
Palmer readout alignment:
- Scalability: 3-4 sites (quality traceability is universal)
- Quick-ROI: SPC modernization is bounded
- Palmer named: surface inspection feeds this
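The drift-detection piece (MDT-26) is exactly the "Spidey sense" problem: within-spec drift that a binary pass/fail check never flags. A minimal sketch using an EWMA control chart over tensile results — the target, sigma, smoothing weight, and drift profile are illustrative values, not plant data:

```python
import numpy as np

rng = np.random.default_rng(1)
target, sigma = 450.0, 5.0            # MPa — hypothetical tensile target
# 60 stable tests, then a slow upward drift that stays within spec.
tests = np.concatenate([
    rng.normal(target, sigma, 60),
    rng.normal(target, sigma, 40) + np.linspace(0, 12, 40),
])

lam = 0.2                              # EWMA smoothing weight
ewma = np.empty_like(tests)
ewma[0] = target
for i in range(1, len(tests)):
    ewma[i] = lam * tests[i] + (1 - lam) * ewma[i - 1]

# Steady-state 3-sigma EWMA control limits around the target.
limit = 3 * sigma * np.sqrt(lam / (2 - lam))
flagged = np.abs(ewma - target) > limit
first = int(np.argmax(flagged)) if flagged.any() else None
print("first flagged test:", first)
```

Every individual test here can pass spec, yet the chart flags the drift — which is the capability the retiring SAS pipeline's static PDFs never provided.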
### MDT-P03: QA Knowledge & Investigation
| Field | Value |
|---|---|
| Horizon | H1: Bridge the Gap |
| Corporate project | PRJ-04 — Through-Process Quality & Yield |
| Status | validated |
| Champion(s) | Chuck (Quality Mgr), Alex (Quality Engineer) |
Local problem statement:
Thousands of research reports exist in standard scientific format (objective → summary → evidence) as PDFs on local computers. The 6-person product metallurgy group relies on personal memory to connect current problems to past investigations. "There's just a boatload of research, years and years of research reports that I don't think are organized to a level that would be easily accessible." The corrective measure system captures root cause and fix but NOT the investigation path — each new investigation starts from scratch. Micrograph comparison across reports would be "a pretty useful tool."
Day 3 enrichment — R&D cross-validation:
The R&D team confirmed the same pattern from a central perspective: engineers unknowingly re-research problems that were solved years earlier. Eric Welty acknowledged the universal need, though he was cautious about investment appetite. The fact that both site-level quality teams and central R&D independently confirm the knowledge fragmentation problem strengthens this initiative's case for cross-site deployment.
Bundled initiatives:
- MDT-21: QA investigation post-mortem knowledge base — LLM/RAG over research reports, micrograph matching, queryable by defect type/product/line
- MDT-27: Customer complaint triage & tracking — deprioritized by Chuck ("data is not in a good place"), kept as future scope
Systems involved: Local file storage (research reports, PDFs), UDMS (QMS), corrective measure system, quality databases
Value estimate: $1.5-5M/yr (reduced investigation time $1-4M + complaint triage $0.5-1M)
Confidence: High (knowledge base — Chuck explicitly prioritized), Low (complaint triage — deprioritized, data not ready)
Implementation approach: Ingest research report corpus → build LLM-powered search with structured fields (defect type, grade, line, root cause) → add micrograph comparison → flag matches when new investigations start. Complaint triage deferred until data quality improves.
Dependencies: Access to research report file storage. IT cooperation for document ingestion.
Day 5 readout validation:
Presented as one of 4 projects ("QA Knowledge and Investigation"). Paul validated: "Seldom do we remember all the particulars." Alex reportedly "pretty excited" about this one — he transferred from Indiana Harbor and lacks local tribal knowledge. The cross-site sharing angle resonated: "Burns Harbor has [solved it], here's their journey." Chuck and Eric (quality) were absent from the readout due to scheduling conflicts but had validated in the Day 3 quality session.
Palmer readout alignment:
- Scalability: 4 sites (every site has accumulated quality knowledge)
- Quick-ROI: yes (document corpus is bounded, standard format)
- Palmer named: YES — knowledge capture is his explicit theme. Directly aligned with Palmer's criteria.
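The retrieval half of the proposed knowledge base can be sketched in a few lines. TF-IDF stands in here for the embedding model a production RAG system would use, and the report titles and query are invented for illustration — the real corpus is the thousands of PDF research reports:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for the research report corpus (invented titles).
reports = [
    "Edge cracking on high-strength grade traced to coiling temperature",
    "Lamination defects on exposed automotive panels, degasser chemistry",
    "Gouge marks from pinch roll wear on the pickling line",
]
query = "lamination on exposed panel"

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(reports)
scores = cosine_similarity(vec.transform([query]), doc_matrix)[0]

# Surface the closest past investigation to seed the new one.
best = scores.argmax()
print("closest prior report:", reports[best])
```

This is the "flag matches when new investigations start" step: the new defect description is the query, and the hit list replaces personal memory as the link to prior work.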
### MDT-P04: Procurement & Inventory Intelligence
| Field | Value |
|---|---|
| Horizon | H1: Bridge the Gap |
| Corporate project | PRJ-06 — Maintenance Workflow Digitization |
| Status | validated |
| Champion(s) | Dave Reinhold (sponsor), Sean (stores/spares mgr), Troy (fleet purchasing) |
Local problem statement:
$104M in on-site spare parts inventory (~$150M including external warehouses). 32,000 parts with ~10% duplicates. All lead times defaulted to 15 days (actual range: days to 54 weeks). Auto-reorder has ZERO approval — including million-dollar parts. Recently auto-ordered a roll that has never been used at this plant. ~500 stuck move orders up to 2 years old. Monthly reconciliation: 2 people x 2 days triple-checking Oracle/Noetix/accounting. "I need a sledgehammer. Boom — here's part numbers that are available." The buying experience is broken: unsearchable Oracle catalog, remote buyers who question routine purchases, Napa workarounds at $25 for a $5 Amazon part.
Bundled initiatives:
- MDT-03: Procurement automation (conversational front-end) — natural language parts search across Oracle/Pilog, cross-site stock check
- MDT-31: Inventory intelligence & master data cleanup — AI-powered dedup, lead time inference, reorder governance, location rationalization (absorbs MDT-20 + MDT-29). Vooban has a reusable reference implementation.
Systems involved: Oracle (ERP/purchasing/inventory), Pilog (master part numbers, corporate), Noetix (Oracle reporting), Teams/SWAMI (CMMS work order context), Hiko/Applied/Eco (external warehouses)
Value estimate: $3-8M/yr (inventory carrying cost reduction + eliminated duplicate purchases + fewer stockouts + reduced emergency buys + reconciliation labor savings + procurement velocity)
Confidence: High — stores manager confirmed all data points, management consensus, Vooban has reusable tooling
Implementation approach: Phase 1: Master data cleanup using Vooban reference tool (weeks 1-6) → Phase 2: Lead time inference + reorder governance (weeks 4-10) → Phase 3: Conversational procurement front-end on clean data (weeks 8-16). MDT-31 is the prerequisite for MDT-03 — procurement automation fails on garbage data.
Dependencies: MDT-31 (master data cleanup) must precede MDT-03 (procurement automation). Oracle/Pilog export access. Corporate policy on $500 approval threshold.
Day 5 readout validation:
Presented as one of 4 projects. Dave positioned this as the self-funding starter: "Give me this one now because I'm generating real, I'm returning money back that then I can maybe [fund the next project]." Paul was remarkably candid about warehouse quality: "I would not take you to my warehouse because that's somewhat embarrassing." Multiple examples surfaced of duplicate parts, satellite storage, and uncontrolled inventory. The conversational demo resonated strongly — Paul immediately saw how it would work: "It would appear there's been some orders placed by Chris. You might want to go." Budget constraint is critical: Dave explicitly stated no discretionary AI budget exists, so this project must generate immediate ROI to fund the pipeline.
Palmer readout alignment:
- Scalability: 4 sites (~$1B in parts across CLF footprint)
- Quick-ROI: yes — Dave's #1 criterion met, positioned as self-funding starter
- Palmer named: not directly, but he warned MRO consolidation is "quicksand" — position as an interface layer, NOT an Oracle replacement. For the corporate readout: frame as "data cleanup that enables cross-site procurement intelligence" rather than site-specific inventory work.
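The dedup pass at the heart of MDT-31 (~10% duplicates in 32,000 parts) is essentially fuzzy matching over part descriptions. A minimal sketch with invented part numbers and descriptions — a real pass would also compare manufacturer, UOM, and attribute fields from Oracle/Pilog:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Invented sample of the parts master.
parts = {
    "P-10442": "BEARING, ROLLER, SPHERICAL 120MM",
    "P-20871": "SPHERICAL ROLLER BEARING 120 MM",
    "P-30155": "VALVE, GATE, 4IN STAINLESS",
}

def similarity(a: str, b: str) -> float:
    # Sort tokens so word order ("BEARING, ROLLER" vs "ROLLER BEARING")
    # doesn't hide a duplicate.
    def norm(s: str) -> str:
        return " ".join(sorted(s.replace(",", "").split()))
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

# Pairs above the threshold go to a human reviewer, not auto-merge.
suspected = [
    (p1, p2) for (p1, d1), (p2, d2) in combinations(parts.items(), 2)
    if similarity(d1, d2) > 0.8
]
print("suspected duplicates:", suspected)
```

Keeping a human in the merge decision matters here: the cost of a false merge (a genuinely distinct part disappearing from the catalog) is much higher than the cost of reviewing a candidate pair.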
### MDT-P05: Fleet Maintenance & Copilot
| Field | Value |
|---|---|
| Horizon | H1→H2 |
| Corporate project | PRJ-03/PRJ-06 — PdM Platform / Maintenance Workflow |
| Status | validated |
| Champion(s) | Dave Reinhold (sponsor), West (Truck Section Mgr), Zach (mechanic) |
Local problem statement:
Middletown recently absorbed ~150 plant vehicles. The truck section has run for 2.5 years with no CMMS — maintenance tracked on a whiteboard and paper work orders. The Mitchell software license expired. Pre-trip inspections are pencil-whipped. Diagnostic tools (JoeTest/Diesel Laptops) lack internet access. Variant confusion (HP2 vs non-HP2 Escalade) sends techs down wrong troubleshooting paths. Dave wants this as the proving ground for AI: "Build me something — scan that Chevy Silverado and I get everything."
Bundled initiatives:
- MDT-22: Fleet vehicle AI copilot — scan/identify vehicle → full history, PM schedule, parts, troubleshooting, voice-based
- MDT-23: Pre-trip inspection digitization — mobile checklist, photo capture, audit trail, scales to cranes
- MDT-08: PdM proof of value (fleet vehicles) — telematics + digital WOs + basic predictive models as proving ground
- MDT-02: Maintenance AI copilot — plant maintenance version, starting with fleet as Phase 0
Systems involved: none in the truck section today (starting from zero). Inputs and adjacencies: OEM data (web), future CMMS integration, Oracle (purchasing), MobileCom GPS (incoming March). Mitchell1 is used by the logistics team for mobile equipment diagnostics — a third data silo alongside Teams/SWAMI and condition monitoring that should be integrated.
Value estimate: $1-6M/yr direct (fleet $0.5-2M + PdM expansion value $0.5-4M). Strategic value: proves the PdM/copilot concept for production asset expansion.
Confidence: High (fleet — zero baseline to beat), Medium (scaling to plant assets)
Implementation approach: Fleet copilot MVP (3-4 months) → pre-trip digitization (parallel, quick) → fleet telematics (PdM Phase 0) → scale to plant assets with the proven platform. Internet connectivity (fiber, ~1-2 months) is the immediate dependency.
Dependencies: Internet connectivity in truck shop (fiber coming). IT policy on mobile devices. Scaling to plant assets requires CLV-P02-type asset selection.
Palmer readout alignment:
- Scalability: limited (fleet is a local problem)
- Quick-ROI: yes
- Palmer named: no (lukewarm). Must be positioned as a proving ground for cross-site PdM, not a destination.
### MDT-P06: Intra-Plant Coil Logistics
| Field | Value |
|---|---|
| Horizon | H2: Build the Foundation |
| Corporate project | PRJ-07 — Intra-Plant Logistics Optimization |
| Status | validated |
| Champion(s) | Chris (Sr Div Mgr Finishing/Shipping doors), West (truck master) |
Local problem statement:
~40 coil loads/day on 8-10 internal trucks, point-to-point between buildings. West spends ~2 hours/day manually planning movements using IBM Mainframe shop floor system → Excel. Door availability (crane down, lunch, shift change) discovered only when a truck arrives and sits. Empty return trips waste fuel and capacity. "Rush" coils often not truly urgent — experienced people know the difference, new people don't. GPS being installed (MobileCom, ~3rd week of March) but provides visibility only, not optimization. Seniority-based union bidding means unqualified drivers can bump into positions.
Bundled initiatives:
- MDT-28: Intra-plant coil logistics optimization — door status system, route optimization, rush priority intelligence
Systems involved: IBM Mainframe (shop floor/coil tracking), GPS (MobileCom, incoming), radio system, Excel (manual planning)
Value estimate: $2-5M/yr (reduced trucks needed, fuel savings, throughput improvement, less overtime)
Confidence: Medium — GPS provides the data foundation, optimization is well-proven
Implementation approach: Phase 1: Door status system — each department enters availability windows (quick, high-impact). Phase 2: Route optimization using GPS data + door status + coil priority. Phase 3: Rush priority intelligence integrating commercial data.
Dependencies: GPS installation (March). Door status input mechanism (new data entry from departments). IBM Mainframe data access for coil inventory.
Palmer readout alignment:
- Scalability: 4 sites (every plant moves coils)
- Quick-ROI: Phase 1 door status is quick
- Palmer named: YES — coil logistics is his #1 cross-site priority
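Phase 1 is deliberately simple: departments post availability windows, and dispatch consults them before sending a truck instead of discovering a closed door on arrival. A minimal sketch — the door names, windows, and rerouting rule are invented for illustration:

```python
from datetime import time

# Department-entered availability windows (start, end) per door.
# The gap at EG-North models a lunch break.
door_windows = {
    "EG-North": [(time(7, 0), time(11, 30)), (time(12, 30), time(15, 0))],
    "HDG-East": [(time(6, 0), time(14, 0))],
}

def door_open(door: str, at: time) -> bool:
    return any(start <= at <= end for start, end in door_windows.get(door, []))

# Dispatch check: an 11:45 ETA falls in EG-North's lunch gap, so the
# load is redirected instead of the truck sitting at a closed door.
eta = time(11, 45)
target = "EG-North" if door_open("EG-North", eta) else "HDG-East"
print("dispatch to:", target)
```

Even this lookup eliminates the "truck arrives and sits" failure mode; Phases 2-3 layer GPS-informed routing and priority logic on top of the same door-status data.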
### MDT-P07: Safety Incident Analytics
| Field | Value |
|---|---|
| Horizon | H1: Bridge the Gap |
| Corporate project | new |
| Status | validated |
| Champion(s) | Dave Reinhold (proposed it), Steve Palmer (endorsed), Eric Archer (corporate safety) |
Local problem statement:
5 years of safety incident data, 80% of fields not actively analyzed. Trend identification (day of week, demographics, seniority, consecutive hours, weather) done manually on gut feeling. Dave suspects Tuesday is the highest-risk day — no analytical confirmation. When patterns are spotted, the response is massive: 550-person safety training triggered by a manually identified pattern. Palmer wants "consecutive days worked" as a specific variable. Every CLF site uses the same safety reporting system.
Bundled initiatives:
- MDT-13: Safety incident trend analytics — AI analysis of 5 years of incident data, correlation across temporal/demographic/operational variables
- MDT-15: Safety review database (LLM/RAG) — queryable safety knowledge base over decades of review documents
Systems involved: Safety reporting system (plant-level, same across CLF), safety document corpus (PDFs, SharePoint)
Value estimate: Low direct $ — but highest strategic value as a trust-building, Palmer-endorsed cross-site play. Safety is KPI #1 for Dave.
Confidence: High — data exists, Dave owns it, Palmer endorsed it, Eric Archer (corporate) supportive
Implementation approach: AI-driven analysis of incident data (quick — data exists, bounded problem) → quarterly trend reports with validated patterns → safety knowledge base (RAG over documents). Deliver early to build trust for bigger projects.
Dependencies: Data access (Dave committed). Department number mapping differs from other sites.
Palmer readout alignment:
- Scalability: 4 sites (same reporting system)
- Quick-ROI: yes — fast analysis, high strategic value
- Palmer named: YES — endorsed it, wants the consecutive-days variable
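Dave's Tuesday hunch is a textbook hypothesis test: do incidents cluster on particular days, or is the apparent pattern noise? A minimal sketch with invented counts standing in for the 5 years of incident records — the real analysis would run the same test across demographics, seniority, consecutive hours, and weather:

```python
from scipy.stats import chisquare

days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
incidents = [48, 75, 45, 50, 46]       # hypothetical 5-year counts

# Null hypothesis: incidents are uniform across weekdays.
stat, p_value = chisquare(incidents)
highest = days[incidents.index(max(incidents))]
print(f"highest-risk day: {highest}, p = {p_value:.4f}")
```

A small p-value here would turn "Dave suspects Tuesday" into a defensible finding — the kind of validated pattern that should gate a 550-person training response, rather than gut feeling.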
### MDT-P08: BF Optimization & Raw Material Intelligence
| Field | Value |
|---|---|
| Horizon | H2: Build the Foundation |
| Corporate project | PRJ-05 — Cobble & Process Risk Prediction |
| Status | identified |
| Champion(s) | Steve Palmer (corporate sponsor), Matt (R&D, primary process), Eric Bridge (R&D, iron making) |
Local problem statement:
BF 3 (1953, relined 2021) is "pre-retirement" — the $500M hydrogen/DRI replacement was cancelled. CLF has stated AI-driven optimization as the alternative. Stove management is single-expert dependent: one stove tender makes all thermal cycling decisions per BF. When that person retires, years of apprenticeship are needed. 6 blast furnaces across CLF (2 Cleveland, 2 Burns Harbor, 1 Middletown, 1 Indiana Harbor). Raw materials cost is enormous — "5 lbs of coke per net ton is a major impact." Burden mix — the ratio of coke, pellets, sinter, and other raw materials — is adjusted manually based on operator judgment with no automated recommendation system.
Bundled initiatives:
- MDT-06: BF 3 optimization — broader thermal state prediction, burden distribution, fuel rate optimization
- MDT-30: BF stove tender decision support — AI learning model capturing expert decision patterns, advisory system for less experienced operators
- MDT-34: BF burden mix / raw material optimization — expert system for coke rate, pellet/sinter ratio, wind rate recommendations. Human-in-the-loop essential ("everybody always worries about a bad decision"). Pellet chemistry debate (flux vs. acid) can be objectively quantified.
Day 3 enrichment — R&D team evidence:
Burns Harbor precedent: Lucas Melton at Burns Harbor implemented semi-automated wind rate control using pressure drop across the furnace as a control variable. Gradually proved to operators it could be automated — a model for progressive trust-building (advisory → semi-automated → closed-loop).
IH7 = best starting point: Indiana Harbor BF 7 is the most instrumented, most productive BF in CLF. R&D: "Most productive, most technology, sensors, models, instrumentation... that could give us good ground to start from, and then you reuse what you learn." Best data foundation for initial model development.
Knowledge loss is active: Coke making knowledge already lost (Joan Edterov). Iron making down to 1-2 experts per furnace. "Maybe all the maintenance, I put on [two deep]. It's hard to go more than two."
Pellet chemistry opportunity: Flux vs. acid pellets is a live corporate tension (Mining vs. Iron Making). Eric Bridge used Copilot to analyze the tradeoff in 35 minutes — "citations, cost-benefit, spreadsheet model." AI can objectively quantify what's currently a political argument.
Systems involved: BF historian (TBD), L2 BF control, stove control system, charging system, slag/hot metal sampling. Hydrogen injection trial infrastructure may still be in place at Middletown.
Value estimate: $8-25M/yr across the footprint (stove optimization $3-10M + burden mix/raw material optimization $5-15M). Small per-unit savings × massive tonnage = large aggregate value.
Confidence: Medium — Palmer corporate backing, proven industry approach, R&D actively engaged, pending data validation at specific sites
Implementation approach: (1) Stove tender decision support (MDT-30) as the bounded "laser strike" — advisory system for stove cycling. (2) Burden mix recommendation engine with human-in-the-loop. (3) Broader BF thermal optimization as expansion. Starting point: IH7 (best instrumented) or Burns Harbor (closest precedent), NOT necessarily Middletown BF 3.
Dependencies: Historian data access at target BF. BF operations champion identification at starting site. Stove tender / operator cooperation.
Palmer readout alignment:
- Scalability: 6 BFs across CLF
- Quick-ROI: stove decision support is bounded
- Palmer named: YES — raised it himself. Both stove optimization and raw material intelligence align with his themes.
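The stove tender advisory idea is to learn the expert's cycle/hold decision from furnace state and surface it as a recommendation. A minimal sketch — the features (`dome_temp`, `minutes_on_blast`), the thresholds, and the synthetic decision log are all invented; the real system would train on recorded stove tender actions and keep the operator in the loop, per the Burns Harbor advisory → semi-automated → closed-loop progression:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 300
dome_temp = rng.normal(1300, 40, n)        # degC (illustrative)
minutes_on_blast = rng.uniform(20, 80, n)

# Hypothetical expert rule standing in for the logged decisions:
# swap when the dome is hot enough OR the stove has been on blast
# too long.
swap = ((dome_temp > 1320) | (minutes_on_blast > 60)).astype(int)

X = np.column_stack([dome_temp, minutes_on_blast])
advisor = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, swap)

# Advisory output for the current furnace state — a recommendation,
# not a control action.
recommend = advisor.predict([[1340.0, 35.0]])[0]
print("recommend swap" if recommend else "recommend hold")
```

A shallow tree is a deliberate choice for this stage: the learned rules stay inspectable, so the remaining experts can audit what the model has captured before anyone trusts it in an advisory role.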
### MDT-P09: Ops-Maintenance Data Integration
| Field | Value |
|---|---|
| Horizon | H1: Bridge the Gap |
| Corporate project | PRJ-01 — Ops-Maintenance Data Integration |
| Status | validated — three independent validations + Day 5 readout confirmation |
| Champion(s) | Steve Longbottom (RIT) + John Houston (maintenance technology) — corrected at Day 5 readout by Paul. Brian Benning is champion for MDT-P16 (process control knowledge), NOT maintenance data integration. Dean and Chris remain stakeholders. |
Local problem statement:
Same ops-maint disconnect as Cleveland but through a different technical surface. Middletown uses Teams/SWAMI (Armco-era homegrown CMMS), not Tabware. The human information system is more intact than Cleveland (AK Steel heritage preserved some process discipline), but the data fragmentation is the same — and the risk is succession, not current operations.
Day 3 validation — Brian Benning (Process Control, 27 years):
Brian independently confirmed the knowledge silo problem as "the biggest problem facing the plant." This is the third independent validation at Middletown (Dave Day 1, Dean Day 2, Brian Day 3). Brian and Chris Sizemore both landed on the same diagnosis: information siloed in single experts who get called at 2am and are at flight risk. Brian confirmed Middletown has "the most inward-facing data structures" of CLF plants — each department owns its own data islands, and "you have to know how to get to the data, and there's only a few people" who can navigate them.
Day 3 validation — R&D team (cross-site perspective):
R&D confirmed the same pattern from a central support view: every time they support a new plant, they must discover the data landscape from scratch. Matt maintains an informal OneNote with data access cheat sheets per plant per department (20+ links). Even as a cross-site agency, they find data discovery painful: "You have to dig, like put people in a torturing machine, and squeeze it out of them." Some departments are protective: "They have a paranoia that we're going to get them in trouble."
Bundled initiatives:
- MDT-01: Ops-Maintenance data integration — semantic matching on Teams/SWAMI + operational delay reports
- MDT-18: Root cause analysis platform — structured RCA with AI-assisted cause trees
Systems involved: Teams/SWAMI (Armco-era CMMS), operational delay reporting (confirm system), historian
Value estimate: $3-8M/yr (Cleveland benchmark $2-5M + RCA value $1-4M; higher than originally estimated given the three-validation severity)
Confidence: High — three independent validations (Dave, Dean/Chris, Brian Benning). Severity confirmed as "biggest problem facing the plant."
Implementation approach: Same pattern as Cleveland (CLV-P01), adapted to the Teams/SWAMI surface. The RCA platform adds a structured feedback loop. Steve Longbottom (RIT) and John Houston (maintenance technology) are the execution champions. Brian Benning remains a key technical ally who understands the data landscape but is focused on MDT-P16.
Day 5 readout validation:
Presented as one of 4 projects. Leadership validated the approach of extracting Teams/SWAMI data into a queryable database with conversational AI front-end. Paul corrected the champion assignment — this is RIT's domain (Steve Longbottom) and maintenance technology (John Houston), not process control. Lower priority in Dave's self-funding stack than Procurement (MDT-P04) or Virtual SME (MDT-P16).
Dependencies: None.
Palmer readout alignment:
- Scalability: 4 sites
- Quick-ROI: yes
- Palmer named: not directly, but the data silo problem is the root cause behind several Palmer priorities. Must be reframed in Palmer's language for the corporate readout — this is the infrastructure that enables his named priorities (coil logistics, surface inspection, and BF stoves all require data integration).
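The core mechanic of MDT-01 is linking a free-text delay report to the Teams/SWAMI work order describing the same event. A minimal sketch using Jaccard token overlap as a stand-in for the real matcher (which would likely use embeddings); the work orders and delay text are invented:

```python
def jaccard(a: str, b: str) -> float:
    # Token-set overlap: crude but illustrates the matching idea.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Invented Teams/SWAMI work orders and an invented delay report.
work_orders = {
    "WO-5581": "replace hydraulic pump on temper mill entry",
    "WO-5590": "pickler exit bridle motor bearing change",
}
delay = "temper mill down 40 min hydraulic pressure loss at entry"

# Link the delay report to its most similar work order.
best = max(work_orders, key=lambda wo: jaccard(work_orders[wo], delay))
print("delay linked to:", best)
```

Once delay reports and work orders share a link, the downstream questions ("how much downtime did this asset's recurring failure actually cost?") become queryable instead of tribal.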
### MDT-P10: Finishing Line Scheduling
| Field | Value |
|---|---|
| Horizon | H2: Build the Foundation |
| Corporate project | PRJ-02 — Production Scheduling & S&IOP |
| Status | seed |
| Champion(s) | TBD |
Local problem statement:
Multiple finishing lines (electrogalv, galvanizing, aluminizing, annealing, temper mill) all compete for cold-rolled substrate from the same 5-stand mill. Scheduling is likely manual. 4 crews by September adds complexity. Middletown's unique angle on scheduling is the finishing line competition.
Bundled initiatives:
- MDT-07: Finishing line scheduling — AI-assisted multi-line scheduling for substrate allocation

Value estimate: $3-10M/yr
Confidence: Low — seed only, needs validation
Implementation approach: Validate the current scheduling process first. If it is manual/Excel, there is a significant optimization opportunity.
### MDT-P11: Steelmaking Process Optimization
| Field | Value |
|---|---|
| Horizon | H2: Build the Foundation |
| Corporate project | PRJ-08 — Caster Chemistry Optimization |
| Status | identified — R&D actively building BOF endpoint model |
| Champion(s) | Matt (R&D, primary process), Matt Blakely + Patrick (R&D engineers, BOF model) |
Local problem statement:
Two linked steelmaking optimization opportunities: (1) Dual-strand caster + RH vacuum degasser presents a unique chemistry optimization problem (different from Cleveland's caster). (2) BOF endpoint prediction — knowing final carbon content and temperature before tapping — is a critical control point. Middletown has a "fairly robust" existing endpoint prediction model, but R&D believes AI-based approaches (neural operators, transformers) can improve accuracy.
R&D in-flight work (Day 3):
The R&D team has already started building an AI-based BOF endpoint model using GitHub Copilot. Engineers are iterating with Copilot — uploading data, building models, conversing about scientific/thermodynamic physics constraints. They chose Middletown specifically because the existing model provides a strong baseline to beat. "If it can outperform that model, that's saying something." Scalable to all steel shops across CLF.
This is an organic AI adoption story — R&D took initiative without external help. Vooban could offer to accelerate and productionize, framing it as "we're supporting your engineers, not replacing them."
Bundled initiatives:
- MDT-10: Caster chemistry optimization (with RH degasser) — chemistry transitions, alloy optimization
- MDT-32: BOF endpoint prediction model — AI-assisted, R&D in-flight, neural operator / transformer approaches, scalable to all steel shops
Systems involved: L2 BOF control, caster data, historian, existing endpoint prediction model, RH degasser control
Value estimate: $4-13M/yr (BOF endpoint improvement $2-5M + caster chemistry optimization $2-8M)
Confidence: Medium — R&D actively working on BOF endpoint (strong technical signal), caster chemistry is still seed
Implementation approach: BOF endpoint model as the lead — R&D has momentum, and Middletown has the best baseline to beat. Caster chemistry optimization as expansion. Position Vooban as the ML engineering partner to productionize R&D's prototype.
Dependencies: R&D cooperation (strong — they initiated the BOF work), BOF/caster process data access
Palmer readout alignment:
- Scalability: 6+ BOFs, multiple casters across CLF
- Quick-ROI: the BOF model is bounded if R&D has momentum
- Palmer named: not specifically, but scalability across steel shops aligns with his criteria
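Structurally, BOF endpoint prediction is regression from heat inputs to tap conditions. A minimal sketch — the features, the invented linear "physics," and the synthetic data are stand-ins; the real model would train on historian data and be benchmarked against Middletown's existing endpoint model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
hot_metal_temp = rng.normal(1350, 20, n)   # degC at charge (illustrative)
oxygen_blown = rng.normal(8000, 400, n)    # Nm3 (illustrative)
scrap_ratio = rng.uniform(0.15, 0.30, n)

# Invented linear relationship plus noise, standing in for the furnace.
tap_temp = (hot_metal_temp * 0.4 + oxygen_blown * 0.04
            - scrap_ratio * 900 + rng.normal(0, 8, n))

X = np.column_stack([hot_metal_temp, oxygen_blown, scrap_ratio])
X_tr, X_te, y_tr, y_te = train_test_split(X, tap_temp, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Held-out R^2 on unseen heats; the real benchmark is the existing
# L2 endpoint model — "if it can outperform that model, that's
# saying something."
r2 = model.score(X_te, y_te)
print(f"held-out R^2: {r2:.2f}")
```

The framing matters more than the algorithm choice: the existing model supplies a hard baseline, so the success criterion is held-out improvement over it, not an absolute accuracy number.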
### MDT-P12: Energy & Utility Optimization
| Field | Value |
|---|---|
| Horizon | H2: Build the Foundation |
| Corporate project | new (site-specific) |
| Status | seed |
| Champion(s) | TBD |
Local problem statement:
SunCoke integration on-site and BF gas recovery present distinct opportunities. Energy optimization at Middletown may have a different profile than Cleveland's compressed-air story.
Bundled initiatives:
- MDT-11: Energy optimization (SunCoke integration, BF gas recovery)
Value estimate: $2-5M/yr
Confidence: Low — seed only
MDT-P13: HSM Rolling Model Replacement¶
| Field | Value |
|---|---|
| Horizon | H2: Build the Foundation |
| Corporate project | new (site-specific) |
| Status | identified |
| Champion(s) | Brian Benning (Process Control) |
Local problem statement:
The pair-cross HSM runs under a Siemens L2 model — black-box, expensive to update, no adaptability without vendor engagement. Replacing with a custom AI model gives CLF full ownership, grade flexibility, and continuous improvement capability. Aligns with CLF's vendor independence narrative (hydrogen project cancellation → AI self-sufficiency).
Day 3 enrichment — Brian Benning (Process Control):
Brian confirmed the legacy code problem extends well beyond the Siemens HSM model. Multiple homegrown systems reside on isolated computers throughout the plant — even the original vendors cannot support some of them. "Nobody really knows how the process works exactly, so it's a bit of tinkering from both ends." Brian sees AI as a tool to reverse-engineer and document these codebases, making them modifiable and evolvable. This validates MDT-16 AND positions a broader "legacy code modernization" theme applicable across CLF.
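One concrete first step toward AI-assisted reverse engineering — shown here as a hedged sketch, with an invented Fortran fragment standing in for the real codebases — is splitting a legacy source file into per-routine chunks so each unit can be sent to an LLM separately for annotation and flowchart extraction:

```python
import re

# Toy Fortran 77 fragment; real codebases (e.g. the pickle line mod comp)
# would be pulled out of the ancient system as Brian described.
source = """\
      SUBROUTINE SETGAP(GAP, FORCE)
      GAP = GAP - FORCE * 0.001
      END
      FUNCTION TEMPADJ(T)
      TEMPADJ = T * 1.02
      END
"""

# Split the file into one chunk per SUBROUTINE/FUNCTION so each unit
# fits in an LLM context window and can be documented independently.
pattern = re.compile(r"^\s*(SUBROUTINE|FUNCTION)\s+(\w+)", re.MULTILINE)
matches = list(pattern.finditer(source))
units = []
for i, m in enumerate(matches):
    end = matches[i + 1].start() if i + 1 < len(matches) else len(source)
    units.append({"name": m.group(2), "body": source[m.start():end]})

for u in units:
    print(u["name"], len(u["body"].splitlines()), "lines")
```

The chunking is trivial; the point is that months of manual flowcharting reduce to a pipeline of chunk → summarize → assemble, with engineers reviewing rather than transcribing.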
Day 3 enrichment — R&D team:
R&D confirmed the furnace data gap feeding the HSM: "We think that hot mill data with lack of good information from the furnaces — we don't really have a good understanding of that initial slab coming out." Furnace soak profiles are poorly recorded, which limits any rolling model improvement — the input data is incomplete. Closing this data gap (furnace → HSM) is a prerequisite for meaningful model improvement.
Bundled initiatives:
- MDT-16: HSM rolling model — Siemens replacement
Systems involved: HSM L2 (Siemens), historian (Pi or Wonderware), quality systems, furnace control systems (data gap)
Value estimate: $3-10M/yr
Confidence: Low-Medium — technically proven in industry, Brian Benning confirms need and legacy complexity. Needs furnace data gap resolution first.
Implementation approach: Strategic — deep process knowledge required, L2 integration, long timeline. Must address the furnace data gap as a prerequisite. Positions Vooban as long-term IP partner. Brian Benning is the key technical ally for understanding the legacy systems landscape.
MDT-P14: Cobble Prediction & HSM Process Risk¶
| Field | Value |
|---|---|
| Horizon | H2: Build the Foundation |
| Corporate project | PRJ-05 — Cobble & Process Risk Prediction |
| Status | identified — R&D actively engaged, IBA data available |
| Champion(s) | Matt (R&D, primary process), R&D team |
Local problem statement:
Hot strip mill cobbles — catastrophic strip loss events during rolling — are enormously costly. Cleveland's HSM has "by far the worst" cobble rate in the company. At Middletown, the pair-cross HSM may have different cobble dynamics, but the R&D team is now actively investigating cobble root causes using IBA real-time data (millisecond-level forces and loads). Each slab reacts differently to mill settings: "The fix for the last piece is the hurt for the next piece." Furnace data feeding the HSM is a "black hole" — soak profiles poorly recorded, limiting root cause analysis.
Day 3 enrichment — R&D team:
R&D is beginning cobble investigation work at Middletown after extensive Cleveland analysis. IBA data provides millisecond-level detail on forces and loads through the HSM. L3 systems capture averaged data per bar. The key insight: cobble prediction requires understanding the input slab condition (temperature profile from furnace) AND the rolling setup, but furnace data is incomplete. Cleveland is the worst performer; Middletown and Burns Harbor have better rates but still significant cost per event. Cobble prediction is scalable across all hot mills — aggregate cost is enormous.
Bundled initiatives:
- MDT-09: Cobble prediction — pair-cross HSM dynamics, R&D involvement, IBA data foundation
Systems involved: IBA (real-time HSM data, millisecond-level), L3 (site-level averaged data), HSM L2 (Siemens), furnace control, historian
Value estimate: $2-8M/yr at Middletown (cobble cost reduction + yield improvement + cobble-related downtime prevention). Cross-site aggregate significantly higher.
Confidence: Medium — R&D actively working, IBA data exists and is rich, technically proven approach. Furnace data gap limits full root cause capability.
Implementation approach: Data acquisition first (IBA + L3 + furnace where available) → pattern recognition on pre-cobble signatures → predictive alerting. Cleveland may actually be the better starting site (highest cobble rate = highest ROI), with Middletown and Burns Harbor for model generalization.
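The "pre-cobble signature" idea can be illustrated with a toy windowed anomaly check on a simulated millisecond-level force channel — channel name, sampling rate, and threshold are all invented here, and a real detector on IBA data would be far richer than a mean-shift test:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an IBA force channel sampled at 1 kHz: steady rolling
# load with an instability injected near the end of the run.
signal = rng.normal(loc=100.0, scale=1.0, size=10_000)
signal[9_000:] += np.linspace(0, 25, 1_000)   # injected pre-cobble drift

win = 250                         # 250 ms non-overlapping windows
windows = signal.reshape(-1, win)  # 40 windows
means = windows.mean(axis=1)

# Baseline statistics from the first half of the run, then flag windows
# whose mean load deviates beyond k standard deviations of baseline.
base_mean, base_std = means[:20].mean(), means[:20].std()
k = 5.0
alerts = np.where(np.abs(means - base_mean) > k * base_std)[0]
print("alert windows:", alerts)
```

Even this crude version shows why millisecond IBA data matters over L3 per-bar averages: the drift is visible inside the bar, window by window, before the event completes.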
Dependencies: IBA data access, furnace data gap resolution (shared with MDT-P13), R&D cooperation (strong — they're driving this)
Palmer readout alignment:
- Scalability: all hot mills across CLF
- Quick-ROI: depends on data readiness
- Palmer named: not specifically, but scalable process optimization aligns with criteria
MDT-P15: Cross-Site Caster Reliability Analytics¶
| Field | Value |
|---|---|
| Horizon | H1: Bridge the Gap |
| Corporate project | PRJ-01 adjacent (cross-site operations intelligence) |
| Status | identified — data stream already exists (Matt's weekly meetings) |
| Champion(s) | Matt (R&D, primary process — runs weekly cross-site meetings) |
Local problem statement:
CLF's biggest bottleneck is slab production — "the orders are there, if we can make enough slabs." Caster reliability varies enormously across sites: Middletown has nearly zero unplanned turnarounds per week, while other shops have 5-12 per caster. Matt runs weekly cross-site caster reliability meetings tracking turnarounds, delay time, tons vs plan, and working ratio. Best practices are shared through annual round tables and questionnaires. But the process is entirely manual — assembling responses into Excel → PowerPoint, no systematic analysis of what makes Middletown different from Indiana Harbor.
The Middletown benchmark:
Middletown's near-zero unplanned turnarounds vs. 5-12 at other shops is an "order of magnitude" difference. Scrap is <1% (vs. >10% at Cleveland). The culture traces to Dave Reinhold's mid-90s accountability philosophy. R&D actively wants to understand WHY and export it — but lacks the analytical tools to systematically compare across sites. "Try to understand some of the differences. Really a challenge for us."
Bundled initiatives:
- MDT-33: Cross-site caster reliability analytics — structured turnaround database, automated benchmarking, best practice knowledge base, pattern analytics
Systems involved: Excel/PowerPoint (current), questionnaire responses (manual), future: structured database + dashboard + knowledge base
Value estimate: $3-8M/yr (if bottom-performing sites improve by even 2-3 turnarounds/week → more heats → more slabs → more revenue in a supply-constrained market)
Confidence: High — data exists (Matt collecting since Jan 2025), clear champion, massive value if bottom sites improve
Implementation approach:
1. Digitize weekly meeting data into a structured database (quick).
2. Cross-site benchmarking dashboard — working ratio, turnaround frequency, cause categories.
3. AI-structured best practice knowledge base from questionnaire responses.
4. Pattern analytics: what distinguishes Middletown from Indiana Harbor?
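The benchmarking step is mechanically simple once the weekly data leaves Excel — a toy aggregation in the shape of Matt's meeting data (site names and all numbers invented, not actual CLF figures):

```python
from collections import defaultdict
from statistics import mean

# Toy weekly records mirroring the tracked metrics: turnarounds,
# delay time, tons vs plan. Values are illustrative only.
records = [
    {"site": "Middletown",     "week": 1, "turnarounds": 0, "delay_hr": 1.0,  "tons_vs_plan": 1.02},
    {"site": "Middletown",     "week": 2, "turnarounds": 1, "delay_hr": 2.5,  "tons_vs_plan": 0.99},
    {"site": "Indiana Harbor", "week": 1, "turnarounds": 9, "delay_hr": 14.0, "tons_vs_plan": 0.91},
    {"site": "Indiana Harbor", "week": 2, "turnarounds": 7, "delay_hr": 11.0, "tons_vs_plan": 0.93},
]

by_site = defaultdict(list)
for r in records:
    by_site[r["site"]].append(r)

# One benchmarking row per site -- the dashboard's core comparison.
benchmark = {
    site: {
        "turnarounds_per_week": mean(r["turnarounds"] for r in rows),
        "avg_delay_hr": mean(r["delay_hr"] for r in rows),
        "tons_vs_plan": mean(r["tons_vs_plan"] for r in rows),
    }
    for site, rows in by_site.items()
}

for site, row in sorted(benchmark.items(), key=lambda kv: kv[1]["turnarounds_per_week"]):
    print(site, row)
```

The hard part is not the aggregation but the pipeline behind it: getting every shop's weekly numbers into one consistent schema instead of questionnaire responses pasted into PowerPoint.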
Dependencies: Matt's cooperation (strong — offered OneNote data), site willingness to share data (variable — "paranoia" concern)
Palmer readout alignment:
- Scalability: cross-site by definition — compares all integrated sites
- Quick-ROI: yes — dashboard on existing data
- Palmer named: not specifically, but cross-site operations intelligence directly supports his criteria. The benchmarking data could feed directly into the corporate readout.
MDT-P16: Process Control Knowledge & Virtual SME¶
| Field | Value |
|---|---|
| Horizon | H1→H2 |
| Corporate project | PRJ-09 — Knowledge Capture / Virtual SME (Palmer explicit) |
| Status | validated |
| Champion(s) | Brian Benning (Process Control, 27 yrs), Andrew Mullen (program manager) |
Local problem statement:
Process control knowledge at Middletown is locked inside the heads of a handful of aging experts who get called at 2am when something breaks. Junior engineers act as "secretaries" — they take the operator's call, relay it to the SME, then relay the answer back. No learning happens. Brian Benning's group has 29 engineers across 12-13 departments managing automation from L0 (field equipment) through L2 (supervisory/SCADA) and into L3 (shop floor tracking). The control systems run on legacy languages (Fortran 77, CRISP, PHP) that predate the current workforce. One engineer has spent 8 months manually building flowcharts from Fortran code. Bruce, age 70+, is the last person who understands the pickle line mod comp system. When these people leave, the knowledge leaves with them — and there is no recovery path.
Middletown has a unique asset: the Turn Log, a homegrown MySQL system running 20+ years with 1.3 million entries. ~100 technicians across every department log what they worked on each turn (8-12 hr shift), ~30 entries/day. Each entry has structured metadata (name, department, equipment area, subsystem — via dropdowns) plus free-form text describing the work done and any associated delays. Brian already tries to manually scan for patterns — "I go in there and look at it and try to find a pattern of: these guys worked on it a couple weeks before it broke, two days before it broke and acted up, and then it finally broke." At 1.3M entries, this is impossible to do manually.
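Brian's manual pattern hunt — maintenance touches clustering in the days before a failure — is mechanically a lookback-window count once the Turn Log is queryable. A toy sketch (equipment names, dates, and the 14-day window are all invented):

```python
from datetime import date, timedelta

# Toy Turn Log entries (equipment area, turn date) and failure events;
# the schema mirrors the dropdown metadata, but the values are made up.
entries = [
    ("temper_mill_1", date(2026, 3, 1)),
    ("temper_mill_1", date(2026, 3, 10)),
    ("temper_mill_1", date(2026, 3, 12)),
    ("pickle_line",   date(2026, 3, 5)),
]
failures = [("temper_mill_1", date(2026, 3, 14))]

def lookback_activity(entries, equipment, fail_date, days=14):
    """Count maintenance touches on a piece of equipment in the window
    before it failed -- the pattern Brian currently scans for by hand."""
    lo = fail_date - timedelta(days=days)
    return sum(1 for eq, d in entries if eq == equipment and lo <= d < fail_date)

for eq, when in failures:
    n = lookback_activity(entries, eq, when)
    print(f"{eq} failed {when}: {n} touches in prior 14 days")
```

At 1.3M entries the interesting output is not one count but the distribution: which equipment areas show elevated pre-failure activity compared to their baseline touch rate.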
Brian Benning's vision (his stated #1 priority):
"Park an old timer in front of a chair and have it talk about the process and how it's supposed to run. What voltages on certain drives should be when things are running. Just let it brain dump into it. And then we have a problem — the junior engineer sits down and talks to the virtual SME and says, we're having this problem, what should I go check?"
The Virtual SME would know: how the entire control system works (L0/L1/L2), what operators look for, what equipment parameters mean, the last 20 years of maintenance activity from the Turn Log, and what fixes worked in the past. It doesn't get tired, doesn't get angry about being woken up at 2am, and doesn't retire.
Bundled initiatives:
- MDT-35: Turn Log Intelligence — AI-powered pattern mining over the 20-year maintenance activity database (1.3M entries, MySQL). Predictive signal extraction: which sequences of maintenance activity precede equipment failures? Correlation with delay records. Feeds the Virtual SME as a continuous learning source.
- MDT-36: Process Control Virtual SME — Per-department AI agents trained on expert knowledge capture sessions, Turn Log history, Teams/SWAMI work orders, and control system documentation. Queryable troubleshooting assistant for junior engineers. Starting point: temper mills (simpler process, Brian's area, good proving ground). Expandable to all 12-13 departments.
- MDT-37: Legacy System Code Documentation — AI-assisted reverse engineering and documentation of homegrown control systems written in Fortran 77 (pickle line mod comp), CRISP (caster control), PHP (Turn Log and other apps), and Siemens proprietary models (HSM). Replaces months of manual work with hours of AI analysis. Delivers structured flowcharts, decision trees, and system documentation that feeds the Virtual SME and de-risks vendor replacement projects.
Systems involved: Turn Log (MySQL, 20+ years, 1.3M entries), Teams/SWAMI (CMMS), mod comp systems (Fortran 77), caster control (CRISP language), various PHP applications, Siemens L2 models, IBA historian, OPC-migrated telemetry (~70% complete)
Value estimate: $1-4M/yr in direct operational savings + significant risk avoidance
- MTTR reduction ($0.5-2M): Faster troubleshooting via Virtual SME. Brian's teams handle multiple after-hours calls per night across 12-13 departments. If the SME cuts average resolution time by 30-50%, the production uptime impact is measurable.
- Turn Log pattern discovery ($0.3-1M): Predictive signals from 20 years of maintenance activity data could reduce unplanned equipment failures. Currently zero pattern analysis is done on this data.
- Labor efficiency ($0.2-0.5M): 8 months of manual Fortran documentation → hours. Multiple engineers currently spend significant time on manual code archaeology. Freed capacity redirects to higher-value modernization work.
- After-hours callout reduction: Brian described it as potentially "75% reduction in SME phone calls at night." Overtime, burnout, and retention improvement.
- Risk avoidance (not annualized but critical): If Bruce (70+) leaves and the pickle line control system fails, there is no recovery path. If CRISP-knowledgeable engineers are unavailable during a caster control issue, downtime cost is $100K+/hr. The Virtual SME is an insurance policy against catastrophic knowledge loss. This risk is active and growing — Brian: "The biggest problem is experts leaving, and then you're scrambling."
Confidence: High on Turn Log (data exists, accessible MySQL, 20 years of history). High on legacy code docs (demonstrated need, AI capability proven). Medium on Virtual SME (technically mature, but depth of knowledge capture per department is a significant effort requiring expert time commitment).
Implementation approach:
1. Phase 1 — Turn Log Intelligence (weeks 1-6): Export Turn Log MySQL data. Build pattern mining pipeline: equipment-level activity sequences correlated with delay events. Deliver: top equipment failure predictors, department-level activity dashboards, anomaly detection on recent entries. Quick win — data is clean, structured, and immediately available.
2. Phase 2 — Legacy Code Documentation (weeks 2-8, parallel): Start with Brian's highest-priority codebase (pickle line Fortran 77 or caster CRISP). AI ingests source code, produces structured documentation: flowcharts, decision trees, function-by-function annotation. Compare to Bruce's 8-month manual output as validation. Visible wow-factor — months of work in hours.
3. Phase 3 — Virtual SME Pilot (weeks 6-16): Build first Virtual SME for temper mills (Brian's recommendation — simpler process, his area). Ingest: Turn Log entries for the temper mill department, Teams/SWAMI work orders, control system documentation from Phase 2, structured knowledge capture sessions with 2-3 senior engineers. Deploy as queryable agent. Measure: resolution time, callout frequency, user adoption.
4. Phase 4 — Expand (months 4-8): Roll to additional departments based on Phase 3 results. Each new department = incremental knowledge capture sessions + Turn Log data (already available) + code documentation (if legacy systems present). Feed Turn Log entries in real time so the SME continuously learns from new incidents.
Dependencies: Turn Log MySQL export access (Brian controls). Expert time for knowledge capture sessions (Brian to coordinate scheduling of old-timers). Source code access for legacy systems (Brian confirmed: "We have the source code for it and stuff, and we can pull it out of the ancient system"). IT policy on LLM access (Brian already proposed Enterprise LLM endpoint internally).
Palmer readout alignment:
- Scalability: 4/4 sites — Andrew Mullen confirmed Burns Harbor has the same problem: "That's a big deal at Burns Harbor. We've lost a lot of our key subject matter experts." Every CLF site has legacy control systems, aging SMEs, and maintenance activity logs. The Turn Log is Middletown-specific but the Virtual SME pattern applies everywhere.
- Quick-ROI: Yes — Turn Log mining and legacy code documentation deliver in weeks. Virtual SME pilot in months.
- Palmer named: YES — knowledge capture is Palmer's explicitly stated priority. He flagged it alongside coil logistics, surface inspection, and BF stoves. Brian Benning's vision is the most concrete, implementation-ready knowledge capture proposal in the Sprint.
Relationship to other projects:
- Complements MDT-P09 (Ops-Maint Data Integration): Turn Log intelligence provides the maintenance activity context that makes delay attribution richer. The Turn Log feeds MDT-P09's delay attribution and the Virtual SME supports its goal of bridging ops-maint information gaps.
- Complements MDT-P05 (Fleet Copilot): the Virtual SME is the plant-side equivalent of the fleet copilot concept, but focused on process control rather than vehicle maintenance.
- Complements MDT-P13 (HSM Rolling Model): legacy code documentation directly supports the Siemens model replacement effort.
Day 5 readout validation — ★ LEADERSHIP'S #1 PREFERENCE:
Paul took ownership of the Virtual SME vision during the readout, expanding it far beyond the presented scope. He spontaneously described:
- Control system troubleshooting: oiler setup chain (PLC → Yokogawa flow meter → spray headers) — "if we have an oiling problem, it can say: did you check the flow meter?"
- Safety integration: "You could expand it into safety. What safety concerns should I have? Okay, well, you've got two cranes in this crane bay."
- Lockout procedures: "What's the lockout procedure for this piece of equipment? Was there an incident doing this job in 2017?"
- Training new hires: "Use it as a training tool. Someone new — what should they know?"
- Cross-department knowledge: "As people understand the overall process, how things are cross-connected better, they make better decisions."
Paul's starting recommendation: "You start on a small scale. Pick up area, department. Start with process control, then you grow."
Strategic significance: The Virtual SME was Paul's and Dave's clear #1 preference — but Dave acknowledged it may not be the FIRST project to execute: the procurement/inventory cleanup generates the quick ROI that creates budget room for this more ambitious play. Dave: "That one that we like the most, maybe the most expensive to implement... let's start thinking about that one over, look, give me some of this right now for sure."
Scope warning: The readout vision goes beyond Brian Benning's Day 3 framing. Paul envisions L0-L2 automation + operations + quality + safety — a universal knowledge layer per department. The scope needs careful management to avoid overcommitment in Phase 1. Start narrow (temper mill, process control only), prove, then expand.
Cross-site evidence:
- Cleveland: Dan Hartman is the strongest copilot champion (CLV-P03). Same "call the expert at 2am" pattern. Tabware data is the Cleveland equivalent of the Turn Log.
- Burns Harbor: Andrew confirmed key-person risk is "a big deal." IE has existing relationship. Likely strongest candidate for first expansion.
Corporate Project Cross-Reference¶
| Corporate Project | Site Projects | Validation Strength |
|---|---|---|
| PRJ-01: Ops-Maint Integration | MDT-P09, MDT-P15, MDT-P16 | Strong — three independent validations (Dave, Dean, Brian Benning). "Biggest problem facing the plant." MDT-P16 adds Turn Log intelligence + Virtual SME as knowledge preservation layer. |
| PRJ-02: Production Scheduling | MDT-P10 | Seed — finishing line competition is unique angle |
| PRJ-03: PdM Platform | MDT-P05 | Partial — fleet as proving ground, not production assets yet |
| PRJ-04: Quality & Yield | MDT-P01, MDT-P02, MDT-P03 | Strong — highest MDT signal. Ametek 60%, longest finishing chain, Palmer priority |
| PRJ-05: Cobble & Process Risk | MDT-P08, MDT-P14 | Strong — BF stove + burden mix (Palmer named, R&D enriched). Cobble prediction now R&D-engaged with IBA data. |
| PRJ-06: Maint Workflow | MDT-P04, MDT-P05 | Strong — $104M inventory, management consensus, fleet copilot |
| PRJ-07: Logistics | MDT-P06 | Strong — 40 trips/day, Palmer #1 priority |
| PRJ-08: Caster Chemistry | MDT-P11 | Identified — R&D already building BOF endpoint model with Copilot. RH degasser adds unique dimension. Scalable to all steel shops. |
New project candidates from Middletown:
- Safety analytics (MDT-P07) → candidate new corporate project
- Cross-site caster reliability (MDT-P15) → candidate new corporate project (cross-site operations intelligence)
- BF burden mix within MDT-P08 → strengthens PRJ-05 scope
- BOF endpoint prediction within MDT-P11 → extends PRJ-08 scope
- Energy optimization (MDT-P12) → likely stays site-specific
- HSM rolling model (MDT-P13) → likely stays site-specific, but the legacy code modernization theme is cross-site
- Process control knowledge & Virtual SME (MDT-P16) → PRJ-09, Palmer knowledge capture theme. The Turn Log is a Middletown-specific asset, but the Virtual SME pattern is cross-site. Brian Benning + Andrew Mullen confirmed the Burns Harbor need.
★ Day 5 Readout — Palmer Alignment Gap Analysis¶
The Day 5 readout to Middletown leadership presented 4 projects. These do NOT fully align with Palmer's explicitly named priorities from Day 2. This gap must be resolved before the corporate readout.
Readout vs Palmer Comparison¶
| # | Readout Presented | Palmer's Day 2 Shortlist | Alignment |
|---|---|---|---|
| #1 | Procurement & Inventory Intelligence (MDT-P04) | Coil logistics (PRJ-07) | MISMATCH — Palmer's #1 was not presented |
| #2 | QA Knowledge & Investigation (MDT-P03) | Surface inspection (PRJ-04/Ametek) | PARTIAL — knowledge capture aligns, but surface inspection specifically was called "not short-term" |
| #3 | Maintenance Data Integration (MDT-P09) | BF stove optimization (PRJ-05) | MISMATCH — BF stoves not presented (best starting point is IH7/Burns Harbor) |
| #4 | Virtual SME (MDT-P16) | Knowledge capture (broad theme) | ALIGNED — strongest overlap |
What This Means for the Corporate Readout¶
The risk: If we present Middletown's 4 readout projects as-is to Palmer, he sees:
- His #1 (coil logistics) missing
- His surface inspection priority dismissed as "not short-term"
- A procurement project he explicitly warned against (MRO "quicksand")
- A maintenance integration project he didn't name
The opportunity: The underlying value propositions DO align with Palmer's criteria — they just need reframing:
| Project | Palmer Reframe |
|---|---|
| Procurement/Inventory (MDT-P04) | "Cross-site parts intelligence — $1B footprint opportunity. Interface layer, not MRO replacement. Weeks to first value." |
| QA Knowledge (MDT-P03) | "Cross-site quality knowledge capture — Palmer's explicit theme. Research reports are standard format across all sites." |
| Maintenance Data (MDT-P09) | "The enabling infrastructure that makes coil logistics and surface inspection work — you can't optimize routes without connected data." |
| Virtual SME (MDT-P16) | "Knowledge capture at scale — the concrete implementation of Palmer's theme. Turn Log as unique data asset, Virtual SME pattern deployable across all sites." |
Recommended Corporate Readout Strategy¶
Present 5-6 projects, not 4. The Palmer-named projects (coil logistics, surface inspection, BF stoves) belong in the corporate readout even if they weren't in the Middletown site readout. The Middletown-validated projects (procurement, QoE knowledge, Virtual SME) belong because site leadership endorsed them and they generate quick ROI.
Tier 1 — Palmer Priorities (named by Palmer, validated by evidence):
1. Coil logistics (PRJ-07) — Palmer's #1, 40 trips/day evidence, GPS arriving March
2. Surface inspection (PRJ-04) — Palmer named, 60% Ametek accuracy, cross-site
3. BF stove optimization (PRJ-05) — Palmer named, R&D validated, 6 BFs, IH7 starting point
Tier 2 — Site-Validated Quick Wins (self-funding, endorsed by leadership):
4. Virtual SME / Knowledge Capture (MDT-P16) — Palmer's knowledge capture theme, leadership's #1
5. Procurement & Inventory Intelligence (MDT-P04) — Quick ROI, $1B footprint, self-funding starter
6. QA Knowledge (MDT-P03) — Knowledge capture, cross-site scalable, standard format
Tier 3 — Foundation (don't present as standalone, but reference as enabling):
- Ops-Maint Data Integration (PRJ-01) — the infrastructure layer that makes everything else work
This tiered approach lets Palmer see his priorities front and center while also showing that the site leadership has endorsed complementary projects with quick ROI. The self-funding cascade (procurement → Virtual SME → bigger plays) addresses Dave's budget constraint AND Palmer's "quick payback" requirement simultaneously.
Site-Specific Notes¶
- Middletown steel shop = BEST in CLF: Near-zero unplanned turnarounds (vs 5-12 at other sites), <1% scrap (vs >10% at Cleveland). An "order of magnitude" difference per R&D. Cultural root: Dave Reinhold's mid-90s accountability philosophy. Strategic framing: Middletown is the benchmark to export, not the problem child to fix.
- Longest finishing chain in CLF: Electrogalv, aluminizing, galvanizing, enameling, annealing, temper mills. This is WHY through-process quality is the Middletown differentiator.
- IAM union (not USW) — different labor dynamics, may affect copilot adoption and scheduling changes.
- AK Steel heritage: Preserved some process discipline and quality culture from the Kawasaki Steel joint venture, but same data fragmentation as other sites.
- "Most inward-facing data structures" in CLF (Brian Benning) — each department owns its own data islands. Data discovery requires knowing the right people.
- Systems landscape: Teams/SWAMI (Armco-era IBM mainframe CMMS), IBM Mainframe (shop floor/coil tracking), Ametek (SIS cameras), Oracle (ERP), GERS (legacy recipe routing), SAS (quality reporting — programmer retired), IBA (real-time HSM data, millisecond-level), L3 (site-level averaged data), Mitchell1 (fleet diagnostics).
- Dave Reinhold is stretched: GM of both Middletown and Mining. "Laser strike" language echoes Palmer.
- Hydrogen project cancellation: BF 3 alternative is AI optimization — this narrative supports MDT-P08.
- Dearborn closure: Capacity is tight, every quality remake costs more than before.
- Burns Harbor has Quinn Logic: Smart disposition system that Middletown does manually (50 hold types, 100/day). Cross-site learning opportunity.
- R&D as cross-site resource: Central support for all plants and mills. Matt's OneNote = closest thing to a data landscape map (20+ links per plant). Weekly turnaround meetings. Annual round tables. "Loss of containment" meeting format. Must obtain Matt's OneNote.
- AI maturity at CLF: Copilot was disabled until late 2025. Stelco was 2 years ahead. Now encouraged but early. Eric Bridge's pellet analysis (35 min Saturday, full cost-benefit with citations) = powerful adoption story.
- Fear of failure culture: Risk of trials rejected when capacity is tight — "afraid to make a change because of the risk." Negative feedback loop that must be managed.