# Initiative Registry — Cleveland Works
Purpose: Living tracker of all AI/digital initiatives identified during the Cleveland site sprint. Evolves daily as workshops validate, size, and prioritize.
Workflow:

- Day 1-2: Identify (status: identified)
- Day 2-3: Validate in workshops — field reaction, data check (status: validated or rejected)
- Day 4: Size — attach $ estimates, confidence, feasibility (status: sized)
- Day 5: Prioritize — matrix placement, sequence, present to decision makers (status: prioritized)

Last updated: 2026-04-16 (appendix preparation — no content changes, field evidence preserved as collected)
## Master Summary Table
| ID | Initiative | Horizon | Project | Status | Value ($/yr) | Confidence | Complexity | Priority |
|---|---|---|---|---|---|---|---|---|
| CLV-01 | Ops-Maintenance data integration | H1 | PRJ-01 | validated | $2-5M | Med-High | Quick Win | — |
| CLV-02 | Cross-stage scheduling / S&IOP | H3 | PRJ-02 | validated | $10-30M | Low-Med | Strategic | — |
| CLV-03 | Roll change sequencing optimization | H2 | PRJ-02 | identified | $5-15M | Med | Medium | — |
| CLV-04 | Cobble prediction & prevention | H2 | PRJ-05 | validated | $3-10M | Med | Medium | — |
| CLV-05 | Caster chemistry transition optimization | H2 | PRJ-08 | identified | $2-8M | Med | Medium | — |
| CLV-06 | 90-day slab remelting reduction | H2 | PRJ-04 | identified | TBD | Med | Medium | — |
| CLV-07 | Maintenance co-pilot (technician assist) | H1 | PRJ-06 | validated | $0.5-2M + data uplift | Med-High | Quick Win | — |
| CLV-08 | Procurement decision tree / auto-approval | H1 | PRJ-06 | validated | $1-3M + velocity | Med-High | Quick Win | — |
| CLV-09 | Spare parts inventory intelligence | H2 | PRJ-03 | validated | $1-4M | Low-Med | Medium | — |
| CLV-10 | Intra-plant coil movement optimization | H2 | PRJ-07 | identified | TBD | Low | Medium | — |
| CLV-11 | Operator decision support (BF/HSM) | H2 | PRJ-05 | validated | $1-5M | Med | Medium | — |
| CLV-12 | PdM platform (multi-asset — bag house, scrubbing, cranes) | H1→H2 | PRJ-03 | validated | $3-12M | Med | Quick Win→Expand | — |
| CLV-13 | Environmental compliance automation | H1 | new | identified | $0.5-1M | Low | Quick Win | — |
| CLV-14 | Through-process traceability (heat→coil) | H2 | PRJ-04 | identified | Enabler | Med | Strategic | — |
| CLV-15 | Dynamic pricing by capacity consumption | H3 | PRJ-02 | identified | $5-20M | Low | Strategic | — |
| CLV-16 | Rail car inventory visibility | H2 | PRJ-07 | identified | TBD | Low | Medium | — |
| CLV-17 | BOF endpoint prediction | H2 | PRJ-05 | seed | $0.5-2M | Med | Quick Win | — |
| CLV-18 | Caster breakout prediction (ML) | H2 | PRJ-05 | seed | $1-3M | Med | Quick Win | — |
| CLV-19 | Surface defect detection (CNN on SIS) | H2 | PRJ-04 | seed | $2-5M | Med | Medium | — |
| CLV-20 | BF thermal state prediction | H2 | PRJ-05 | seed | $1-3M | Med | Medium | — |
| CLV-21 | Utility management system | H2 | new | validated | $7M+ | Med | Medium | — |
| CLV-22 | BOF bag house predictive monitoring | H1 | PRJ-03 | validated | $2-5M | Med-High | Quick Win | — |
| CLV-23 | BOF scrubbing system predictive monitoring | H1→H2 | PRJ-03 | validated | $2-5M | Med | Quick Win→Expand | — |
| CLV-24 | Caster segment lifecycle tracking | H1 | PRJ-06 | validated | $1-3M | Med | Quick Win | — |
| CLV-25 | Critical spares identification & digitization | H1 | PRJ-06 | validated | $1-3M | Med-High | Quick Win | — |
| CLV-26 | Raw materials cost optimization | H2 | PRJ-02 | identified | TBD | Low | Strategic | — |
| CLV-27 | Enterprise data platform (cloud strategy) | H2→H3 | new | identified | Enabler | Low | Strategic | — |
Status key: seed = from pre-visit research, not yet discussed | identified = emerged from field conversations | validated = confirmed by multiple stakeholders with data/evidence | sized = $ estimate attached with confidence | prioritized = placed on matrix, sequenced | rejected = not viable or not relevant
Horizon key: H1 = 0-6 months "Bridge the Gap" | H2 = 6-18 months "Build the Foundation" | H3 = 18-36 months "Predict & Optimize"
Note (Day 3): The original Cluster 1/2/3 framing has been replaced with a progressive horizon architecture. See 03-consolidation/strategic-orientation.md for the rationale. Maintenance-operations information flow is the flagship orientation; data centralization is the emergent outcome of progressive project delivery, not a separate initiative.
## Initiative Detail Cards

### CLV-01: Ops-Maintenance Data Integration
| Field | Value |
|---|---|
| Horizon | H1 — Bridge the Gap |
| Sub-process | Detection → Diagnosis (cross-cutting) |
| Status | validated |
| Source | Day 1 transcripts #1, #2 + Day 2 transcript #6 + Day 3 transcripts #7, #8 |
| Field champions | Jamie Betts (1SP maint), Dan Hartman (HSM), Paul Aaron Dash |
Problem statement:
"Our operational side, it's all captured. The maintenance side, it's all captured. But we need to be able to integrate them." — Operations tracks delays by area (coiler, furnace, etc.) in SQL reports. Maintenance tracks work orders in Tabware by asset hierarchy. A 14-minute electrical delay logged by operations has no matching work order. They can't overlay the data to have productive conversations about root causes.
Proposed solution: Semantic matching layer that links operational delay codes/areas to Tabware asset hierarchy. AI can resolve different naming conventions (operations says "coiler area," Tabware says "H08-coiler-2"). Dashboard overlaying both views.
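A minimal sketch of the linkage idea, assuming illustrative names throughout (the alias map, asset list, and example strings are not actual Cleveland system fields): exact aliases first, fuzzy string matching as a fallback, with an embedding model or LLM as the eventual generalization.

```python
from difflib import SequenceMatcher

# Hand-seeded aliases map ops vocabulary onto Tabware hierarchy nodes;
# an LLM or embedding model would generalize beyond exact aliases.
ALIAS_MAP = {"coiler area": "H08-coiler-2"}   # illustrative, not real IDs
TABWARE_ASSETS = ["H08-coiler-1", "H08-coiler-2", "H05-furnace-3"]

def link_delay_to_asset(ops_area: str) -> tuple:
    """Return the best-matching Tabware asset and a match score (0-1)."""
    if ops_area in ALIAS_MAP:
        return ALIAS_MAP[ops_area], 1.0
    # Fuzzy fallback: similarity against the asset hierarchy
    scored = [(a, SequenceMatcher(None, ops_area.lower(), a.lower()).ratio())
              for a in TABWARE_ASSETS]
    return max(scored, key=lambda t: t[1])

print(link_delay_to_asset("coiler area"))   # ('H08-coiler-2', 1.0)
print(link_delay_to_asset("coiler 2"))      # fuzzy match, lower score
```

The score gives a confidence cue: links below a review threshold can be routed to a human instead of committed silently.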
Current state:

- Both hierarchies exist and are ~95% complete
- Delay reports: SQL-based, monthly/weekly/daily, by department
- Tabware: work orders with hours, but default P3 priority (not triaged properly)
- No cross-reference possible today
- Delay categories corrected after the fact, not in real time: "we spent a lot of time going back afterwards and correcting delay categories" (Dan Hartman)
- Operators enter delays under time pressure — "I just saw we're down for 26 minutes, what were we down for?"
- >60 min delays require area manager documentation, but only "a couple sentences"
- Operations categories use main/sub/text format NOT based on Tabware equipment hierarchy (Dan Hartman)
- Monthly reliability charts published but reviewed in existing meetings — no dedicated ops-maint integration forum

Target state:

- Every operational delay linked to maintenance work order(s)
- Unified dashboard: "14 min delay → here's the WO → here's what was done → was it a repeat?"
- Ability to argue attribution with data, not anecdotes

Value estimate: $2-5M/yr (reduced misattribution, faster root cause, targeted maintenance spend)
Confidence: Medium-High — validated by 7+ stakeholders across 5 transcripts
Data readiness: Partial — both datasets exist, linkage logic needs to be built
Systems: Tabware + operational SQL reports (web-based dashboards), Axiom (ERP)
Complexity: Quick Win (6-8 weeks to prototype)
Dependencies: Access to both SQL report databases and Tabware
Open questions:

- [x] What is the operational delay reporting system called? → SQL-based web dashboards, custom-built (Ken/IT)
- [ ] How many delay-minutes per month are disputed between ops and maintenance?
- [ ] Can we get sample exports from both systems?
- [x] What naming convention does ops use? → Main/sub/text categories, NOT Tabware hierarchy
### CLV-02: Cross-Stage Scheduling / S&IOP
| Field | Value |
|---|---|
| Horizon | H3 — Predict & Optimize |
| Sub-process | Scheduling (cross-cutting layer) |
| Status | validated |
| Source | Day 1 transcript #2 + Day 3 transcripts #7, #8 |
| Field champions | Andrew Mullen (strategic), Terry Fedor, John Stubna (executive) |
Problem statement:
"The commercial side has a certain set of priorities... operation planning has their own set of rules and logic... capacity consumption just doesn't get factored in. None of the constraints talk to each other." — 5-6K orders/week, customers ordering 3 months out, still missing ship-by dates. No formal S&IOP. Scheduling is reactive and silo-driven.
Proposed solution: Phase 1: Cross-functional visibility dashboard (demand + capacity + constraints in one view). Phase 2: AI-driven scheduling optimization that balances customer demand, capacity consumption, changeover minimization, and inventory levels.
Current state:

- No demand forecasting (firm orders only)
- No cross-functional planning forum
- Scheduling siloed: each operating unit optimizes locally
- "Squeaky wheel" drives prioritization
- Stubna confirmed: 1SP = THE constraint, 28 heats/day target, all casters booked through May
- Manual scheduling process that breaks down during downturns (Dan Hartman: Excel-based, "incredibly complex")
- In September (4th crew), constraint shifts to iron producing or HSM — dynamic constraint ID needed

Target state:

- Integrated business planning cycle (monthly S&IOP)
- AI-optimized production scheduling (grade sequencing, changeover minimization)
- Dynamic re-planning when disruptions occur ("hit the button, recalculate")

Value estimate: $10-30M/yr (Andrew: "1% improvement = tens of millions")
Confidence: Low-Medium — massive scope, needs decomposition. Validated by Stubna as real pain.
Data readiness: Partial — ERP (Axiom) has demand, L2 has production, but not connected
Systems: Axiom (ERP), L2 process control, Tabware (maintenance windows)
Complexity: Strategic (12-18 months for meaningful optimization)
Dependencies: CLV-01 (ops-maint integration), CLV-14 (traceability), data architecture
Open questions:

- [x] Who owns scheduling today? → Siloed per unit, no central owner
- [ ] Can we see the current scheduling process/tools?
- [ ] What does the planning team's manual say? (Andrew mentioned it exists digitally)
### CLV-03: Roll Change Sequencing Optimization
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | Hot Strip Mill — scheduling |
| Status | identified |
| Source | Day 1 transcript #1 + Day 3 transcript #7 (Dan Hartman/HSM context) |
| Field champion | Dan Hartman (HSM) |
Problem statement: Roll changes account for ~50% of HSM operational delays (which are 12% of total time, so ~6% of total HSM availability). Changeovers are driven by width/thickness changes between products. Sequencing to group similar products would reduce changeover frequency, but must balance against customer demand, inventory, and slab availability.
Proposed solution: AI-driven grade/width sequencing that minimizes changeovers while meeting customer delivery commitments. Similar to "block planning" concept from paper industry.
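A toy sketch of the sequencing idea under stated assumptions: a roll change is assumed whenever adjacent widths differ by more than a tolerance, and widest-first mirrors the manual logic noted below. Real sequencing must also respect delivery dates and slab availability.

```python
def sequence_slabs(slabs: list[dict]) -> list[dict]:
    # Widest-first grouping, as the existing manual logic does
    return sorted(slabs, key=lambda s: -s["width_mm"])

def count_changeovers(seq: list[dict], width_tol: float = 50.0) -> int:
    # Assumption: a changeover when adjacent widths differ by > tol
    return sum(1 for a, b in zip(seq, seq[1:])
               if abs(a["width_mm"] - b["width_mm"]) > width_tol)

slabs = [{"id": i, "width_mm": w}
         for i, w in enumerate([1250, 900, 1230, 910, 1500])]
print(count_changeovers(slabs), "->",
      count_changeovers(sequence_slabs(slabs)))   # 4 -> 2 for this toy order book
```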
Current state:

- Roll changes: largest single operational delay bucket
- Some manual sequencing logic exists (widest first, similar chemistries together)
- Often disrupted by customer service urgency or slab availability
- Heat treat lines: start week at low temp, ramp up — but get interrupted by priority orders
- HSM cadence: weekly turn (12h), furnace turn every 3 weeks (24h), quarterly outage (36h), annual outage (10 days) — every downturn is a battleground between planned and reactive work
- Work rolls changed out on tonnage basis (Dan Hartman)

Target state:

- Optimized rolling schedule: 30-50% reduction in changeover frequency
- Automatic re-sequencing when disruptions occur
- Integration with slab availability and customer delivery dates

Value estimate: $5-15M/yr (depends on current changeover count and HSM utilization)
Confidence: Medium — need HSM-specific delay data to size precisely
Data readiness: Partial — delay reports exist, need rolling schedule data
Systems: L2 (HSM), scheduling system (unknown), delay SQL reports
Complexity: Medium (6-12 months)
Dependencies: Understanding of current scheduling tools
Open questions:

- [ ] How many roll changes per day/week?
- [ ] What is the current sequencing logic? Manual or system-assisted?
- [x] Ask Dan Hartman Wednesday AM → Met Dan. HSM cadence documented. Roll change optimization not specifically discussed as a separate initiative — subsumed under the broader scheduling story.
### CLV-04: Cobble Prediction & Prevention
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | Hot Strip Mill — rolling |
| Status | validated |
| Source | Day 1 transcripts #1, #2 + Day 3 transcripts #7, #8 |
| Field champions | Dan Hartman (HSM), John Stubna (executive), Andrew Mullen |
Problem statement:
"We just like to do a little bit of historical analysis on them and see, like, how was the mill set up? What was the temperature? What are all the inputs... predict, you know, if we think that something's too cool, maybe prompt the operator not to even process this slab?" — Cobbles destroy equipment (drive spindles shattered "multiple times"), cause major downtime, and are increasing as experienced operators retire and new ones lack the intuition to "finish a pass" or "take it out of Auto."
Proposed solution: ML model predicting cobble risk per slab based on: temperature profile, chemistry, mill setup, gap between model and actual, operator experience. Real-time alert: "high cobble risk — consider bypass."
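A minimal sketch of the per-slab risk model, assuming a labeled slab history exists once cobble events are matched to L2 data; the four features and the label rule are illustrative stand-ins, not actual L2 tags.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Illustrative features: entry temp, chemistry, model-vs-actual setup gap,
# operator tenure. Stand-in label: slabs that ran "too cool" cobbled.
X = rng.normal(size=(500, 4))
y = (X[:, 0] < -1).astype(int)

model = GradientBoostingClassifier().fit(X, y)

def cobble_risk(slab_features: list[float]) -> float:
    return model.predict_proba([slab_features])[0, 1]

risk = cobble_risk([-1.5, 0.2, 0.1, -0.3])
if risk > 0.7:
    print(f"High cobble risk ({risk:.0%}): consider bypassing this slab")
```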
Current state:

- Cobbles tracked in both operational and maintenance buckets (attribution disputed)
- Root cause analysis is person-dependent — "some people just know"
- New operators causing more cobbles than experienced ones
- Historical data exists but not systematically analyzed
- Global Gauges strip steering camera installed on 2 stands (Stubna) — not yet controlling; Mansfield fully implemented
- Context-dependent alarms (Dan Hartman): vibration depends on product/grade/speed — current alarming too simplistic. "Machine learning will say, I want to ignore that head end, right? Because it's not normal. But if I run this same product, and it alarmed once last time, and now this time it alarmed twice — there's something I want to learn from."
- Stubna confirmed: cobbles = biggest downstream yield loss

Target state:

- Real-time cobble risk score per slab
- Operator prompt: bypass slab if risk too high (avoid equipment damage)
- 20% reduction in cobble rate (leadership target)
- Context-aware alarming: multi-variate thresholds that account for product, grade, speed

Value estimate: $3-10M/yr (avoided equipment damage + production time + scrap)
Confidence: Medium — need cobble frequency data and cost-per-cobble
Data readiness: Partial — L2 data exists, need operational delay records matched to cobble events
Systems: L2 (HSM), operational delay system, historian (Pi), Global Gauges (vision)
Complexity: Medium (6-9 months to deploy, needs L2 integration)
Dependencies: CLV-01 (ops-maint integration helps with attribution data)
Open questions:

- [ ] How many cobbles per month currently?
- [ ] What is the cost per cobble (equipment damage + downtime + scrap)?
- [x] What data is captured per cobble today? → Minimal — same "fix and forget" pattern. No systematic post-mortem. Historical analysis exists but is ad hoc.
- [x] Ask Dan Hartman Wednesday AM → Confirmed. Context-dependent alarms are a key ML opportunity. Global Gauges is being deployed but not yet controlling.
### CLV-05: Caster Chemistry Transition Optimization
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | Caster — continuous casting |
| Status | identified |
| Source | Day 1 transcript #2 (Andrew + team) |
| Field champion | TBD (caster/metallurgy team) |
Problem statement:
"We've got a Prime Metals model. We don't even use. We spent millions on... It's supposed to give us a cut point... monitoring metallurgical properties, the chemistry, and the steel to give us this. Like, if we switch chemistries... this is where you want to cut to make sure you're minimizing scrap." — Chemistry transitions in continuous casting generate scrap/downgrade when the cut point between old and new chemistry is wrong. Sequencing of chemistries matters. The model they bought doesn't get used.
Proposed solution: Phase 1: Understand why Prime Metals model failed (adoption? accuracy? integration?). Phase 2: Build chemistry transition optimizer — predict optimal cut points + sequence chemistries to minimize transition waste + identify "in-between" grades that can be sold.
Current state:

- Millions spent on Prime Metals model — unused
- Chemistry transition waste is significant (company-wide problem)
- Manual cut decisions today
- Some sequencing logic (width compatibility, chemistry compatibility) but not optimized
- Note: Phil Thorman's slab cut optimization ($3M/month savings) addresses a related but different problem — that's about making short prime slabs + longer conditionable slabs vs. scrapping full-length slabs

Target state:

- Automated cut point recommendation per transition
- Optimized casting sequence to minimize transitions
- In-between material allocated to secondary grades instead of scrapped

Value estimate: $2-8M/yr (reduced scrap + better yield at caster)
Confidence: Medium — depends on volume of transitions and current scrap rate
Data readiness: Unknown — Prime Metals model infrastructure may still exist
Systems: L2 (caster), Prime Metals model (status unknown), Axiom (grades/orders)
Complexity: Medium (leverage existing model investment if possible)
Dependencies: CLV-14 (traceability — need heat→slab chemistry lineage)
Open questions:

- [ ] Why exactly is Prime Metals not used? Technical failure or adoption failure?
- [ ] How many chemistry transitions per day?
- [ ] What % of slab scrap is attributable to transition zones?
- [ ] Is Prime Metals infrastructure still operational?
### CLV-06: 90-Day Slab Remelting Reduction
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | Slab yard management / planning alignment |
| Status | identified |
| Source | Day 1 transcript #2 (team discussion) |
| Field champion | TBD (planning/slab yard) |
Problem statement: Slabs that sit for >90 days get remelted (scrapped). Symptom of misalignment between production and demand. Scrap is processed by an internal company to cut it into appropriate sizes for the BOF. "We don't have enough pain in terms of recycling it" — easy to remelt masks the cost.
Proposed solution: Analytics on slab aging: predict which slabs are at risk of expiry, recommend reallocation to alternative orders or products before the 90-day mark. Integrate with Janus (slab yard WMS).
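A minimal sketch of the aging-alert logic, using the 30/60/75-day tiers proposed in the target state below; slab records are illustrative, not actual Janus fields.

```python
from datetime import date

THRESHOLDS = (30, 60, 75)   # alert tiers ahead of the 90-day remelt point

def aging_alerts(slabs: list[dict], today: date) -> list[tuple]:
    alerts = []
    for s in slabs:
        age = (today - s["cast_date"]).days
        tier = max((t for t in THRESHOLDS if age >= t), default=None)
        if tier is not None:
            alerts.append((s["slab_id"], age, tier))   # highest tier crossed
    return sorted(alerts, key=lambda a: -a[1])         # most urgent first

slabs = [{"slab_id": "S-101", "cast_date": date(2026, 1, 5)},
         {"slab_id": "S-102", "cast_date": date(2026, 4, 1)}]
print(aging_alerts(slabs, date(2026, 4, 16)))   # [('S-101', 101, 75)]
```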
Current state:

- Remelting happens regularly (frequency TBD)
- Internal scrap processing has a cost
- Could have pivoted aging slabs to plate or other products "if we had more inventory to choose from" and better visibility

Target state:

- Slab aging dashboard with alerts at 30/60/75 days
- AI-recommended reallocation: "this slab can fulfill order X if rolled by day Y"
- Reduced remelting rate by 50%+

Value estimate: TBD — need remelting volume and cost per ton
Confidence: Medium
Data readiness: Partial — Janus tracks slab locations, Axiom tracks orders
Systems: Janus (WMS), Axiom (ERP/demand)
Complexity: Medium
Dependencies: CLV-02 (scheduling) for systemic fix
Open questions:

- [ ] How many slabs/tons remelted per month?
- [ ] What is the all-in cost of remelting (processing + energy + lost margin)?
- [ ] What is the current slab aging visibility? Can they query "all slabs >60 days"?
### CLV-07: Maintenance Co-Pilot (Technician Assist)
| Field | Value |
|---|---|
| Horizon | H1 — Bridge the Gap |
| Sub-process | Execution — field work + documentation |
| Status | validated |
| Source | Day 1 transcript #3 + Day 2 transcript #6 + Day 3 transcripts #7, #8 |
| Field champions | Paul Aaron Dash, Dan Hartman, John Stubna |
Problem statement:
"If you had something where they can talk to their phone... they're not going to enter it in a screen" (Paul Aaron Dash). "Where's my Ask Jeeves?" (Dan Hartman). Green technicians go to repairs without knowing the history. Same failure repaired a week ago, they start from scratch. Paper PM sheets returned crumpled, grease-covered, thrown away.
Proposed solution: AI-powered voice-first mobile co-pilot: technician talks through repair → AI structures into work order fields. Pulls history to accelerate diagnosis. Knowledge base grows with use. Works offline with sync.
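A minimal sketch of the voice-to-work-order flow. `transcribe` and `call_llm` are hypothetical stubs standing in for a speech-to-text service and an LLM API, and the field list is an assumption, not Tabware's actual schema. The `needs_review` flag reflects the moderation layer that Dan Hartman's trust concern (noted below) calls for.

```python
import json

WO_PROMPT = """Extract a JSON work order from this technician narration.
Fields: asset, problem, action_taken, parts_used, duration_minutes.
Narration: {text}"""

def transcribe(audio: bytes) -> str:
    # Hypothetical stub for a speech-to-text service
    return "Replaced the coupling on the 300 crane hoist, took about 40 minutes"

def call_llm(prompt: str) -> str:
    # Hypothetical stub for an LLM extraction call
    return json.dumps({"asset": "Crane 300", "problem": "worn hoist coupling",
                       "action_taken": "replaced coupling",
                       "parts_used": ["coupling"], "duration_minutes": 40})

def narration_to_work_order(audio: bytes) -> dict:
    text = transcribe(audio)                      # voice in, text out
    wo = json.loads(call_llm(WO_PROMPT.format(text=text)))
    wo["needs_review"] = True                     # moderation before commit
    return wo

print(narration_to_work_order(b"..."))
```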
Current state:

- Tablet/barcode system exists but unreliable (connectivity)
- Technicians often don't know repair history
- Repeat failures not flagged
- EPA tracking is manual and at risk of compliance gaps
- Paper PM sheets thrown away after completion — no feedback to planners (Dan Hartman)
- Corrective work orders are "two line chat in Tabware, I was down for 70 different items" — free text, inconsistent (Dan Hartman)
- After-action review incomplete; no close-the-loop process
- Area managers = planner + reliability engineer + firefighter since 2001 restructuring — no one has time to document properly
- Electronic logbook exists in pockets: electrical maintenance has used it for years, mechanical "just started doing it in certain areas" (Dan Hartman)
- Crane 300 history = paper binders: "here's the binder for 300 crane, go look through that" (Dan Hartman)
- Paul's daughter (PA) uses an AI scribe — "they just say, rather than typing anything... AI takes care of all the details"
- 1990 LISP project at Ladle Met — only user feedback was "eat shit" → UX is existential (Phil Thorman)
- Stubna: "Design a solution that circumvents the need to fill tedious forms in Tabware"
- Dan Hartman: "Half the time looking for the answer" — technicians searching multiple systems during diagnosis
- Dan re: trust issue: "I have to strike 50% of what this person puts in" → needs moderation layer

Target state:

- One-screen view: asset history + troubleshooting + compliance
- Voice-first: talk to phone, AI structures data, no forms
- Offline-capable (Wi-Fi dead zones: "metal box inside another metal box")
- Automatic flagging of repeat failures
- Moderation layer for data quality

Value estimate: $0.5-2M/yr direct + massive indirect value (data quality enabler for everything downstream)
Confidence: Medium-High — explicitly requested by 5+ stakeholders
Data readiness: Partial — Tabware has some history, HVAC group has separate tracking
Systems: Tabware, custom HVAC tablet app, WeatherGoose
Complexity: Quick Win (3-4 months for MVP on 1 asset class)
Dependencies: None significant — but must solve connectivity issue
Open questions:

- [ ] How many technicians across all maintenance groups?
- [ ] What is the current repeat failure rate?
- [x] What does the existing tablet app look like? → Electronic logbook in pockets (electrical), mechanical just starting. Tablet/barcode system exists but unreliable.
- [ ] Wi-Fi coverage map — which areas have connectivity? (Critical for offline-first design)
### CLV-08: Procurement Decision Tree / Auto-Approval
| Field | Value |
|---|---|
| Horizon | H1 — Bridge the Gap |
| Sub-process | Execution — parts procurement |
| Status | validated |
| Source | Day 1 transcripts #2, #3 + Day 2 transcript #6 + Day 3 transcript #7 |
| Field champions | Andrew Mullen, Paul Aaron Dash, Dan Hartman |
Problem statement:
"Everything above $500 has to go to the plant manager... A $300 power supply, how long do you think it would take me to order that? Two months. I can't even order it... employees paying for inventory control apps with their own money." — Anti-fraud measures (responding to real historical fraud) have created paralysis. Can't create part numbers, can't order common parts, equipment sits idle.
Proposed solution: Decision tree + pattern analysis: If this part type has never been denied, and the amount is below threshold, and the asset is critical, auto-approve. Pair with fraud pattern detection so you can loosen the process where it's safe and tighten where it's risky.
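The proposed rule expressed as straight-line code, a rules-based MVP sketch; thresholds and field names are assumptions for illustration.

```python
def auto_approve(part: dict, request: dict) -> bool:
    """Auto-approve when the pattern is historically safe."""
    return (part["denial_count"] == 0           # never denied before
            and request["amount_usd"] <= 500    # below the manual threshold
            and part["asset_criticality"] == "critical")

req = {"amount_usd": 300}
part = {"denial_count": 0, "asset_criticality": "critical"}
print(auto_approve(part, req))   # True: the $300 power supply goes through
```

The fraud-detection layer would run alongside this rule, flagging anomalous purchasing patterns for review rather than blocking routine orders.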
Current state:

- $500 approval threshold to plant manager, $10K to plant manager level
- Part number creation requires corporate (item masters centralized away from plant — possibly Chicago)
- Critical equipment down for months waiting for PO approval
- Union contractor approval: up to 5-day wait
- Sole-source parts still require competitive bidding (Paul)
- Emergency work orders eliminated (Paul)
- Months of work order backlog = "hidden time bombs" (Paul)
- "I could only order nine gallons [of coolant] to keep it under 500" — gaming the threshold (Paul)
- Purchasing "unchecks the box" on requisitions after 60 days — restarts whole process silently (Dan Hartman)
- Phantom open orders break min/max auto-reorder (Dan Hartman)
- PO change orders require full approval chain again — "incredibly dated system, 20 years old" (Dan Hartman)
- Budget forecasting = heroic Excel: "incredibly complex Excel sheet that links... imports... all the accrual, the forecast" (Dan Hartman)
- Brandon (team member) manually calling ~250 vendors for delivery status updates (Dan Hartman)
- Vendor substitutions not flagged to requester — "you just have to assume they made the right choice" (Dan Hartman)
- Material substitutions causing repeat failures (4340 vs stainless example) (Dan Hartman)
- Requisitioner can't see RFQ status, vendor responses, or bid details (Dan Hartman)
- Purchasing agents used to be on-site — "phone call or walk up to their office, done." Now decentralized.
- Contractors limited to 2-3 vendors (was 4-5), onboarding requires 10-panel drug screening

Target state:

- Risk-stratified approval: auto-approve for known parts on critical equipment
- Fraud detection in background (anomaly detection on purchasing patterns)
- Reduce approval cycle from weeks to hours for routine purchases
- Vendor delivery tracking: automated follow-up instead of 250 manual calls
- Requisitioner visibility: see status of RFQ, bids, delivery without calling purchasing

Value estimate: $1-3M/yr direct + significant de-bottleneck value (reduced downtime from parts delays)
Confidence: Medium-High — deepest pain point across every conversation
Data readiness: To build — need purchasing history and approval data from Axiom
Systems: Axiom (ERP/procurement), Tabware (asset criticality)
Complexity: Quick Win (rules-based MVP) to Medium (AI fraud detection + vendor tracking layers)
Dependencies: Corporate buy-in (this is a policy change, not just tech)
Open questions:

- [x] Is this a Cleveland-specific problem or company-wide? → Company-wide (centralized policies, corporate procurement)
- [ ] How many purchase requests per month? What % are approved?
- [ ] Can we get purchasing approval data from Axiom?
- [x] Is there appetite at corporate to change the approval policy? → Stubna: "Full support of corporate." But need Michael Mack/Newman buy-in on implementation.
### CLV-09: Spare Parts Inventory Intelligence
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | Execution — parts management |
| Status | validated |
| Source | Day 1 transcripts #2, #3 + Day 3 transcript #7 |
| Field champions | Dan Hartman, Paul Aaron Dash |
Problem statement: No systematic critical spares management. People over-order and hoard. "Order 50 of them, they'll sit there for 50 years." No one knows what's in inventory. Some facilities better than others. Employees buying their own inventory apps.
Proposed solution: Inventory cleanup + intelligent reorder: normalize part numbers, establish min/max for critical spares, auto-reorder when below threshold. Cross-site sharing for expensive items.
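A minimal sketch of the reorder check, including a guard for the phantom-open-order failure mode noted below; field names are assumed for illustration.

```python
def needs_reorder(item: dict, open_order_is_live: bool) -> bool:
    below_min = item["on_hand"] <= item["min_level"]
    # A stuck/cancelled requisition must not suppress the reorder signal
    return below_min and not open_order_is_live

item = {"part_no": "CTR-4471", "on_hand": 1, "min_level": 2, "max_level": 6}
print(needs_reorder(item, open_order_is_live=False))   # True: reorder now
```

The hard part is upstream of this logic: verifying that "open order" actually means a live order, which is exactly where the phantom orders break today's min/max.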
Current state:

- Chaotic — hoarding, unknown inventory, no systematic reorder points
- Phantom open orders break min/max auto-reorder — system thinks order is placed, but it's cancelled or stuck (Dan Hartman)
- Material substitutions by purchasing causing repeat equipment failures (Dan Hartman: 4340 vs stainless example — wrong material spec causes premature failure)
- Vendor substitutions not flagged to requester
- People buying inventory management apps with personal money

Target state:

- Clean parts master, automated reorder, cross-site visibility for critical spares
- Substitution controls: flag material changes, require engineering sign-off for spec changes
- Phantom order cleanup: identify and clear stuck/cancelled requisitions

Value estimate: $1-4M/yr (reduced inventory carrying + avoided downtime from stockouts + reduced failure-from-substitution)
Confidence: Low-Medium — scope is large, need inventory data
Complexity: Medium (data cleanup is the hard part)
Dependencies: CLV-08 (procurement reform)
Open questions:

- [ ] How many unique part numbers in Tabware currently?
- [ ] What is the annual spare parts spend?
- [ ] Which critical spares have the longest lead times?
- [ ] How many phantom open orders are currently in the system?
### CLV-10: Intra-Plant Coil Movement Optimization
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | Finishing → Shipping |
| Status | identified |
| Source | Day 1 transcripts #3, #4 — logistics discussion |
| Field champion | Facilities division manager, shipping team |
Problem statement:
"We're moving coils way too much. We're moving coils from 116 door to 45 door, then a week later to 30... triple time." — Coils are touched 3-4 times before shipping. Target should be 1-2. Rail/truck mode switching adds moves. Shipping volume increased by ~600 coils recently.
Proposed solution: Analyze coil movement data: movements per coil vs. target (1-2 max). Warehouse-style optimization: place high-velocity products closest to shipping points. Optimize storage allocation by customer/product type.
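A minimal sketch of the movements-per-coil analysis on the existing SQL tracking data; column names are assumptions about that schema.

```python
import pandas as pd

moves = pd.DataFrame({
    "coil_id":  ["C1", "C1", "C1", "C2"],
    "from_loc": ["116", "45", "30", "116"],
    "to_loc":   ["45", "30", "dock", "dock"],
})

per_coil = moves.groupby("coil_id").size().rename("moves")
print(per_coil[per_coil > 2])   # coils touched more than the 1-2 target
print(per_coil.mean())          # plant-wide average moves per coil
```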
Current state:

- Coils moved 3-4x on average (estimate)
- Tracking exists in SQL database (from location, to location)
- No optimization of storage placement
- Increased shipping volume straining current process

Target state:

- Average coil movements reduced to 1-2 before shipping
- Storage locations optimized by shipping mode (rail vs. truck) and frequency

Value estimate: TBD — need movement data to calculate crane hours saved + throughput impact
Confidence: Low — need data
Data readiness: Partial — SQL database has coil movements
Systems: Custom SQL tracking, shipping systems
Complexity: Medium
Open questions:

- [ ] Can we get a sample export of coil movement data?
- [ ] What is the average movements-per-coil today?
- [ ] What are crane utilization rates? Is crane capacity a bottleneck?
### CLV-11: Operator Decision Support (BF/HSM)
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | BF + HSM operations |
| Status | validated |
| Source | Day 1 transcripts #1, #2 + Day 3 transcripts #7, #8 |
| Field champions | Dan Hartman (HSM), Chad Helms (IP), John Stubna |
Problem statement:
"We've lost a lot of our experienced people... now we got new people that have to learn a lot of lessons the hard way... Some people would know they would just know the right call to make." — Knowledge loss from retirements creating real operational risk. New operators override safety checks, cause cobbles, destroy equipment.
Proposed solution: AI-assisted operator guidance: based on current conditions (temperature, chemistry, mill setup), recommend actions. Prevent dangerous overrides. Capture expert knowledge into the system. Context-dependent alarm management.
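A minimal sketch of the context-aware alarming piece: a per-context baseline (mean plus three standard deviations) replaces the single global threshold, so a reading that is normal for one product can still alarm for another. Data layout is illustrative.

```python
import pandas as pd

hist = pd.DataFrame({
    "product":   ["A", "A", "A", "B", "B", "B"],
    "vibration": [1.0, 1.1, 0.9, 3.0, 3.2, 2.9],
})

# Per-context baseline instead of one fixed limit for all products
stats = hist.groupby("product")["vibration"].agg(["mean", "std"])
stats["limit"] = stats["mean"] + 3 * stats["std"].fillna(0)

def is_alarm(product: str, reading: float) -> bool:
    return reading > stats.loc[product, "limit"]

print(is_alarm("A", 3.0))   # True: abnormal for product A
print(is_alarm("B", 3.0))   # False: normal behavior for product B
```

The same grouping extends to grade and speed bands; the field evidence below is why a single threshold produces alarm fatigue.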
Current state:

- Expert knowledge = tribal, walking out the door
- Some safety interlocks exist but can be overridden
- No systematic operator guidance or decision support
- Dan Hartman: "Give me those answers and I still might have a decision matrix to go down, but at least I can focus on the critical part of that decision rather than finding and tracking all those variables based on my experience"
- Context-dependent alarms: vibration depends on product/grade/speed — simple threshold alarming creates noise and alarm fatigue (Dan Hartman)
- Descale water system: 5 pumps, 12 headers, weekly test → 18 variables manually charted — perfect automation candidate
- Chad Helms (ex-military aviation): understands systematic approaches to equipment reliability

Target state:

- Real-time recommendations based on current conditions
- Override prevention for high-risk scenarios
- Expertise captured as model parameters, not just in people's heads
- Context-aware alarming: ML-driven thresholds that adjust for product, grade, speed, and historical patterns

Value estimate: $1-5M/yr (avoided equipment damage + better yield + reduced training time)
Confidence: Medium
Complexity: Medium to Strategic (depends on scope)
Dependencies: CLV-04 (cobble prediction is a subset of this), data from Pi historian + L2
### CLV-12: PdM Platform (Multi-Asset — Bag House, Scrubbing, Cranes)
| Field | Value |
|---|---|
| Horizon | H1→H2 (POV in H1, expansion in H2) |
| Sub-process | Detection |
| Status | validated |
| Source | Day 1 transcript #3 + Day 2 transcript #6 + Day 3 transcripts #7, #8 + Day 5 transcript #10 |
| Field champions | Paul Aaron Dash, Phil Thorman, Dan Hartman, John Messi, Brian Thompson, Jamie Betts |
Problem statement: Vibration and condition monitoring is fragmented and incomplete. AssetWatch deployed at powerhouses only. Vibration sensors on 1SP cranes going to cloud via third party (Starnet?) — Phil didn't even know they existed. Viz does periodic route-based vibration readings. Multiple high-value assets generate data that nobody looks at.
PdM PoV target update (Day 5 — Friday 1SP workshop):
The 1SP leadership team (John Messi, Brian Thompson, Jamie Betts, Evan, Richie) pushed back on Crane 300 as the sole PdM target. While Crane 300 is a single point of failure, it has an "easy life" (low utilization — just moves ladle up/down), doesn't generate much data, and is not a major cause of hard delays. Previous vibration sensor attempts (Asset Watch on crane 2A) failed because time-based readings were useless when the crane was idle.
Revised PdM PoV targets (ranked):
- BOF Bag House ID Fans (PRIMARY) — Team's own top pick. 4 fans, 6 cleaning modules, rich sensor data (differential pressures, damper positions, vibration). Environmental compliance tied in. At 2 fans = rate delayed (-5-6 min/heat), lose one more = shop down. Data-rich but attention-poor. See CLV-22.
- BOF Scrubbing System (SECONDARY) — "Generates a shitload of data." Directly affects vessel availability (losing one furnace = -5 heats/day). Degrades predictably over 3-week repair turn cycle (buildup, drains fail, water piles in fan housing → vibration spikes). Natural degradation curve ideal for PdM. See CLV-23.
- Crane 300 (TERTIARY) — Retained for its complexity value. Near-zero instrumentation, paper-only history, DC crane 50+ years old. If the methodology can extract signal here, it validates the approach for any asset. But NOT the primary proof point.
Proposed solution: Multi-asset PdM approach: start with data-rich assets (bag house, scrubbing system) to prove value fast, extend to data-sparse assets (cranes) as a stretch demonstration. Consolidate data from fragmented sources (Pi, AssetWatch, cloud vibration, Viz, manual readings) into unified view.
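A minimal sketch of the consolidation step: normalize each source to a common schema and tag provenance so per-source gaps stay visible. Source names come from the field notes; the schema is an assumption.

```python
import pandas as pd

# Tiny stand-ins for exports from two of the fragmented sources
pi = pd.DataFrame({"asset": ["bag_house_fan_1"], "metric": ["diff_pressure"],
                   "value": [4.2], "ts": ["2026-02-27T08:00"]})
vib = pd.DataFrame({"asset": ["crane_300_hoist"], "metric": ["vibration_rms"],
                    "value": [0.31], "ts": ["2026-02-27T08:00"]})

for df, src in ((pi, "Pi"), (vib, "cloud-vibration")):
    df["source"] = src   # provenance tag per reading

unified = pd.concat([pi, vib], ignore_index=True)
print(unified)           # one condition-monitoring view across sources
```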
Current state:

- Powerhouses: AssetWatch (working well)
- 1SP cranes: cloud vibration sensors via third party — data siloed, plant IT unaware
- Route monitoring: Viz (periodic readings, vendor-managed)
- BOF bag house: 4 ID fans, 6 modules — lots of data, nobody watching
- BOF scrubbing system: "shitload of data, bit load of attention"
- Crane 300: paper binders only, no instrumentation
- VIS system available — triggers on current, not time-based (better than AssetWatch)
- Quarterly IR + vibration readings on cranes
- DC cranes 50+ years old, dirty environment (Paul)
- Gearbox failures every few months — unique parts, sent out for rebuild (Paul)

Validated PdM expansion targets (beyond PoV):

- Charging crane / teeming crane (more strenuous duty than 300 — when one goes down, the other becomes a SPOF)
- Hot metal cranes (372, 474)
- HSM work rolls and backup roll assemblies (tonnage-based change-out)
- Line shaft tables (700 under contract, failure-driven redesign cycle)
- Powerhouse equipment (Wonderware data already exists)
- Descale pumps (weekly testing already happening — ripe for automation)

Value estimate: $3-12M/yr (PdM POV → full expansion)
Confidence: Medium — proven technology at powerhouses, scaling question. Bag house and scrubbing system have richer data = higher confidence for POV success.
Complexity: Quick Win (technology exists) → Expand (needs data integration from fragmented sources)
Dependencies: PdM POV (separate project/SOW — implementation vehicle for first assets)
### CLV-13: Environmental Compliance Automation
| Field | Value |
|---|---|
| Horizon | H1 — Bridge the Gap |
| Sub-process | Regulatory compliance |
| Status | identified |
| Source | Day 1 transcripts #2, #3 |
| Field champion | Andrew Mullen (mentioned it affects bonus) |
Problem statement:
"Annual reports where we look at every thing with the safety data sheet... everything has to be tied with storage location, temperature, quantity used... real manual, tedious stuff." — Environmental compliance is manual, tedious, and tied to employee bonuses. Exceedances at Burns Harbor already this year.
Proposed solution: Automated SDS tracking: tie purchases to storage locations, auto-populate annual reports. Refrigerant tracking automation for EPA compliance.
Current state: Manual pencil-and-paper tracking, risk of compliance gaps
Target state: Automated tracking and reporting
Value estimate: $0.5-1M/yr (labor savings + compliance risk mitigation)
Confidence: Low — need to scope
Complexity: Quick Win (rules-based automation)
Dependencies: None
### CLV-14: Through-Process Traceability (Heat→Coil)
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | Foundation — cross-cutting |
| Status | identified |
| Source | Day 1 transcript #2 + seed ideas |
| Field champion | TBD (quality/metallurgy) |
Problem statement: No end-to-end genealogy linking heat → slab → coil → finished product with process parameters at each step. Without this, cross-process root cause analysis is impossible. Customer claims take "days to weeks" to trace.
Proposed solution: Build automated heat→coil genealogy linking L2 data across all process steps.
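A minimal sketch of the genealogy join, assuming each stage's data carries the upstream identifier, which is exactly what the open questions below need to confirm. All tables and values are illustrative.

```python
import pandas as pd

heats = pd.DataFrame({"heat_id": ["H1"], "carbon_pct": [0.05]})
slabs = pd.DataFrame({"slab_id": ["S1", "S2"], "heat_id": ["H1", "H1"]})
coils = pd.DataFrame({"coil_id": ["C1"], "slab_id": ["S1"], "yield_pct": [96.5]})

# coil → slab → heat, carrying process parameters along the chain
genealogy = coils.merge(slabs, on="slab_id").merge(heats, on="heat_id")
print(genealogy)
```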
Current state: Partial tracing by department, manual effort to cross-reference
Target state: Automated genealogy, real-time yield waterfall
Value estimate: Enabler (enables $13-27M/yr in yield improvement)
Confidence: Medium
Complexity: Strategic (foundation project — 6-12 months)
Dependencies: Data architecture, L2 access at all stages
Open questions:

- [ ] What identifiers exist at each step? (Heat #, slab #, coil #)
- [ ] Are they currently linked in any system?
### CLV-15: Dynamic Pricing by Capacity Consumption
| Field | Value |
|---|---|
| Horizon | H3 — Predict & Optimize |
| Sub-process | Order acceptance / commercial |
| Status | identified |
| Source | Day 1 transcript #2 — Andrew + team |
| Field champion | TBD (commercial/planning) |
Problem statement:
"We don't have a dynamic pricing model that captures capacity consumption. It's tonnage-focused... we'll end up taking heavy line pipe orders which... we'll lose money on, and it'll destroy our equipment." — Revenue and cost decisions are made on $/ton without factoring in the actual processing time, equipment wear, and capacity consumed. Some products consume 2x the capacity for the same tonnage.
Proposed solution: Capacity-adjusted profitability model: cost per ton by product type factoring in processing time, changeover cost, equipment wear, and maintenance impact. Feed into order acceptance decisions.
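The core idea as arithmetic: rank orders by margin per bottleneck-hour rather than margin per ton. Numbers are invented for illustration.

```python
def margin_per_hour(price_per_ton: float, cost_per_ton: float,
                    tons: float, hours_consumed: float) -> float:
    """Contribution margin per hour of bottleneck capacity consumed."""
    return (price_per_ton - cost_per_ton) * tons / hours_consumed

sheet = margin_per_hour(900, 700, 1000, 10)       # light product, rolls fast
line_pipe = margin_per_hour(1100, 850, 1000, 25)  # heavy, slow, wears equipment
print(sheet, line_pipe)   # 20000.0 vs 10000.0; the $/ton view hides this gap
```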
Current state: Tonnage-focused pricing and planning
Target state: Dynamic profitability scoring per order
Value estimate: $5-20M/yr (avoided unprofitable orders + better capacity utilization)
Confidence: Low — high impact but touches commercial policy
Complexity: Strategic
Dependencies: CLV-02 (scheduling — needs same capacity data)
### CLV-16: Rail Car Inventory Visibility
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | Outbound shipping |
| Status | identified |
| Source | Day 1 transcript #4 |
| Field champion | Mark Kovach (finishing), Mike Leon (corporate shipping) |
Problem statement: Manual PDFs and Excel files exchanged with Class I railroads (NS, CSX). No real-time visibility into rail car inventory or availability.
Current state: Manual, adversarial relationship with railroads
Target state: Integrated rail car tracking, better planning of rail vs. truck mode
Value estimate: TBD
Confidence: Low — depends on railroad cooperation
Complexity: Medium
Dependencies: External data access (NS, CSX APIs or data feeds)
### CLV-17: BOF Endpoint Prediction
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | BOF — steelmaking |
| Status | seed |
| Source | Pre-visit seed ideas |
| Field champion | TBD |
Reduce reblows through ML prediction of carbon/temperature endpoint. Not specifically discussed with Cleveland stakeholders during Days 1-3. May surface at other sites.
Value estimate: $0.5-2M/yr
Confidence: Medium (proven at other steelmakers)
Complexity: Quick Win (if data is accessible)
### CLV-18: Caster Breakout Prediction (ML)
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | Caster — safety/quality |
| Status | seed |
| Source | Pre-visit seed ideas |
| Field champion | TBD |
Replace/augment Breakout Detection System with ML. Not specifically discussed during Days 1-3. May surface at other sites.
Value estimate: $1-3M/yr
Confidence: Medium
Complexity: Quick Win
### CLV-19: Surface Defect Detection (CNN on SIS)
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | HSM — quality inspection |
| Status | seed |
| Source | Pre-visit seed ideas |
| Field champion | Stephen Palmer (quality/vision AI interest) |
Reduce false positives in Surface Inspection System with CNN. Not specifically discussed during Days 1-3, but related: Stubna mentioned Global Gauges strip steering camera on 2 HSM stands (vision-based, not yet controlling). Mansfield plant has full vision-based steering. Shows organizational willingness to invest in vision AI.
Value estimate: $2-5M/yr
Confidence: Medium
Complexity: Medium
### CLV-20: BF Thermal State Prediction
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | Blast Furnace — ironmaking |
| Status | seed |
| Source | Pre-visit seed ideas |
| Field champion | TBD |
Predict Si/temperature 30-60 minutes ahead for BF. Not specifically discussed during Days 1-3. Chad Helms (Iron Producing) was present at Day 3 session — could be a future validation candidate. BF C5/C6 have Emerson instrumentation from recent rebuild.
Value estimate: $1-3M/yr
Confidence: Medium
Complexity: Medium
### CLV-21: Utility Management System (NEW)
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | Cross-cutting — energy/utility optimization |
| Status | validated |
| Source | Day 2 transcript #6 (Paul Aaron Dash deep session) + Day 3 transcript #7 (Dan Hartman) |
| Field champions | Paul Aaron Dash (20-year champion), Dan Hartman |
Problem statement: No utility management system at Cleveland Works. Cannot track water, gas, O2, N2, or compressed air consumption by area. When nitrogen pressure drops affect 1SP operations, they can't diagnose the source. Paul Aaron Dash has been working on this for 20+ years with budget constraints. He is the last utility engineer — "once I'm gone, it goes away."
Proposed solution: AI layer on metering infrastructure for energy/utility optimization. Anomaly detection on compressed air, water, gas, nitrogen. Tie consumption to production areas and process conditions. Build on water meters being installed (Emerson) as foundation.
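A minimal sketch of the anomaly layer on a single meter: a rolling baseline plus a sustained-deviation flag. Meter name and readings are illustrative.

```python
import pandas as pd

flow = pd.Series([100, 102, 98, 101, 99, 100, 135, 138],
                 name="compressed_air_scfm")

baseline = flow.rolling(5, min_periods=3).median()
anomaly = flow > 1.2 * baseline   # sustained 20% jump suggests a possible leak
print(flow[anomaly])              # the 135/138 readings get flagged
```

The same pattern, parameterized per area once the Emerson meters land, is what turns the leak survey from a one-off study into continuous monitoring.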
Current state:

- No utility management system — can't track consumption by area
- Compressed air: $7M/year savings opportunity identified; a three-figure number of air leaks found in the survey
- Water meters being installed (Emerson) — will be centralized soon
- Nitrogen: pressure drops affect 1SP operations, no diagnostic capability
- Paul has been building toward this for 20+ years, blocked by budget
- Paul is the last utility engineer — institutional knowledge at risk
- Descale water system at HSM: 5 pumps, 12 headers, weekly test → 18 variables manually charted (Dan Hartman) — perfect automation candidate
- Wonderware historian already captures some powerhouse/water treatment data

Target state:

- Real-time utility consumption monitoring by area
- Anomaly detection: leaks, pressure drops, abnormal consumption patterns
- Optimization: load balancing, demand management, leak prioritization
- Automated reporting replacing manual charting

Value estimate: $7M+ on compressed air alone. Additional water (~19% consumption reduction projected), gas, and nitrogen savings. Total likely $10-15M/year.
Confidence: Medium — compressed air figure backed by engineering study, other utilities need scoping
Data readiness: Partial — water meters being installed, compressed air survey done, Wonderware has some data
Systems: Wonderware (powerhouses, water treatment), Emerson (new water meters), manual charting (descale system)
Complexity: Medium (6-12 months — metering infrastructure partially in place, AI layer on top)
Dependencies: Metering infrastructure completion (water meters in progress)
Open questions:

- [ ] What metering infrastructure exists today vs. what needs to be installed?
- [ ] Can Wonderware data be queried programmatically?
- [ ] What is the timeline for Emerson water meter deployment?
- [ ] Is compressed air leak remediation underway, or just identified?
### CLV-22: BOF Bag House Predictive Monitoring (NEW)
| Field | Value |
|---|---|
| Horizon | H1 — Bridge the Gap |
| Sub-process | 1SP — environmental control / detection |
| Status | validated |
| Source | Day 5 transcript #10 (Friday 1SP workshop) |
| Field champions | Brian Thompson (1SP ops), John Messi (primary ops), Jamie Betts (maint) |
Problem statement:
"That area doesn't get the attention it deserves... It just works until it doesn't. And then, we all freak out and fix it." — The BOF bag house has 4 ID fans and 6 cleaning modules controlling environmental compliance (differential pressures, damper positions). When fan count drops to 2, 1SP is rate-delayed (lose 5-6 min/heat). One more = shop down. The system generates abundant data that nobody monitors proactively.
Proposed solution: ML anomaly detection on bag house fan and module data. Predict fan degradation before failure. Optimize damper positions and module cleaning cycles. Link to environmental compliance reporting.
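A minimal sketch of the multivariate detection, assuming historized per-fan tags (differential pressure, damper position) can be pulled from Pi; the distributions and the drifting reading are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Illustrative "healthy fan" history: [diff pressure, damper position %]
normal = rng.normal([4.0, 55.0], [0.2, 3.0], size=(500, 2))
model = IsolationForest(random_state=0).fit(normal)

reading = [[5.2, 70.0]]   # drifting fan, well outside the healthy envelope
if model.predict(reading)[0] == -1:
    print("Fan anomaly: inspect before it becomes the third fan down")
```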
Current state:

- 4 ID fans with dampers controlling differential pressures
- 6 cleaning modules with valves, air pulses, material drop-off
- Dirty gas pressure, clean gas pressure, differential pressure per module + overall
- Minimum 2 fans = rate delayed. 3 fans = worry. 4 fans = comfortable.
- Area is remote — "people don't like to go there"
- Environmental standards require maintaining differential pressure
- "It works until it doesn't" — reactive only

Target state:

- Continuous health monitoring of all 4 fans + 6 modules
- Anomaly detection: predict fan degradation before it triggers an alarm
- Optimize: regulate fan speeds proactively ("lower this one, raise that one")
- Tie into environmental compliance system

Value estimate: $2-5M/yr (avoided rate delays + reduced fan failure + environmental compliance)
Confidence: Medium-High — data-rich system, clear degradation patterns, strong champion support
Data readiness: High — system generates lots of data (pressures, damper positions, vibration). Connected to Pi historian.
Systems: Pi historian, environmental systems
Complexity: Quick Win (data exists, anomaly detection straightforward)
Dependencies: None — standalone system with rich data
### CLV-23: BOF Scrubbing System Predictive Monitoring (NEW)
| Field | Value |
|---|---|
| Horizon | H1→H2 |
| Sub-process | 1SP — BOF vessel availability |
| Status | validated |
| Source | Day 5 transcript #10 (Friday 1SP workshop) |
| Field champions | John Messi (primary ops), Brian Thompson (1SP ops) |
Problem statement:
"The scrubbing system... generates a shitload of data. And it gets a bit load of attention... that not only affects us directly operationally because a vessel is a goal for us." — The BOF scrubbing system cleans off-gas. It degrades predictably over the 3-week repair turn cycle as slurry builds up, drains fail, and water accumulates in fan housings. When vibrations spike on the IV fans, the vessel becomes unavailable. Losing one furnace on a planned basis = -5 heats/day.
Proposed solution: Model the 3-week degradation curve from clean (post repair turn) to critical. Predict when intervention is needed. Monitor buildup, drain performance, fan vibration in context of where you are in the repair cycle. Trigger early intervention before vessel loss.
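A minimal sketch of the degradation model: fit vibration against days since the last repair turn and project when the alarm limit will be crossed. The data and the limit are illustrative, and a linear fit is only a starting point for the real curve.

```python
import numpy as np

days = np.array([1, 4, 7, 10, 13, 16])                 # days since repair turn
vib = np.array([0.10, 0.14, 0.19, 0.25, 0.32, 0.41])   # buildup-driven rise

slope, intercept = np.polyfit(days, vib, 1)   # linear fit as a first model
ALARM = 0.50                                  # illustrative vibration limit
days_to_alarm = (ALARM - intercept) / slope
print(f"Projected alarm at day {days_to_alarm:.1f} of the repair-turn cycle")
```

With the 3-week cycle as context, a projection landing inside the cycle becomes the "intervene before next shift" alert described in the target state.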
Current state:

- System starts clean after repair turn, degrades over 3 weeks
- Build-up accumulates: drains don't work, water piles in fan housing
- Fan vibration spikes = immediate freak-out, vessel unavailable
- System generates lots of data but nobody tracks the degradation curve
- Losing 1 furnace on a planned basis = losing 5 heats/day (massive impact)
- Two-furnace availability = key 1SP metric

Target state:

- Degradation curve modeled across repair turn cycle
- Predictive alerts: "buildup reaching threshold, intervene before next shift"
- Vessel availability maximized through proactive intervention
- Integration with repair turn planning

Value estimate: $2-5M/yr (reduced unplanned vessel unavailability + fewer emergency interventions)
Confidence: Medium — predictable degradation pattern is ideal for modeling, but system complexity is high
Data readiness: High — "shitload of data" per John Messi
Systems: Pi historian, environmental systems
Complexity: Quick Win (initial monitoring) → Expand (full degradation modeling)
Dependencies: None — standalone system
### CLV-24: Caster Segment Lifecycle Tracking (NEW)
| Field | Value |
|---|---|
| Horizon | H1 — Bridge the Gap |
| Sub-process | Caster — asset management |
| Status | validated |
| Source | Day 5 transcript #10 (Friday 1SP workshop) |
| Field champions | Evan (mechanical supervisor), Brian Thompson |
Problem statement: Caster segment tracking is maintained by a single contractor (Chris Callahan) on an Excel spreadsheet with color-coded status (green=good spare, yellow=emergency, red=not usable). No link to inventory, procurement, or campaign life tracking. Springs breaking with no campaign life data. Emprotech (sole refurbishment vendor) work prioritized biweekly by Evan in informal meetings.
Proposed solution: Digitize segment tracking into a system linked to campaign tonnage, spare inventory, and procurement. Track component life (springs, rolls, bearings, gearboxes) by tonnage and condition. Auto-flag when spares are running low or approaching end of life.
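A minimal sketch of the digitized record replacing the color-coded spreadsheet; the fields and the 80% end-of-life threshold are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    segment_id: str
    status: str               # "green" / "yellow" / "red", as on the Excel sheet
    campaign_tons: float      # tonnage since last refurbishment
    expected_life_tons: float

    def flags(self) -> list[str]:
        out = []
        if self.status != "green":
            out.append("not a usable spare")
        if self.campaign_tons > 0.8 * self.expected_life_tons:  # assumed cutoff
            out.append("approaching end of campaign life")
        return out

seg = Segment("SEG-12", "green", 410_000, 480_000)
print(seg.flags())   # ['approaching end of campaign life']
```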
Current state:

- Excel sheet maintained by one contractor (Chris Callahan)
- Color-coded: green (good), yellow (emergency), red (not usable)
- Emprotech = sole refurbishment vendor
- Evan meets Emprotech biweekly to prioritize
- Spring campaign life never tracked — now trying different designs from zero baseline
- Operational abuse (steel extraction after stops) reduces segment life unpredictably
- No link to inventory or procurement
- QC5 segments handled differently

Target state:

- Digital segment tracking linked to tonnage/campaign data
- Auto-alerts on spare depletion and end-of-life components
- Integration with procurement for proactive reordering
- Historical trending to predict segment replacement timing

Value estimate: $1-3M/yr (avoided emergency segment changes + reduced production loss from segment failures)
Confidence: Medium
Data readiness: Low — Excel is the only structured source, much is tribal
Systems: Excel (current), Tabware (target integration), custom tracking system (process automation)
Complexity: Quick Win (digitize existing knowledge)
Dependencies: CLV-09 (spare parts intelligence), CLV-25 (critical spares)
### CLV-25: Critical Spares Identification & Digitization (NEW)
| Field | Value |
|---|---|
| Horizon | H1 — Bridge the Gap |
| Sub-process | Execution — parts management |
| Status | validated |
| Source | Day 5 transcript #10 (Friday 1SP workshop) |
| Field champions | Brian Thompson, Evan, Richie |
Problem statement: Brian Thompson started an effort in January 2025 to identify critical spares in Tabware. Estimated ~500 parts. Criteria: loss of a shift or more of production, long lead time, expensive. Currently, critical spares exist only in people's heads. Example: Richie has crane contactors on a department shelf — nobody else knows. No tracking after parts arrive at door 40.
Proposed solution: Accelerate the critical spares identification with AI-assisted item master population. Cross-reference Tabware hierarchy with failure history to identify most impactful gaps. Build visibility layer showing what's on hand, where it is, and who has it.
Current state:

- Effort started January 2025 — still mapping what exists vs. doesn't
- ~500 critical parts estimated
- Many parts exist on department shelves with no system visibility
- Item masters need to be created/populated for untracked parts
- No tracking after delivery to door 40
- George (engineer) evaluating criticality based on production impact + lead time

Target state:

- All ~500 critical spares identified and in Tabware with min/max levels
- Real-time visibility: what's on hand, where, who ordered
- Auto-reorder triggers for critical items
- Cross-department part sharing visibility

Value estimate: $1-3M/yr (avoided downtime from stockouts)
Confidence: Medium-High — effort already in progress, AI can accelerate
Data readiness: Partial — Tabware hierarchy exists, item masters need population
Systems: Tabware (CMMS), Axiom (ERP/procurement)
Complexity: Quick Win (builds on existing effort)
Dependencies: CLV-08 (procurement reform), CLV-09 (spare parts intelligence)
### CLV-26: Raw Materials Cost Optimization (NEW)
| Field | Value |
|---|---|
| Horizon | H2 — Build the Foundation |
| Sub-process | BF/BOF — raw materials |
| Status | identified |
| Source | Day 5 transcript #10 (Friday 1SP workshop — John Messi) |
| Field champion | John Messi (primary operations) |
Problem statement:
"From a practical perspective, it also makes sense to focus on raw materials... that's where all the money is... we're a raw materials company." — Burden/blast furnace charges, alloy mixes, and raw materials consumed in the melt shop represent the largest cost component. Recent cold snap caused coal emergencies. Barge shipping issues in summer. Supply chain risk is real and growing.
Proposed solution: Optimize raw material mix (burden, alloys) for cost while maintaining quality targets. Build supply chain risk models for weather, logistics, and market disruptions.
Current state:

- Raw materials = largest cost center
- Recent disruptions: cold snap → coal emergency, summer → barge shipping blocked
- No systematic optimization of burden/alloy mix
- Supply chain risk management is reactive

Target state:

- Optimized raw material recipes balancing cost and quality
- Supply chain risk models with early warning
- Dynamic re-planning when disruptions occur

Value estimate: TBD — needs raw materials spend data
Confidence: Low — early-stage, needs scoping
Complexity: Strategic (cross-functional, touches procurement and production)
Dependencies: CLV-02 (scheduling), data access to burden/alloy records
### CLV-27: Enterprise Data Platform (Cloud Strategy) (NEW)
| Field | Value |
|---|---|
| Horizon | H2→H3 |
| Sub-process | Foundation — cross-cutting |
| Status | identified |
| Source | Day 4 transcript #9 (Databricks meeting) |
| Field champion | Andrew Mullen (AI/Innovation coordinator) |
Problem statement: Cleveland-Cliffs data is siloed across facilities, departments, and systems — legacy of multiple acquisitions (Republic Steel, LTV, AK Steel, Inland Steel, Bethlehem Steel). Finance, planning, and production operate with separate metrics and no cross-domain visibility. CEO wants enterprise-wide "start to finish" data platform. Cloud resistance from cybersecurity is decreasing but still present.
Proposed solution: Enterprise data platform (lakehouse architecture) unifying operational, financial, and planning data across sites. Databricks presented as one candidate platform. Federation approach allows querying existing systems without full migration.
Current state:

- Data everywhere: by facility, by department, by system
- Homegrown systems don't communicate
- Finance doesn't inform production constraints
- Cloud resistance from IT/cybersecurity (changing)
- Databricks, Snowflake, Microsoft Fabric all under evaluation
- CEO wants enterprise-wide solution

Target state:

- Unified data platform with cross-domain visibility
- Natural language querying of enterprise data
- Single source of truth for KPIs and decision-making

Value estimate: Enabler (unlocks $50-100M+ in optimization across all projects)
Confidence: Low — early exploratory, significant organizational change required
Complexity: Strategic (multi-year, enterprise-wide)
Dependencies: All other projects feed into this progressively
Note: Aligns with our thesis that data centralization is the emergent outcome of doing the work, not a separate initiative. Databricks meeting validates the appetite but we recommend the bottom-up approach.
Daily Update Log¶
Day 1 — Feb 24 (Monday)¶
- 20 initiatives captured (16 identified from transcripts, 4 carried from seed ideas)
- Strongest signal: CLV-01 (ops-maint integration) as immediate quick win
- Biggest opportunity: CLV-02 (scheduling/S&IOP) — needs decomposition
- Surprise finds: procurement paralysis (CLV-08), coil over-movement (CLV-10)
- Key meetings scheduled: Dan Hartman (HSM) + BF team + John Stubna (GM) Wednesday
- To validate in C1 workshop (Tue AM): CLV-02, CLV-03, CLV-04, CLV-05, CLV-11, CLV-17, CLV-18, CLV-20
- To validate in C2 workshop (Tue PM): CLV-01, CLV-07, CLV-08, CLV-09, CLV-12
- To validate in C3 workshop (Wed PM): CLV-05, CLV-06, CLV-14, CLV-19
Day 2 — Feb 25 (Tuesday)¶
- 2 transcripts analyzed (#5 plant tour, #6 deep session with Chuck/Paul/Phil/Andrew)
- Major strategic pivot: Original 3-cluster model (C1/C2/C3) replaced with progressive horizon architecture (H1/H2/H3). Maintenance-operations information flow is the flagship. Data centralization is the emergent outcome of doing the work, not a separate initiative.
- Transcript #6 = goldmine session. Key findings:
- 70/30 reactive/planned ratio confirmed (Paul Aaron Dash)
- Zero close-the-loop process: "fix it, forget it, repeat 6 months later"
- Voice capture explicitly requested by Paul (PA daughter AI scribe analogy); a minimal capture sketch follows this list
- Procurement even worse than Day 1 indicated: sole-source bidding, gaming $500 threshold, eliminated emergency WOs
- Utility management gap: $7M compressed air, no monitoring by area → new CLV-21 created
- Vibration sensors on 1SP cranes → Phil didn't know they existed (data silo exemplar)
- Tilden = gold standard for maintenance (reactive/planned separation, reliability engineers, job kitting)
- 1990 LISP cautionary tale: UX is existential for any AI solution
- Initiatives validated: CLV-01, CLV-07, CLV-08, CLV-09, CLV-12
- New initiative: CLV-21 (Utility Management System)
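Given the voice-capture asks (Paul here; John Messi and Brian again on Day 5), a minimal sketch of the capture path, assuming the open-source whisper package for speech-to-text; the CMMS-posting step is a stand-in, since no Tabware API was discussed.

```python
# Voice memo -> transcribed work-order note (hedged sketch).
# Assumes the `openai-whisper` package; the CMMS call is a placeholder.
import whisper

model = whisper.load_model("base")  # small model, CPU-friendly

def post_wo_comment(wo_id: str, text: str) -> None:
    # Stand-in for a real Tabware integration (not specified in field notes).
    print(f"[{wo_id}] {text}")

def capture_note(audio_path: str, wo_id: str) -> str:
    """Transcribe a technician's voice memo and attach it to a work order."""
    text = model.transcribe(audio_path)["text"].strip()
    post_wo_comment(wo_id, text)
    return text

# capture_note("wo_4711_memo.wav", "WO-4711")
```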
Day 3 — Feb 26 (Wednesday)¶
- 2 transcripts analyzed (#7 Dan Hartman/Chad Helms HSM session, #8 John Stubna GM meeting)
- Transcript #7 = second goldmine. Key findings:
- 2001 ISG restructuring merged planner + reliability engineer + area manager into one role — structural reason the loop never closes
- Contractors outnumber in-house crafts during HSM downturns — cannot solve maintenance with more contractors
- Procurement even MORE broken: silent requisition cancellations after 60 days, PO change orders require full re-approval, 250 manual vendor calls, material substitutions causing equipment failures
- Paper PMs thrown away. Corrective WOs = "two line chat." After-action review incomplete.
- "Where's my Ask Jeeves?" — explicit search/retrieval request for maintenance info
- Context-dependent alarms = real ML opportunity (vibration depends on product/grade/speed); a residual-alarm sketch follows this list
- Crane 300 = paper binders, no instrumentation, no PLC feedback
- Wi-Fi dead zones = real barrier to mobile solutions
- Transcript #8 — Stubna (Plant Manager):
- 1SP = THE priority. 28 heats/day target. All casters booked through May.
- Slab cut optimization already saving $3M/month (Phil Thorman's project)
- Cobbles = biggest downstream loss. Global Gauges on 2 HSM stands.
- "Full support of corporate." Young leadership open to change.
- Wants anomaly detection + CMMS integration. Manual scheduling.
- Initiatives validated: CLV-02, CLV-04, CLV-11
- Running totals: 9 validated, 8 identified, 4 seed | 21 total initiatives (CLV-01 through CLV-21)
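One way the context-dependent alarm idea could take shape, sketched under assumed data (historian extract with per-coil context; file and column names hypothetical): model expected vibration from product/grade/speed, then alarm on the residual instead of a flat limit. The residual framing is also the lever for the false-alarm sensitivity flagged on Day 5.

```python
# Hedged sketch: context-dependent vibration alarm for an HSM stand.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_parquet("hsm_stand_vibration.parquet")  # hypothetical extract

# Expected vibration as a function of operating context, not a fixed limit.
X = pd.get_dummies(df[["grade", "mill_speed", "strip_width"]])
model = GradientBoostingRegressor().fit(X, df["vibration_rms"])

# Alarm on the residual: "high for THIS product/speed", not "high overall".
residual = df["vibration_rms"] - model.predict(X)
df["context_alarm"] = residual.abs() > 3 * residual.std()
print(f"alarm rate: {df['context_alarm'].mean():.1%}")  # false-alarm check
```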
Day 4 — Feb 27 (Thursday)¶
- 1 transcript analyzed (#9 Databricks meeting)
- Vendor intelligence meeting — Databricks presented lakehouse platform to Andrew + Vooban/IE team
- Key findings:
- CLF cloud strategy: resistance from cybersecurity/IT, but CEO wants enterprise-wide "start to finish" data platform — "the ambition grows every day"
- Legacy acquisitions (Republic Steel, LTV, AK Steel, Inland, Bethlehem) = patchwork of incompatible systems
- Finance, planning, production all operate with separate metrics — no cross-domain visibility
- Databricks works with US Steel, Big River Steel, Tata Steel — use cases shared (order status automation, scrap melt scheduling, safety RAG, energy efficiency)
- Federation capability = can query existing systems without moving data (addresses CLF's resistance to data migration)
- Databricks Genie = natural language querying on enterprise data (similar to maintenance copilot concept but for business users)
- Andrew confirmed: "it's existential — we know we're behind"
- CLF wants a uniform, scalable platform — don't want multiple vendors
- Committed to follow-up technical call on disparate data ingestion
- New initiative: CLV-27 (Enterprise Data Platform)
- Strategic insight: Validates our thesis that data centralization is the emergent outcome, not a separate project. CLF leadership wants the enterprise layer, but our bottom-up approach (project by project) is the right sequencing.
Day 5 — Feb 28 (Friday)¶
- 1 transcript analyzed (#10 Friday 1SP workshop)
- Critical PdM PoV target revision. 1SP leadership team (John Messi, Brian Thompson, Jamie Betts, Evan, Richie) pushed back on Crane 300 as sole PdM target:
- Crane 300 has "easy life" — low utilization, just moves ladle up/down
- Not a large cause of hard delays
- Previous vibration sensor attempts (Asset Watch on crane 2A) failed — time-based readings useless
- Counter-proposal: BOF bag house ID fans (PRIMARY) + BOF scrubbing system (SECONDARY)
- Crane 300 retained as tertiary target — complexity validates methodology
- Key findings from 1SP leadership:
- Delay tracking: homegrown system on Level 2 server, no interaction with Tabware, recently upgraded (early 2025)
- Segment tracking: single contractor (Chris Callahan) maintains Excel sheet, color-coded. Emprotech sole refurbisher. No link to inventory/procurement; a digitization sketch follows this list.
- Critical spares effort: Brian started Jan 2025, ~500 parts to identify. Currently in people's heads.
- No written procedures for breakdown repairs — only for PMs. Green workforce = slow troubleshooting.
- Voice capture strong buy-in from John Messi and Brian — smartphones as natural device
- Raw materials opportunity: John Messi raised burden/BF charges, alloy mixes as highest cost center
- 2SP going 24-hour by September — doubles 2SP maintenance burden
- Technology adoption: receptive IF it solves real problems, doesn't add work. False alarm sensitivity critical.
- IT has "specific marching orders" not to block — Panera Tech Burns Harbor lesson learned
- New initiatives: CLV-22 (bag house), CLV-23 (scrubbing), CLV-24 (segment tracking), CLV-25 (critical spares), CLV-26 (raw materials)
- New stakeholders: John Messi (primary operations), Kevin (Databricks), Dan (Databricks local)
- Updated stakeholders: Brian Thompson (met directly), Jamie Betts (met directly), Evan (mechanical), Richie (electrical)
- Initiatives validated: CLV-12 (with revised targets), CLV-22, CLV-23, CLV-24, CLV-25
- Running totals: 13 validated, 10 identified, 4 seed | 27 total initiatives (CLV-01 through CLV-27)
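On the segment-tracking finding (CLV-24), the first digitization step is mechanical, as the sketch below suggests; the file, sheet, and column names are guesses at the contractor's Excel layout, purely illustrative.

```python
# Sketch: lift the contractor-maintained segment sheet into a structured
# table that inventory/procurement could eventually join against.
# File, sheet, and column names are hypothetical.
import pandas as pd

segments = pd.read_excel("caster_segments.xlsx", sheet_name="Tracking")

# Make the color coding an explicit lifecycle field.
status_map = {"green": "in_service", "yellow": "due_refurb", "red": "at_refurbisher"}
segments["lifecycle"] = segments["status_color"].str.lower().map(status_map)

# Keyed output a Tabware/Axiom integration could consume later.
segments[["segment_id", "caster", "lifecycle", "last_refurb_date"]].to_csv(
    "segment_lifecycle.csv", index=False
)
```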
Prioritization Matrix (to be finalized Day 5)¶
```
                          HIGH VALUE
                              │
      Strategic Bets          │          Quick Wins
      (high value,            │          (high value,
      high complexity)        │          low complexity)
                              │
    ──────────────────────────┼──────────────────────────
                              │
      Deprioritize            │          Low-Hanging Fruit
      (low value,             │          (low value,
      high complexity)        │          low complexity)
                              │
                          LOW VALUE

    COMPLEXITY ◄──────────────────────────────► SIMPLICITY
```
Preliminary placement (based on Days 1-3 validation):
| Quadrant | Initiatives |
|---|---|
| Quick Wins (H1) | CLV-01, CLV-07, CLV-08, CLV-12 (PoV) |
| Strategic Bets | CLV-02, CLV-14, CLV-15 |
| Build the Foundation (H2) | CLV-04, CLV-09, CLV-11, CLV-21 |
| Needs sizing | CLV-03, CLV-05, CLV-06, CLV-10, CLV-13, CLV-16, CLV-17 to CLV-20 |
Formal placement to be completed after value sizing workshop.