Confidential
Chapter 03 · Main Report

Cross-Site Technology & Data Landscape


Introduction

During the Innovation Sprint, we assessed the technology and data infrastructure at four operational sites: Cleveland Works, Middletown Works, Burns Harbor, and Tilden Mine. Together these sites represent both of CLF's core industries (steelmaking and mining), three different corporate heritage lines (LTV/ISG, AK Steel/Armco, Bethlehem Steel), and the full vertical integration chain from iron ore to finished product.

The assessment was not an IT audit. It was conducted through practitioner conversations to understand how data actually flows through operations and maintenance, where it breaks down, and what that means for the AI initiatives on the roadmap. The finding is consistent: data exists at every site, often in significant depth. The problem is that it sits in islands. The systems are different everywhere, but the pattern is identical.

This chapter summarizes what we found and why it matters.

CMMS & Maintenance Systems

The maintenance management layer is the most fragmented technology category across CLF. Four sites use three different platforms, none of which communicate with one another.

| Site | CMMS Platform | Heritage | Status |
| --- | --- | --- | --- |
| Cleveland Works | Tabware | LTV/ISG | On-prem, Citrix-based; five versions behind the current release |
| Burns Harbor | Tabware | Bethlehem Steel | Same platform as Cleveland; 24-hour data refresh to the data warehouse |
| Middletown Works | Teams/SWAMI | Armco/AK Steel | Legacy system, 20+ years old; no active vendor support |
| Tilden Mine | Ellipse | Mining operations | Integrated instance shared across all CLF mining sites |

Cleveland and Burns Harbor share the same Tabware platform but operate in siloed instances with no cross-plant visibility. The only mechanism for cross-plant reporting is a nightly data dump into an Access database, always one day behind. As one corporate stakeholder put it: "It's 2026, and that's obviously frustrating."

Middletown's Teams/SWAMI system dates to the Armco era. It has no vendor support. John Sabo (corporate procurement/maintenance systems) described it bluntly: "Ridiculous that a facility that size is using something like that."

Tilden's Ellipse is, paradoxically, the most mature of the four. It runs as a single instance across all mining sites, providing the cross-site visibility that the steel-side Tabware installations lack.

EAM Migration: CLF is migrating Tabware sites to a cloud-based EAM platform (Aptean). Cleveland goes first in September/October 2026, followed by Burns Harbor in early 2027. The migration is plant-by-plant, not a single cutover. Critically, the new EAM will still operate as siloed per-plant instances. Cross-plant reporting will need to be built separately. This migration creates both a timing constraint and an opportunity: AI projects that touch maintenance data should align with the EAM rollout rather than compete with it.

Process Data & Historians

Rich operational data exists at every site, but in different systems and at different levels of maturity.

| Site | Primary Historian | Coverage | Depth |
| --- | --- | --- | --- |
| Cleveland Works | OSIsoft Pi | Steel producing operations | 10 years of continuous data |
| Cleveland Works | Wonderware | Powerhouses, water treatment | Separate instances |
| Cleveland Works | Emerson | Iron producing (BF wireless sensors) | Cloud-based |
| Middletown Works | IBA | Hot strip mill, process lines | Millisecond resolution |
| Burns Harbor | IBA | Hot strip mill | Decades of data |
| Burns Harbor | Multiple data warehouses | Production statistics, delay codes | On-prem + partial cloud |
| Tilden Mine | OSIsoft Pi | Concentrator, pellet plant | 1.3 billion entries |
| Tilden Mine | Foxboro IA DCS | Concentrator/pellet plant control | Late-1990s vintage |

The data is deep. Burns Harbor's hot mill team described their IBA historian as containing "all the data you can ever hope for." Tilden's Pi instance holds 1.3 billion entries. Cleveland has 10 years of continuous steel-producing data. Middletown's IBA captures at millisecond resolution, and R&D is already using it for cobble analysis.

The challenge is not data volume. It is connectivity. At every site, the historian data lives in its own domain, disconnected from maintenance records, delay reports, and quality systems. Linking a process excursion in the historian to the work order it triggered (or should have triggered) requires manual effort by someone who knows both systems. That linkage is the foundation for every AI initiative on the roadmap.

ERP & Business Systems

The enterprise layer reflects CLF's acquisition history. Each heritage company brought its own systems, and full standardization has not been achieved.

| Site | ERP / Business System | Purchasing | Notes |
| --- | --- | --- | --- |
| Cleveland Works | Axiom | Via Tabware | No Oracle at Cleveland |
| Middletown Works | Oracle EBS | Oracle | Implemented from AK Steel's Dearborn plant; inconsistent rollout |
| Burns Harbor | SAP HANA + Oracle EBS | Oracle + Tabware | SAP built since 2017; 13 integration projects post-CLF merger |
| Tilden Mine | Ellipse (integrated) | Ellipse | Purchasing, maintenance, and warehousing in one system |

Burns Harbor carries the most complex enterprise stack. SAP HANA, Oracle EBS, Tabware, a legacy system called COS, and an IBM mainframe all coexist. During the EAM transition, buyers at some sites will work in three systems simultaneously. Lisa, the SAP architect at Burns Harbor with 36 years of experience, documented the integration architecture across 13 post-merger projects. Her documentation is a corporate-level asset.

The practical impact: procurement and inventory data follows different paths at every site. A spare part at Cleveland lives in Axiom. The same part at Burns Harbor lives in Oracle and Tabware. At Tilden, it lives in Ellipse. Any AI project that touches inventory or procurement must account for this heterogeneity at the integration layer.
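One way to account for that heterogeneity is a thin abstraction layer: each ERP gets an adapter that answers the same question in the same shape. This is a minimal sketch; the system names are real (Axiom, Oracle EBS + Tabware, Ellipse), but the interface, part numbers, and returned fields are hypothetical placeholders for whatever the real integration layer would expose.

```python
from abc import ABC, abstractmethod

class PartsSource(ABC):
    """Common interface every site-specific adapter implements."""
    @abstractmethod
    def find_part(self, part_number: str) -> dict: ...

class AxiomSource(PartsSource):          # Cleveland
    def find_part(self, part_number):
        return {"part": part_number, "system": "Axiom", "site": "Cleveland"}

class OracleTabwareSource(PartsSource):  # Burns Harbor
    def find_part(self, part_number):
        return {"part": part_number, "system": "Oracle+Tabware", "site": "Burns Harbor"}

class EllipseSource(PartsSource):        # Tilden
    def find_part(self, part_number):
        return {"part": part_number, "system": "Ellipse", "site": "Tilden"}

def lookup_everywhere(part_number, sources):
    """One query, normalized across heterogeneous ERP back ends."""
    return [s.find_part(part_number) for s in sources]
```

The point of the pattern is that the integration cost is paid once, in the adapters; every AI project that touches inventory or procurement then works against one interface instead of four back ends.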

Quality & Inspection Systems

Quality systems vary significantly, but the pattern of untapped potential is consistent.

Middletown operates Ametek surface inspection cameras on its hot strip mill. These cameras currently run at approximately 60 percent accuracy. The investigation-restart problem (the system loses context between coils) contributes to a quality loss rate of 1 percent, roughly double the historical baseline. Palmer has flagged surface inspection as a priority. Retraining the classifier with connected process data is a high-value, well-scoped opportunity.

Burns Harbor uses a Quality Management System (QMS) to flag coils for review. Senior operators estimate that 80 percent of the manual disposition decisions could be automated. Temperature maps have been collected for decades but are not read in real time. OCR cameras for field inventory are being evaluated.

Cleveland has partial gauge coverage, with two stands on the hot strip mill not actively controlling strip steering.

The common gap: quality data exists, but it is not connected to upstream process data. A defect detected at the coil surface cannot be automatically traced back to the casting or rolling conditions that caused it. Building that genealogy, from heat to coil to customer, is one of the highest-value data integration opportunities across the enterprise.
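Structurally, that genealogy is a chain of keyed lookups: claim to coil, coil to heat, heat to chemistry. The sketch below is purely illustrative; the IDs, fields, and defect names are invented, and real records would come from the melt shop, caster, mill, and claims systems.

```python
# Hypothetical genealogy records: heat -> coil -> customer claim.
heats = {"H-881": {"chemistry": {"C": 0.06, "Mn": 0.45}}}
coils = {"C-3301": {"heat": "H-881", "mill": "HSM", "finish_temp_f": 1585}}
claims = [{"coil": "C-3301", "defect": "edge_crack"}]

def trace_claim(claim, coils, heats):
    """Walk a customer claim back through the coil record to the heat,
    surfacing the upstream conditions that may have caused the defect."""
    coil = coils[claim["coil"]]
    heat = heats[coil["heat"]]
    return {
        "defect": claim["defect"],
        "coil": claim["coil"],
        "finish_temp_f": coil["finish_temp_f"],
        "heat": coil["heat"],
        "chemistry": heat["chemistry"],
    }
```

The hard part is not the lookup logic but populating the keys: today the coil-to-heat and claim-to-coil links are exactly the joins that do not exist across systems.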

Infrastructure Constraints

Three infrastructure constraints have direct implications for AI project deployment.

Burns Harbor cloud bandwidth. Patrick, the Process Automation manager for central/coke/iron, described the site's cloud connection as "a very delicate and small pipe." This is a years-old constraint that blocks cloud-based AI deployments at CLF's largest facility. Any initiative requiring cloud compute at Burns Harbor must either resolve the bandwidth bottleneck or deploy on-premises.

Wi-Fi coverage gaps. At Cleveland, the plant environment was described as "a metal box inside another metal box." Floor workers at Indiana Harbor (CLF's other major integrated site, assessed during the Burns Harbor sprint) have no wireless connectivity in buildings. Scheduling updates reach the floor only via LAN terminals that refresh every 180 seconds. Mobile-first solutions require network investment.

Read-only database access. At Burns Harbor, only the hot mill process automation group has a read-only database replica. Other departments risk disrupting production systems if they run analytical queries against live databases. This is a standard SQL replication fix, but until it is implemented, it limits who can safely work with operational data.

Data Maturity Assessment

The table below summarizes each site's data strengths and gaps as observed during the sprint.

| Dimension | Cleveland | Middletown | Burns Harbor | Tilden |
| --- | --- | --- | --- | --- |
| Historian depth | Strong (10yr Pi) | Strong (IBA, millisecond) | Strong (IBA, decades) | Strong (1.3B Pi entries) |
| CMMS data quality | Moderate (95% hierarchy) | Weak (legacy, no support) | Moderate (24hr lag) | Strong (Ellipse, rigorous) |
| Cross-system linkage | None | None | None | None |
| Reporting tools | Limited | SAS (broken) | Power BI (emerging) | Business Objects + Power BI |
| Process control access | L2 (in-house, upgradable) | Siemens (black box) | GE (source code available) | Foxboro DCS (1990s) + G2 fuzzy logic |
| Inventory management | Unknown | IBM mainframe + Oracle | IMS (1980s-era) | Ellipse + ABCD cycle counting |
| Paper/manual processes | High | Moderate | High (1980s IMS, 6-person team) | Moderate |
| Key gap | No ops-maint linkage | No vendor support for CMMS | Cloud bandwidth, DB access | Fleet dispatch disconnected from maintenance |

Cross-Site Patterns

The table below distills the landscape into patterns that repeat across all four sites.

| Pattern | Observation | Implication for Roadmap |
| --- | --- | --- |
| 3 CMMS platforms, 0 cross-plant visibility | Tabware (Cleveland, Burns Harbor), Teams/SWAMI (Middletown), Ellipse (Tilden). No shared data model. | The maintenance datamart must normalize across all three platforms. The integration cost is paid once; every subsequent maintenance initiative benefits. |
| Rich historians, isolated islands | Every site has deep operational data (Pi, IBA, Foxboro). None of it connects to maintenance or quality systems. | Linking historian data to work orders and delay reports is the single highest-leverage data engineering task. |
| Acquisition-driven ERP fragmentation | Axiom, Oracle, SAP, Ellipse. Each site inherited its predecessor's business systems. | Procurement and inventory AI must abstract across ERP boundaries. A common integration layer avoids rebuilding per site. |
| Quality data exists but lacks genealogy | Surface inspection, temperature maps, chemistry data, and customer claims all exist. They are not linked heat-to-coil. | Through-process quality traceability is an H2 opportunity that builds on H1 maintenance data integration. |
| Infrastructure varies, but the bottleneck is connectivity | Cloud bandwidth at Burns Harbor, Wi-Fi gaps at Cleveland, no wireless at Indiana Harbor. | Deployment architecture must be flexible: cloud-first where bandwidth allows, edge/on-prem where it does not. |
| The people bridge the gaps | At every site, specific individuals hold the cross-system knowledge in their heads. They export, merge, and interpret data manually. | This is both the risk (knowledge flight) and the validation (the data connections are valuable, because people already make them by hand). |

The systems are different at every site, but the diagnosis is the same. Data exists in islands. The historian knows what happened on the process side. The CMMS knows what happened on the maintenance side. The ERP knows what was purchased. No single system connects them. At every site, people bridge these gaps manually, and they do it because the connections are valuable.

This is the information flow problem. It is not unique to any one site or any one system. It is structural, and it is why the roadmap applies the same integration-first approach everywhere: connect the islands, normalize the data, and then build the AI applications on top of a foundation that works across the enterprise.

The detailed system inventory is available in Appendix D.