O&G Super Major Identified Cause of Failure, Saving $10M Over 3 Years

Industry: Oil and Gas

Use Case: Diagnostic Analytics

For decades, oil companies have excelled at finding and efficiently processing crude oil. The same can’t be said of a major byproduct of these efforts: operational data. Like crude oil before the 20th century, operational data at oil companies today lies largely, and frustratingly, unused. When analyzed, operational data promises to reduce downtime, increase efficiency and grow profits. But making use of this data has proven difficult because tools haven’t existed to efficiently collect, manage and prepare it for sharing and analysis. As a result, valuable insights have remained buried. Recently, however, the refining division of a super-major oil and gas company piloted an automated approach to integrating, organizing and managing operational data. The work paved the way for data analysis and the possibility of saving $3.5M per year.

Challenge

The company engaged Element to determine whether operational data could be used to address an issue at one of its refineries. For more than seven years, refinery engineers had reduced the throughput of a stripper column, used to remove sulfur-containing compounds from naphtha, because frequent fluctuations in the column’s liquid level were causing faults. Reducing throughput was the only known way to suppress the fluctuations, but the slowdown hurt efficiency and profits and didn’t address the root cause of the problem. The reason for the fluid-level fluctuations remained unknown despite the wealth of operational data from the stripper and other sources. Several refinery employees, including a PhD technologist, searched for the cause but were hampered by the lack of suitable tools.

“All I had were process charts, a spreadsheet and my brain. We lacked the organizational capabilities to efficiently and reliably gather and manage data for analysis.” (Site Technologist)

Previous attempts to build diagnostic models from the raw data had failed to identify the cause. Dozens of staff manually examined detailed spreadsheets, hand-wrote software scripts, and continuously re-prepared and re-organized data, to no avail. The cost and time spent on these steps frustrated everyone to the point that productivity and morale suffered. The refinery needed tools to sift through the data efficiently and precisely.

Solution

Element’s Forward Deployed Engineers (FDEs) worked with the customer’s project team (process engineers, a site engineering leader, and the OSIsoft PI administrator) to conduct an economic analysis. The analysis estimated that fixing the issue and returning the stripper column to full throughput would save the refinery at least $3.5M per year. The project began by working with the operational data in Element’s AssetHub. The following activities were performed:

Connect, Manage, Share
- Element AssetHub hosted within Element’s Azure tenant
- Ingest metadata from OSIsoft PI data archives
- Design and build equipment-centric Asset Twins based upon the desired attributes of the stripper column and other process equipment
- Transform and contextualize data, including removing turnaround data and creating event labels for faulty periods (see the sketch after this list)
- Export various hierarchies and raw data to OSIsoft PI Asset Framework (AF) and to the customer’s Azure storage for use with Microsoft PowerBI, OSIsoft PI Vision, Coresight, and ProcessBook
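As an illustration of the transform-and-contextualize step, the minimal pandas sketch below labels faulty periods and drops a turnaround window from a sensor extract. The column names, operating band, and turnaround window are all hypothetical; the case study does not describe the actual implementation.

```python
import pandas as pd

# Hypothetical sensor extract: timestamped stripper-column level readings.
# In the actual project these came from the OSIsoft PI archives; here we
# fabricate a small frame purely for illustration.
data = pd.DataFrame({
    "timestamp": pd.date_range("2018-01-01", periods=8, freq="h"),
    "level_pct": [52.0, 51.5, 78.0, 83.0, 50.9, 51.2, 12.0, 11.5],
})

# Remove an assumed turnaround window, when the unit was down for planned
# maintenance and readings are not representative of normal operation.
turnaround = (pd.Timestamp("2018-01-01 02:00"), pd.Timestamp("2018-01-01 03:00"))
data = data[~data["timestamp"].between(*turnaround)]

# Label faulty periods: any reading outside an assumed normal operating
# band for the column's liquid level counts as part of a fault event.
LOW, HIGH = 40.0, 65.0
data["fault"] = ~data["level_pct"].between(LOW, HIGH)

print(data)
```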

Outcome

Element’s AssetHub provided a 360-degree, flexible, data-oriented view of the relationships between processes and equipment, which made it possible to identify the root cause of the frequent fluctuations in the column’s liquid level. Raw data from the OSIsoft PI System, totaling 1,600 sensor streams spanning seven years, was imported into Element AssetHub deployed on Microsoft Azure. The data was then translated into flexible, digital representations of the unit’s equipment and operations in the form of asset data models (or Asset Twins). Engineers were able to evaluate and improve the quality of the data on a continuing basis, identifying, for example, a piece of equipment that wasn’t generating enough operational data and fixing the problem by replacing and adding sensors. Element and the project team then analyzed the data to build a set of diagnostic models that identified deviations occurring in the process prior to each fault. These models were connected to visualizations in Microsoft PowerBI and OSIsoft PI Vision to support analysis of the historical data.
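The case study does not detail the diagnostic models themselves. A minimal sketch of one common approach, scoring how far each sensor stream deviates from its baseline in the window just before each fault, might look like this (the tag names, fault times, and data are all assumptions):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical stand-in for the 1,600 PI sensor streams: hourly readings
# for a handful of tags, plus known fault start times. Tag names are
# assumptions, not the refinery's actual PI tags.
idx = pd.date_range("2018-01-01", periods=500, freq="h")
streams = pd.DataFrame(
    rng.normal(size=(len(idx), 3)),
    index=idx,
    columns=["reboiler_duty", "feed_rate", "column_dp"],
)
fault_starts = [idx[120], idx[300], idx[450]]

# For each stream, compare the mean of the window just before each fault
# with the stream's overall baseline, in units of standard deviation.
# Streams that consistently deviate before faults are diagnostic leads.
WINDOW = pd.Timedelta(hours=6)
scores = {}
for col in streams.columns:
    base_mean, base_std = streams[col].mean(), streams[col].std()
    devs = [
        (streams.loc[t - WINDOW:t, col].mean() - base_mean) / base_std
        for t in fault_starts
    ]
    scores[col] = float(np.mean(devs))

print(pd.Series(scores).sort_values(ascending=False))
```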

The ability to efficiently collect and manage the operational data, along with the resulting analysis, led to several results. These included:

  • A data-driven understanding of the extent of the problem: 276 faults were caused by variations in the stripper column’s fluid level over the seven years of data analyzed, with an average fault duration of 0.4 days and a mean time between faults of 7.4 days (a worked sketch of these summary statistics follows the quote below). This ruled out previously theorized causes, including the condition of the stripper column and changes in the column’s throughput.
  • Identifying the root cause: The data-driven diagnostic models enabled engineers to determine that excessive heat supplied in the stripper column’s reboiler was strongly correlated with the onset of the fluctuations. The refinery engineer who had previously struggled to find the cause now understood why: the answer had been buried in the utilities side of the business, with which she was unfamiliar and for which she lacked the tools to examine the data.
“I would never have looked at that part of the process without Element.” (Site Technologist)
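As an illustration of how the summary statistics above could be derived once faults are labeled, the sketch below computes fault count, average duration, and mean time between faults from a hypothetical event log (the timestamps are fabricated; the real analysis used the 276 labeled events):

```python
import pandas as pd

# Hypothetical fault event log: start/end timestamps of each labeled
# fault, as might be derived from the event labels created in AssetHub.
faults = pd.DataFrame({
    "start": pd.to_datetime(["2018-01-03 04:00", "2018-01-11 09:00",
                             "2018-01-18 22:00"]),
    "end":   pd.to_datetime(["2018-01-03 14:00", "2018-01-11 18:00",
                             "2018-01-19 07:00"]),
})

n_faults = len(faults)
mean_duration_days = (faults["end"] - faults["start"]).mean() / pd.Timedelta(days=1)
# Mean time between faults: average gap from one fault's start to the next.
mtbf_days = faults["start"].diff().mean() / pd.Timedelta(days=1)

print(f"{n_faults} faults, avg duration {mean_duration_days:.1f} days, "
      f"mean time between faults {mtbf_days:.1f} days")
```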

Element enabled the customer to shift from analyzing data one sensor at a time to analyzing multiple, seemingly unrelated data streams together. As a result of these findings, a program was developed to limit future excessive heating in the stripper column’s reboiler, as sketched below.
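The case study gives no specifics of that program; a minimal sketch of the kind of operating-limit check it might automate, with an assumed tag name and limit, could look like this:

```python
import pandas as pd

# Hypothetical operating-limit check on the reboiler duty stream, the kind
# of guardrail a heating-limit program might automate. The tag name and
# limit are illustrative; the case study gives no implementation details.
REBOILER_DUTY_LIMIT = 95.0  # assumed limit, percent of design duty

readings = pd.Series(
    [88.0, 91.5, 97.2, 93.0],
    index=pd.date_range("2019-06-01", periods=4, freq="h"),
    name="reboiler_duty_pct",
)

for ts, value in readings[readings > REBOILER_DUTY_LIMIT].items():
    print(f"ALERT {ts}: reboiler duty {value:.1f}% exceeds assumed limit")
```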