O&G Super Major Saved $10M Over 3 Years

Industry: Oil and Gas

Use Case: Diagnostic Analytics

The refining division of a super-major oil and gas company piloted an automated approach for integrating, organizing and managing operational data. The company engaged Element to determine if operational data could be used to address an issue at one of its refineries.

Working with Element, the company predicted it would save at least $3.5M per year for the refinery.

Additionally, the ability to efficiently collect and manage the operational data, along with the resulting analysis, produced several firsts: a data-driven understanding of the extent of the problem and the identification of its root cause.

Challenge

For decades, oil companies have excelled at finding and efficiently processing crude oil. The operational data from this activity, when organized and analyzed properly, can reduce downtime, increase efficiency, and grow profits. Unfortunately, at this company, the operational data created by these efforts was disorganized, fragmented, and largely unusable. Making use of it proved difficult because no tools existed to efficiently collect, manage, and prepare the data for analysis. As a result, valuable insights remained buried.

Recently, the company piloted an automated approach for integrating, organizing, and managing operational data. The work paved the way for data analysis and cost savings.

The company engaged Element to determine if operational data could be used to address an issue at one of its refineries. For more than seven years, refinery engineers had reduced the throughput of a stripper column, used to remove sulfur-containing compounds from naphtha, because frequent fluctuations in the column’s liquid levels caused faults.

Reducing throughput was the only known way to mitigate the fluctuations, but the slowdown hurt efficiency and profits without addressing the root cause. Despite a wealth of operational data from the stripper and other sources, the reason for the fluid level fluctuations remained unknown. Several refinery employees, including a PhD technologist, searched for the cause but were hampered by the lack of tools.

All I had were process charts, a spreadsheet and my brain. We lacked the organizational capabilities to efficiently and reliably gather and manage data for analysis.

Site Technologist

Previous attempts to use the raw data to build diagnostic models had failed to identify the cause. Dozens of staff manually examined detailed spreadsheets, hand-wrote software scripts, and continuously re-prepared and re-organized data to no avail. The cost and time spent on these steps frustrated everyone to the point that productivity and morale suffered. The lack of tools to efficiently and precisely sift through the data needed to be addressed.

Solution

The Element team worked with the customer’s project team to conduct an economic analysis.

The following activities were performed:

The project started with the operational data in Element’s AssetHub™, which provided a 360-degree, flexible, data-oriented view of the relationships between processes and equipment and ultimately enabled identification of the root cause of the frequent fluctuations in the column’s liquid levels. Raw data from the OSIsoft PI System, totaling 1,600 sensor streams spanning seven years, was imported into AssetHub deployed on Microsoft Azure.

The data was then translated into flexible, digital representations of the unit’s equipment and operations in the form of asset data models (or Asset Twins). Engineers were able to evaluate and improve the quality of the data on a continuing basis – identifying, for example, a piece of equipment that wasn’t generating enough operational data and fixing the problem by replacing and adding sensors to the equipment. Data was then analyzed by Element and the project team to build a set of diagnostic models to identify deviations occurring in the process prior to each fault. These models were then connected to visualizations in Microsoft PowerBI and OSIsoft PI Vision to support analysis of the historical data.
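The case study does not describe the internals of the diagnostic models. As a rough illustration of the general idea of flagging deviations in a sensor stream before a fault, a rolling z-score check (a standard, generic stand-in, not Element's actual method) might look like:

```python
import statistics

def rolling_zscores(values, window=20, threshold=3.0):
    """Flag samples that deviate sharply from a rolling baseline.

    Generic anomaly-detection sketch only -- NOT Element's diagnostic
    model, whose internals the case study does not describe.
    """
    flags = []
    for i, v in enumerate(values):
        if i < window:
            flags.append(False)  # not enough history yet
            continue
        baseline = values[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.pstdev(baseline)
        flags.append(sigma > 0 and abs(v - mu) / sigma > threshold)
    return flags

# Invented level signal with mild noise and one sharp excursion at index 30.
level = [10.0, 10.2] * 15 + [25.0] + [10.0, 10.2, 10.0]
flags = rolling_zscores(level)
```

Here a sample is flagged when it sits more than three standard deviations from the recent rolling baseline; a production diagnostic model would combine many such streams and more robust statistics.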

Results

Working with Element, the company predicted it would save at least $3.5M per year for the refinery.

Additionally, the ability to efficiently collect and manage the operational data, along with the resulting analysis, led to several results. These included:

  1. A data-driven understanding of the extent of the problem: 276 faults were caused by variations in stripper column fluid levels throughout the seven years of data analyzed. This ruled out previously theorized causes, including the condition of the stripper column and changes in its throughput. The average fault duration was 0.4 days and the mean time between faults was 7.4 days.
  2. Identifying the root cause: The data-driven diagnostic models enabled engineers to determine that excessive heat supplied in the reboiler of the stripper column was strongly correlated with the onset of the fluctuations. The refinery engineer who had previously struggled to find the cause now understood why: the answer had been buried in the utilities side of the business, an area she was unfamiliar with and lacked the tools to examine.

I would never have looked at that part of the process without Element.

Site Technologist
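The summary statistics reported above (fault count, average duration, mean time between faults) follow directly from a fault event log. A minimal sketch, assuming a hypothetical log of (start_day, end_day) pairs rather than the refinery's actual data:

```python
def fault_stats(faults):
    """Summarize a fault log given as (start_day, end_day) pairs.

    Illustrative only -- the case study reports the figures (276 faults,
    0.4-day mean duration, 7.4-day mean time between faults) but not how
    they were computed; this shows one conventional definition.
    """
    n = len(faults)
    mean_duration = sum(end - start for start, end in faults) / n
    # Mean time between faults: gap from one fault's end to the next start.
    gaps = [faults[i + 1][0] - faults[i][1] for i in range(n - 1)]
    mtbf = sum(gaps) / len(gaps)
    return n, mean_duration, mtbf

# Hypothetical log: three faults over roughly sixteen days.
log = [(0.0, 0.5), (8.0, 8.3), (15.5, 16.1)]
count, dur, mtbf = fault_stats(log)
```

Note that "mean time between faults" is defined here as the end-to-next-start gap; other conventions (start-to-start) exist, and the case study does not say which was used.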

Element enabled the customer to shift from analyzing data one sensor at a time to analyzing multiple, seemingly unrelated data streams comprehensively. Based on these findings, a program was developed to limit future excessive heating in the stripper column’s reboiler.
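As an illustration of relating seemingly unrelated streams, a plain Pearson correlation between a utilities-side signal and a column-side signal can quantify the kind of association the team found. Both series below are invented for the sketch; the case study does not publish the actual sensor values:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length sensor streams."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical streams: reboiler heat duty and column level variance
# rising and falling together, as the diagnostic models suggested.
reboiler_duty = [50, 52, 55, 61, 70, 68, 60, 54]
level_variance = [1.0, 1.1, 1.3, 1.9, 2.8, 2.6, 1.8, 1.2]
r = pearson(reboiler_duty, level_variance)
```

A correlation near 1.0 would flag the utilities-side stream as a candidate driver worth investigating, which is the step the refinery engineer could not take one sensor at a time.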
