Super Major Oil and Gas Company

Diagnostic analytics

$10 M saved

over three years

1,600 sensor streams

analyzed

276 faults

uncovered

The refining division of a super-major oil and gas company piloted an automated approach for integrating, organizing and managing operational data. The company engaged Element to determine if operational data could be used to address an issue at one of its refineries.

Working with Element, the company predicted it would save at least $3.5M per year for the refinery.

Additionally, the ability to efficiently collect and manage the operational data, along with the resulting analysis, led to two key outcomes: for the first time, a data-driven understanding of the extent of the problem, and the ability to identify its root cause.

CHALLENGE

For decades, oil companies have excelled at finding and efficiently processing crude oil. The operational data from this activity, when organized and analyzed properly, can reduce downtime, increase efficiency, and grow profits. Unfortunately, at this company, the operational data created by these efforts was disorganized, fragmented, and largely unusable. Making use of it proved difficult because no tools existed to efficiently collect, manage, and prepare the data for analysis. As a result, valuable insights remained buried.

Recently, the company piloted an automated approach for integrating, organizing, and managing operational data. The work paved the way for data analysis and cost savings.

The company engaged Element to determine if operational data could be used to address an issue at one of its refineries. For more than seven years, refinery engineers had reduced the throughput of a stripper column, used to remove sulfur-containing compounds from naphtha, because frequent fluctuations in the column's liquid levels were causing faults.

Reducing throughput was the only way to address the fluctuations but, in the process, the slowdown harmed efficiency and profits and didn't address the root cause of the problem. The cause of the fluid-level fluctuations remained unknown despite the wealth of operational data from the stripper and other sources. Several refinery employees, including a PhD technologist, searched for the cause but were hampered by the lack of tools.

Previous attempts to use the raw data to build diagnostic models had failed to identify the cause. Dozens of staff manually examined detailed spreadsheets, hand-wrote software scripts, and continuously re-prepared and re-organized data to no avail. The cost and time spent on these steps frustrated everyone to the point that productivity and morale suffered. The lack of tools to efficiently and precisely sift through the data needed to be addressed.



SOLUTION

The Element team worked with the customer’s project team to conduct an economic analysis.

The following activities were performed:

Connect

  • Element Unify hosted within Element's Azure tenant
  • Ingest metadata from OSIsoft PI data archives

Manage

  • Design and build equipment-centric Asset Twins based upon the desired attributes of the stripper column and other process equipment
  • Transform and contextualize data, including removing turnaround data and creating event labels for faulty periods
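As a rough illustration of the contextualization step above, the sketch below removes a turnaround window and labels faulty periods in a sensor stream. It is a hypothetical example using pandas; the column names, the turnaround window, and the level threshold are assumptions, not details from the actual project.

```python
import pandas as pd

# Hypothetical sensor data: timestamped liquid-level readings for the stripper column.
readings = pd.DataFrame({
    "timestamp": pd.date_range("2020-01-01", periods=8, freq="h"),
    "liquid_level": [52.0, 51.5, 88.9, 90.2, 51.8, 52.1, 89.5, 51.9],
})

# Hypothetical turnaround window to exclude (unit offline, data not representative).
turnaround = (pd.Timestamp("2020-01-01 04:00"), pd.Timestamp("2020-01-01 05:00"))
cleaned = readings[~readings["timestamp"].between(*turnaround)].copy()

# Create event labels for faulty periods: here, any reading above an
# assumed "normal" band counts as part of a fault.
LEVEL_HIGH = 80.0  # assumed threshold, not from the source
cleaned["fault"] = cleaned["liquid_level"] > LEVEL_HIGH
```

In practice these labels would be derived from the refinery's own fault records rather than a fixed threshold, but the structure (exclude non-representative periods, then tag faulty intervals) is the same.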

Share

  • Export various hierarchies and raw data to OSIsoft PI Asset Framework (AF) and to the customer's Azure storage for use with Microsoft Power BI, OSIsoft PI Vision, Coresight, and ProcessBook

The project started with Element Unify, which provided a 360-degree, flexible, data-oriented view of the relationships between processes and equipment and made it possible to identify the root cause of the frequent fluctuations in the column's liquid levels. Raw data from the OSIsoft PI System, totaling 1,600 sensor streams spanning seven years, was imported into Element Unify deployed on Microsoft Azure.

The data was then translated into flexible, digital representations of the unit's equipment and operations in the form of asset data models (or Asset Twins). Engineers were able to evaluate and improve the quality of the data on a continuing basis – identifying, for example, a piece of equipment that wasn't generating enough operational data and fixing the problem by replacing and adding sensors to the equipment. Data was then analyzed by Element and the project team to build a set of diagnostic models to identify deviations occurring in the process prior to each fault. These models were then connected to visualizations in Microsoft Power BI and OSIsoft PI Vision to support analysis of the historical data.
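The source does not describe the diagnostic models in detail. As a hypothetical stand-in for "identifying deviations occurring in the process prior to each fault," a simple rolling-statistics check flags readings that depart sharply from their recent baseline; the function name, window, and threshold here are illustrative assumptions, not Element's actual models.

```python
import numpy as np

def deviations_before_fault(signal, window=5, threshold=3.0):
    """Flag points that deviate from the rolling mean of the preceding
    `window` readings by more than `threshold` standard deviations.
    A simplified sketch, not the project's actual diagnostic model."""
    signal = np.asarray(signal, dtype=float)
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        recent = signal[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(signal[i] - mu) > threshold * sigma:
            flags[i] = True
    return flags

# Steady readings with one sharp excursion (e.g. a sudden heat spike).
readings = [50.0, 50.2, 49.9, 50.1, 50.0, 50.1, 65.0, 50.2]
flags = deviations_before_fault(readings)
```

Running such a check across many sensor streams at once, rather than one at a time, is what lets seemingly unrelated signals (such as reboiler heat) be correlated with fault onsets.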



RESULTS

Working with Element, the company predicted it would save at least $3.5M per year for the refinery. Additionally, the ability to efficiently collect and manage the operational data, along with the resulting analysis, led to several results. These included:

  • A data-driven understanding of the extent of the problem: 276 faults were caused by variations in stripper column fluid levels over the seven years of data analyzed. This ruled out previously theorized causes, including the condition of the stripper column and changes in its throughput. The average fault duration was 0.4 days, and the mean time between faults was 7.4 days.
  • Identifying the root cause: The data-driven diagnostic models enabled engineers to determine that excessive heat supplied in the stripper column's reboiler was strongly correlated with the onset of the fluctuations. The refinery engineer who had previously struggled to find the cause now understood why: the answer had been buried in the utilities side of the business, with which she was unfamiliar and whose data she lacked the tools to examine.
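Statistics like average fault duration and mean time between faults fall directly out of the event labels. The sketch below shows the arithmetic on three hypothetical fault events (the timestamps are invented; they are chosen only so the results match the 0.4-day duration and 7.4-day spacing reported above).

```python
from datetime import datetime, timedelta

# Hypothetical fault events as (start, end) pairs, as might be derived
# from the event labels created during contextualization.
faults = [
    (datetime(2021, 1, 1, 0, 0),  datetime(2021, 1, 1, 9, 36)),
    (datetime(2021, 1, 8, 9, 36), datetime(2021, 1, 8, 19, 12)),
    (datetime(2021, 1, 15, 19, 12), datetime(2021, 1, 16, 4, 48)),
]

# Average fault duration (end minus start, averaged over all events).
avg_duration = sum((end - start for start, end in faults), timedelta()) / len(faults)

# Mean time between faults, measured start-to-start.
gaps = [faults[i + 1][0] - faults[i][0] for i in range(len(faults) - 1)]
mtbf = sum(gaps, timedelta()) / len(gaps)

print(f"average fault duration: {avg_duration.total_seconds() / 86400:.1f} days")
print(f"mean time between faults: {mtbf.total_seconds() / 86400:.1f} days")
```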

Element enabled the customer to shift from analyzing data one sensor at a time to a more extensive approach of analyzing multiple, seemingly unrelated data streams comprehensively. As a result of these findings, a program was developed to limit future excessive heating in the stripper column’s reboiler.


All I had were process charts, a spreadsheet and my brain. We lacked the organizational capabilities to efficiently and reliably gather and manage data for analysis.

Site Technologist
