For decades, oil companies have excelled at finding and efficiently processing crude oil. The operational data generated by this work, when properly organized and analyzed, can reduce downtime, increase efficiency, and grow profits. At this company, however, that data was disorganized, fragmented, and largely unusable, and no tools existed to efficiently collect, manage, and prepare it for analysis. Valuable insights remained buried.

Recently, the company piloted an automated approach for integrating, organizing, and managing operational data. The work paved the way for data analysis and cost savings.
The company engaged Element to determine whether operational data could be used to address a long-standing issue at one of its refineries. For more than seven years, refinery engineers had run a stripper column, used to remove sulfur-containing compounds from naphtha, below capacity because frequent fluctuations in the column's liquid level triggered faults.
Reducing throughput kept the fluctuations in check, but the slowdown hurt efficiency and profits and left the root cause untouched. Despite a wealth of operational data from the stripper and other sources, the reason for the level fluctuations remained unknown. Several refinery employees, including a PhD technologist, searched for the cause but were hampered by the lack of suitable tools.
Previous attempts to build diagnostic models from the raw data had failed to identify the cause. Dozens of staff manually combed through detailed spreadsheets, hand-wrote software scripts, and repeatedly re-prepared and re-organized the data, to no avail. The cost and time consumed by these steps frustrated everyone to the point that productivity and morale suffered. The refinery needed tools that could sift through the data efficiently and precisely.
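To make concrete the kind of hand-written diagnostic script this paragraph describes, here is a minimal, purely illustrative sketch in Python. The tag names (feed_rate, reboiler_duty, level), the 15-minute lag, and the synthetic data are all hypothetical assumptions for illustration, not the refinery's actual variables or tooling; it simply scans lagged correlations to flag which upstream signal might lead the column's level swings.

```python
# Illustrative only: a hypothetical ad-hoc diagnostic script of the kind
# engineers hand-wrote. Tag names, lags, and data are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000  # one reading per minute

# Synthetic stand-ins for historian extracts of three process tags.
feed_rate = 100 + rng.normal(0, 2, n).cumsum() * 0.05
reboiler_duty = 50 + rng.normal(0, 1, n)
# Level is built to respond to feed-rate swings with a 15-minute lag.
level = 60 + 0.8 * np.roll(feed_rate - feed_rate.mean(), 15) + rng.normal(0, 1, n)

df = pd.DataFrame(
    {"feed_rate": feed_rate, "reboiler_duty": reboiler_duty, "level": level},
    index=pd.date_range("2020-01-01", periods=n, freq="min"),
)

# Scan lagged correlations: which tag, at which lag, best predicts level?
for tag in ("feed_rate", "reboiler_duty"):
    best = max(
        ((lag, df[tag].shift(lag).corr(df["level"])) for lag in range(60)),
        key=lambda t: abs(t[1]),
    )
    print(f"{tag}: strongest correlation {best[1]:+.2f} at a lag of {best[0]} min")
```

Against real historian data, a script like this has to be rewritten and its inputs re-prepared for every new hypothesis, which is precisely the manual burden described above.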