I’m frustrated. I have all this data that I have been collecting for years. I know I have opportunities to get more out of my plant. I have some working theories about what could give me more from my existing assets. BUT I’d like the data to confirm this so I can:
Prove that I’m right!
Demonstrate value to others
Get the resources I need (people, time and money) to implement it
Make a benefit-to-cost value judgment
This dilemma haunted me for most of my professional life and I see it haunting most of my peers, customers and potential customers today. Let’s talk about how we can move past it and make progress.
How We Think About Data
Most of today’s industry leaders grew up in Industry 2.0 or 3.0: pre-cloud, siloed applications that ran on PCs and were targeted at specific user profiles. Think inspection, operations, maintenance, process engineering and the like. Unsurprisingly, we still think about data the way we always have. We pull data together outside of our systems in Excel, which is difficult and time-consuming.
We would like to have all our data in a common framework and easily accessible, but the task is daunting. A typical facility has dozens of databases and systems, all of which contain useful, but often overlapping, information. Furthermore, the accuracy and currency of that data need to be validated every time we want to use it.
Our data is also segregated because our first priority is the safety of our people, the environment and the equipment itself. So, we segregate network access to keep away evildoers. We add a safety layer, a control layer, a control support/optimization layer, and a business layer. A plethora of critical information is being generated at the safety and control layers, where the sensors are. However, many of us wishing to use the data operate at the business layer, further complicating and confusing the situation.
Technology Has Moved Forward... But We Haven’t
With the advent and widespread adoption of cloud computing, we have the technology now to bring the data together and “flatten” it back to a single layer where we can access it and use it to run the business.
We also have the ability to map the context of the data across data sources, in much the same way that Google Maps, Waze, and Facebook (Meta) do. These companies bring together multiple sources of information and provide access to the things that interest you in a user-friendly way, allowing you to spend less time looking and linking and more time solving.
Industries with simpler data structures, like finance and banking, have already made this shift. However, it’s more difficult for industrial companies to follow suit because of a much higher level of complexity. But the technology exists and has been maturing rapidly for several years now.
Our Thinking is Holding Us Back
Perhaps our thinking is holding us back. The usual questions paralyze us:
How do we start?
What information should we link together?
What are the biggest opportunities?
I see people getting trapped by a few common hurdles:
We can’t get started until we get the final structure perfect
My peers and I have wasted years of effort trying to decide what good looks like and to build the perfect structure, to the point of making no forward progress. Currently, I see a growing body of work trying to develop and deploy new industry standards.
While much of this will be necessary if we want to share data and get the most out of machine learning, it is not necessary to get it all right before we start. New technologies such as graph databases help us restructure the output format without having to go back and restructure the input formats. So, we can get started, make mistakes and adjust, with very little technical debt along the way.
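To make that concrete, here is a minimal sketch in Python, using networkx as a stand-in for a graph database. The systems, assets, and field names are hypothetical; the point is that each source keeps its own schema, and the graph simply links records that refer to the same asset:

```python
# Minimal sketch: linking two siloed extracts in a graph without
# restructuring either source. Systems, tags, and fields are hypothetical.
import networkx as nx

# Records as they arrive from two separate systems, schemas untouched
maintenance = [{"wo_id": "WO-1042", "asset": "P-101", "task": "seal replacement"}]
historian   = [{"tag": "P-101.DISCH_PRESS", "asset": "P-101", "avg_psi": 142.7}]

g = nx.Graph()
for wo in maintenance:
    g.add_node(wo["wo_id"], kind="work_order", **wo)
    g.add_edge(wo["wo_id"], wo["asset"])   # link the record to its asset
for tag in historian:
    g.add_node(tag["tag"], kind="tag", **tag)
    g.add_edge(tag["tag"], tag["asset"])   # the shared asset node ties silos together

# Query: everything we know about pump P-101, regardless of source system
print([g.nodes[n] for n in g.neighbors("P-101")])
```

If we later decide a better structure exists, we change how we traverse and label the graph, not how the source systems export their data.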
We’re thinking about continuous improvement and optimization without considering step-change improvement opportunities
Manufacturing and process industries of all types have been on continuous improvement journeys for a long time. For many of us, this is structured into our systems, processes and thinking. However, we don’t often think about how we might make a step change. After all, the availability of the process is designed in once we put capital equipment on the ground. The physics and chemistry of the process don’t change.
We haven’t had a technology breakthrough like a new catalyst, nanotechnology, or a sentient, all-knowing process control system yet. But what could we do differently if all our data were pooled together in a fast, scalable cloud infrastructure? Could we make better decisions by running simulations and models in parallel, then using computer processing power to help us review multiple options? Does this sound better than a single engineer running as many scenarios as they can think of and making a recommendation?
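Here is a back-of-the-envelope sketch of that idea in Python. The "model" is a hypothetical placeholder for a real process simulator, and the ranges and coefficients are invented, but the pattern of sweeping an operating envelope in parallel and ranking the results is the point:

```python
# Minimal sketch: scoring many operating scenarios in parallel instead of
# one engineer trying them sequentially. simulate() is a hypothetical
# stand-in for a real process simulation.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def simulate(scenario):
    feed_rate, temperature = scenario
    # Placeholder economics: returns a notional margin ($/hr)
    margin = feed_rate * 3.2 - abs(temperature - 410) * 1.5
    return scenario, margin

# 63 combinations of feed rate (t/hr) and temperature (F)
scenarios = list(product(range(80, 121, 5), range(380, 441, 10)))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, scenarios))
    best = sorted(results, key=lambda r: r[1], reverse=True)[:5]
    for (feed, temp), margin in best:
        print(f"feed={feed} t/hr, temp={temp} F -> margin ~${margin:,.0f}/hr")
```

The same pattern scales from the cores on a laptop to a cloud batch service; only the executor changes, not the engineering question.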
Can we link machine learning models to the logistics supply chain and get the right spare parts at the right time? Or, forecast plant run rate for downstream facilities? What about optimizing our people across shifts to impact plant performance? What ideas can you come up with that you always wanted to execute but felt you could not? Now, maybe you can. And it’s probably not that expensive to try.
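As one illustration of the spare-parts idea (the data, features, and numbers below are entirely hypothetical), even a simple count model over run-hours and condition data can turn historian records into a stocking forecast:

```python
# Minimal sketch: forecasting spare-part demand from equipment run data,
# so parts arrive before the failure does. All data here is hypothetical.
from sklearn.linear_model import PoissonRegressor

# Each row: [run_hours_since_overhaul, avg_vibration_mm_s]; target: seals used
X = [[1200, 2.1], [3400, 3.8], [5200, 5.5], [700, 1.9], [4100, 4.6]]
y = [0, 1, 2, 0, 1]

model = PoissonRegressor().fit(X, y)

# Expected seal consumption for a pump at 4,800 run-hours, vibration 5.0 mm/s
print(model.predict([[4800, 5.0]]))
```

The model is deliberately crude; the point is that the experiment fits in an afternoon, not in a capital project.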
The people running the plant and the people running the business are not well integrated
I have been in discrete manufacturing facilities where every scrap bin has a value on it so that the crew running the line can see the waste. Or where cost information is available in real time so production crews can see the impact they have and take immediate action to save money or increase production.
But I have rarely seen this in the Oil & Gas and Chemicals industries. I’ve never heard a maintenance team support running equipment at its extreme limits while margins are at their peak, understanding that even though the repairs will be more expensive and take longer, the company still wins by making more money. Nor have I seen a finance or production leader support spending more money, time and effort to repair something after running at the extreme conditions that made the company a lot of money.
By making more data available to the teams operating and supporting operations, we can give the workforce the visibility and transparency to engage them in making the best decisions for the company. In turn, the team gains business knowledge and experience, making them better employees.
I need one vendor to do it all for me
I hate to break it to you, but that company does not exist. We need to look at our business objectives, work with our existing partners, bring in a few new partners and start getting in the habit of working together. Without facilitation from the owners, most companies will not work well with each other.
There is always some overlap in capability. There are always going to be some competing priorities and agendas. The mindset of ‘all the service dollars for ourselves’ or ‘our competitor is going to steal our intellectual property’ is unhelpful. We have to get over it and work together to solve the big problems.
Everything is a major project
Executives can’t seem to function without a project management team. Big budgets, lots of project management overhead, lots of process and reporting. These things all slow down your data project.
Data projects are often exploratory and experimental, so agile work processes suit them much better: work can be demonstrated as it happens, and feedback is nearly immediate. When we go into project mode, everything takes 3+ years. In agile mode, those same data projects are finished in 3 to 6 months, leaving your team free to take on another project and bring more value back to your company.
What Are You Going to Do?
Examine your own organization and your own thinking. What can you do differently? Can you take a risk and bring a different perspective and thinking to your data projects? Will you take a chance on an emerging company with a leading technology that will enable you to use your data in ways that will change your position in the marketplace?
Are you willing to take some small personal risk to be the change catalyst?