The Evolution of Digital Twins for Asset Operators

Note: This is Part 2 of a 3-part blog series

In Part I of this series on Digital Twins (DTs), Andy crafted a great explanation of what a DT is, why it's needed, and the guiding principles for building DTs for Asset Operators. In Part II, I'll address how DT adoption will evolve for industrial asset operators.

To refresh, a digital twin is a dynamic digital representation of the physical environment. We're most familiar with DTs from the consumer world as the apps on our phones that adjust a Nest Thermostat, change the color of Philips Hue lights, show the activity captured by a Fitbit, or summon a Tesla.

The underlying data models that enable these mobile applications are digital twins. (Pictured, from left to right: Philips Hue, Fitbit, and Tesla Model S.)

Unfortunately, for asset operators, these app-based consumer DTs don't scale to the hundreds of thousands, even millions, of measurement data streams required to operate modern industrial equipment and assets, whether it's a discrete manufacturing line, an oil refinery, or a large-scale film production studio.

Two major trends, while creating a lot of marketing buzz, are pressuring asset operators and their supporting IT organizations to act:

  1. Greater connectivity of our equipment with more sensing, typically referred to as “IIoT” or the Industrial Internet of Things

  2. Cheaper compute and storage allowing for more powerful analysis and operational improvement, a trend typically referred to as “Industry 4.0” or “Digital Transformation”

To connect those two trends, you need digital twins. But which digital twin should asset operators pursue, and how should they get started? We see DTs evolving in stages, and the good news is that, through past investments, many companies already have the ingredients in place to begin their journey (and many have already started without realizing it).

The Past

[Figure: Status Digital Twin]

Stage 1 - Status-Only Twin: Ironically, these consumer digital twins took a page from decades of work on industrial automation. In fact, many asset operators already have digital twins - called Status Digital Twins - which provide a fixed schema for viewing the current readings of a piece of equipment or device. Great examples of status digital twins in the industrial world are asset registries, SCADA systems, process models for simulations, and multivariate process control.

All of these have been around for 20+ years and require someone to manually create a schema of the physical environment. More often than not, our operational data is stuck within a flat reference file where engineers and technicians need to memorize the name of every single instrumentation signal, resorting to sensor naming conventions to help manage the data. When we do create the Status DT, it requires manually mapping every single sensor in the instrument registry to a process drawing, just to build a single process screen. The effort to create a new screen or report can take months of work.

Just to see “the current voltage across all pumps for the past 3 months”, you need to build a whole new schema and data model mapping all the voltage sensor readings across your tens of thousands of pumps. This will take months to build, so most assume it’s not possible. Traditionally, Status DTs have taken this approach wherein the use case defines the data model. This approach doesn’t scale because you can’t have a brand new data model for every new question. Why can’t industrial data analytics work like the rest of the general analytical tools out there?
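To make that concrete, here is a minimal sketch of the flat-registry world described above, in Python, with entirely hypothetical tag names and readings. Nothing in a tag tells you which asset it belongs to or what it measures, so even a simple cross-asset question starts with someone hand-curating a list of tags:

```python
# A minimal sketch of a flat sensor registry. Tag names and values are
# hypothetical; the point is that the tags alone carry no asset context.
flat_registry = {
    "31-PT-1044": [101.2, 100.8, 99.7],   # pressure on... which asset? you have to know
    "31-VT-1044": [440.1, 439.8, 441.0],  # voltage, probably the same pump
    "32-VT-2107": [438.9, 440.2, 439.5],  # voltage on a pump in another unit
    "32-TT-2107": [74.0, 73.8, 74.1],     # a temperature reading
}

# "Voltage across all pumps" requires already knowing every tag that is
# (a) a voltage sensor and (b) attached to a pump -- a hand-built data model
# for this one question, repeated for every new question.
pump_voltage_tags = ["31-VT-1044", "32-VT-2107"]  # curated manually

for tag in pump_voltage_tags:
    readings = flat_registry[tag]
    print(tag, round(sum(readings) / len(readings), 1))
```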

The Present

Today, many in the industrial sector have begun the “analytics” path by pursuing Simulation Digital Twins, which require physics-based models to simulate how equipment operates. Even OEMs have begun selling these physics models as a service. When a physics model for a piece of equipment stands on its own and is not related to any other equipment, this approach works incredibly well (e.g. onshore oilfield pumps, windmills, locomotives, etc.). For more complex processes and assets - the majority of industrial operations - this approach doesn't scale because the models are incredibly hard to create, let alone maintain.

Accordingly, we see a different second stage in the DT evolution as operators move from basic Status DTs into a Digital Transformation that requires subsequent DT stages.

[Figure: Operational Digital Twin]

Stage 2 - Operational Twins: We’re often asked, “How do I extract greater value from the operational data I’ve gone to great cost to collect and store?” Our answer is to start the journey with an Operational Digital Twin, which connects a flexible, community-managed data model to high-fidelity, real-time and historical data to support general-purpose analytics in tools like PowerBI or Tableau. This avoids having to build custom data models for each analytical activity. A truly capable Operational DT must be built on a community-managed, flexible data model that allows anyone (with appropriate permissions) to add their own context to it (we spent an entire blog explaining why a more flexible, graph-based data model is required for this).

In the industrial world, the Operational DT is the basis of all future work, because it is the data model. A flexible data model, along with some level of data integrity, allows operators to quickly serve data to the appropriate constituents and applications no matter the analytical use case. All of a sudden, data becomes a massive asset for the organization and begins to take on increasing gravity of its own.
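One way to picture the flexible, graph-style data model behind an Operational DT is the sketch below. The structure and names are illustrative only, not Element's actual schema: assets and sensors are nodes, the relationships between them carry the context, and a cross-asset question becomes a traversal instead of a new schema.

```python
# An illustrative graph-style operational data model: assets and sensors are
# nodes, and the relationships between them carry the context.
assets = {
    "pump-1044": {"type": "pump",      "site": "unit-31"},
    "pump-2107": {"type": "pump",      "site": "unit-32"},
    "hx-0093":   {"type": "exchanger", "site": "unit-32"},
}

sensors = {
    "31-VT-1044": {"asset": "pump-1044", "measures": "voltage"},
    "32-VT-2107": {"asset": "pump-2107", "measures": "voltage"},
    "32-TT-2107": {"asset": "pump-2107", "measures": "temperature"},
    "32-TT-0093": {"asset": "hx-0093",   "measures": "temperature"},
}

def sensors_for(asset_type, measurement):
    """Traverse the model: sensors of a given measurement type on a given asset class."""
    return [
        tag for tag, s in sensors.items()
        if s["measures"] == measurement and assets[s["asset"]]["type"] == asset_type
    ]

# "Voltage across all pumps" is now a traversal, not a new data model...
print(sensors_for("pump", "voltage"))        # ['31-VT-1044', '32-VT-2107']
# ...and the same model answers a different question with no extra mapping work.
print(sensors_for("pump", "temperature"))    # ['32-TT-2107']
```

Anyone with the right permissions can add a node or relationship, which is what makes the model community-managed rather than a one-off mapping exercise.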

Stage 3 - Operational Twins with Events: We’ve also begun to see a new, more advanced Operational Digital Twin emerge in the market - one where events have been labeled on the Operational DT. An event, or labeled time-frame, unlocks machine learning (ML) capabilities. In the ML world, labeled data is what makes supervised learning possible (ML works better with the “right data”, not just “more data”). With labeled events, ML algorithms know what to look for, and can begin to uncover insights into what may be happening during an event, what happened just prior to it, and even what the leading indicators of a future event may be.

Because the Operational Digital Twin is the foundation for these labeled events, the ML analysis can run across thousands of sensors and over a decade's worth of data (sometimes we’ve seen companies provide only events, without the data model behind them - limiting analytical capabilities to just a few sensors). With the ability to analyze thousands of sensors and decades of data, identifying what is causing an event, or predicting months ahead of time when a future event may occur, actually becomes a reality. One of the things I’m proudest of here at Element is that we’re helping many companies achieve these ML-analytical capabilities today, so they can increase throughput, decrease downtime, and most importantly, prevent hazardous events.
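Here's a minimal sketch, in Python, of why labeled events unlock supervised ML. The data is synthetic and the sensor count is tiny compared with a real plant, but the workflow is the same: operator-supplied event windows become labels, and an off-the-shelf model can then search every sensor for patterns around those windows.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical sensor history: rows are hourly snapshots, columns are sensors.
rng = np.random.default_rng(0)
readings = rng.normal(size=(1000, 40))        # 1000 hours x 40 sensors (synthetic)

# Labeled events: operator-supplied time windows (start_hour, end_hour) during
# which a known upset occurred. These labels turn raw history into supervised data.
event_windows = [(120, 140), (610, 650)]

labels = np.zeros(len(readings), dtype=int)
for start, end in event_windows:
    labels[start:end] = 1

# With labels in place, an off-the-shelf classifier can look across every
# sensor at once for patterns that accompany (or precede) the event.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(readings, labels)

# Sensors the model leans on most are candidate leading indicators to investigate.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("candidate indicator sensors:", top)
```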


The Future

[Figure: Simulation Digital Twin]

Stage 4 - Operational + Simulation Twins: Once the Operational Digital Twin is well established, you’ll want to augment your ML capabilities with both process and physics information - what we and others call the Simulation Digital Twin, because it simulates the operation of a piece of equipment or device through physics and process models. A great way to think about physics-based models vs. data-driven models is shown in the figure below:


The best part of having an Operational Digital Twin underlying the physics and process information is that you don’t need a fully comprehensive physics model. The physics and process models actually provide the necessary feature engineering required to amplify the results of ML-based analytics - so the more you layer in over time, the better your operational and simulation models will become.
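As an illustration of physics models acting as feature engineering, here's a small sketch using a simplified pump relationship (hydraulic power as flow times pressure rise). The numbers and names are hypothetical; the point is that even a partial physics model yields engineered features the raw signals don't contain on their own.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Raw sensor streams for one pump (synthetic, illustrative values).
flow_m3_s = rng.uniform(0.05, 0.08, n)     # volumetric flow, m^3/s
dp_pa     = rng.uniform(3.0e5, 4.0e5, n)   # pressure rise across the pump, Pa
elec_kw   = rng.uniform(25.0, 35.0, n)     # electrical power draw, kW

# Physics-derived features: hydraulic power (flow * pressure rise) and a rough
# efficiency estimate. A partial model like this is not a full simulation, but
# it encodes domain knowledge the raw signals don't.
hydraulic_kw = flow_m3_s * dp_pa / 1000.0
efficiency   = hydraulic_kw / elec_kw

# The engineered columns sit alongside the raw ones as ML inputs.
features = np.column_stack([flow_m3_s, dp_pa, elec_kw, hydraulic_kw, efficiency])
print(features.shape)   # (500, 5)
```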

Based on your business and operations you may choose to develop a Simulation Digital Twin before building an Operational Digital Twin with events. We’ve typically seen better outcomes with operational twins over simulation twins, and operational twins are easier to maintain. That said, the two together are more powerful than each in isolation.

Stage 5 - Twins with Business Models: With both the Operational and Simulation Digital Twins in place, it becomes a lot easier for engineers to improve operations. The next step is to connect those improvements to the income statement and balance sheet. To improve profitability and reduce risk, we need to store the relationship between the digital twin and financial models, people, and even process hazard information (e.g. a Process Hazard Analysis, or PHA). With this information, it becomes easier to use machine learning and analytics to support decisions and improve overall operations.
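A minimal, hypothetical sketch of what attaching business context can look like: each labeled event is linked to an estimated financial impact, so a question like “what did unplanned downtime cost us?” becomes a query against the twin rather than a spreadsheet exercise (all figures below are invented).

```python
# Illustrative only: labeled events linked to a per-asset financial model.
events = [
    {"asset": "pump-1044", "kind": "unplanned_downtime", "hours": 6.0},
    {"asset": "pump-2107", "kind": "unplanned_downtime", "hours": 2.5},
    {"asset": "pump-1044", "kind": "planned_maintenance", "hours": 4.0},
]

# Hypothetical lost margin per hour of downtime, by asset ($/hour).
lost_margin_per_hour = {"pump-1044": 1800.0, "pump-2107": 950.0}

# "What did unplanned downtime cost?" becomes a query over the linked data.
unplanned_cost = sum(
    e["hours"] * lost_margin_per_hour[e["asset"]]
    for e in events
    if e["kind"] == "unplanned_downtime"
)
print(f"unplanned downtime cost: ${unplanned_cost:,.0f}")
```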

[Figure: Autonomous Digital Twin]

Stage 6 - Autonomous Twins: The last stage of the DT evolution is the Autonomous Digital Twin (often called a “cognitive twin”), which is the overarching twin that controls the equipment/devices, with decision making and control managed by software. We don’t see this happening for several years, but it has become in vogue in some circles, given recent developments in the automotive sector. While a motor vehicle digital twin is much simpler than an industrial digital twin (as we established earlier in this blog), the Society of Automotive Engineers (SAE) has published a method of measuring autonomy for motor vehicles that is a great guide for thinking about the levels of autonomy that will emerge for industrials. Today, industrials operate at level 2 or 3 autonomy, but who is to say that over the coming decades they won’t evolve to level 4 or possibly level 5 autonomy?

[Figure: Digital Twin evolution roadmap]

After Stage 5 of the DT evolution, we may begin to see the digital twin evolve from just supporting decisions to actually making decisions about operations. This digital twin must be trained in the cloud (where scalable compute is available) but be allowed to execute at the site, or “the edge”, which requires the right hardware to make these low-latency decisions. Initially, there will still be a human in the loop, or what the SAE calls level 4 autonomy. However, as humans take the appropriate actions around ambiguous events, as more sensing is added, and as the simulation digital twins take a Carcraft approach of cycling through the same scenarios hundreds of thousands of times, at some point we may begin to see full level 5 autonomy.
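One way to picture the “train in the cloud, act at the edge, human in the loop” pattern is the sketch below. The thresholds, names, and actions are purely illustrative: below a confidence threshold the edge model only alerts an operator (the human-in-the-loop stage described above), and only above it would it act on its own.

```python
# An illustrative decision gate for a model trained in the cloud and executed
# at the edge. Thresholds, names, and actions are hypothetical.
CONFIDENCE_THRESHOLD = 0.95   # below this, defer to a human operator

def recommend_action(model_score: float) -> str:
    """model_score: the edge model's confidence that intervention is needed."""
    if model_score < 0.5:
        return "continue"                    # normal operation, no action
    if model_score < CONFIDENCE_THRESHOLD:
        return "alert_operator"              # human in the loop decides
    return "execute_setpoint_change"         # fully autonomous action

for score in (0.2, 0.8, 0.99):
    print(score, "->", recommend_action(score))
```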

Interested in what it takes to build Digital Twins? Sean wrote a great blog describing the Architecture of a Digital Twin Service.