Why Asset Operators Should Have Their Own Digital Twins

Note: This is Part 1 of a 3-part blog series

Digital Twins are a hot topic, landing on tech analyst Gartner's famed Hype Cycle and named a Top Ten Strategic Technology Trend for 2017. Not a new idea (credit to Dr. Michael Grieves, 2002, University of Michigan), Digital Twins are now at the forefront of the Industrial Internet of Things (IIoT) zeitgeist because they are essential to unlocking IIoT value: they enable analysis, simulation, and control of physical things and systems.

"Within three to five years, billions of things will be represented by digital twins, a dynamic software model of a physical thing or system."
Kasey Panetta, Gartner, "Gartner's Top 10 Strategic Technology Trends for 2017"

What’s a Digital Twin?

The Digital Twin (DT) is a dynamic digital representation of a physical piece of equipment, or thing, and its associated environment. Every DT has a dynamic data model containing anywhere from a few to thousands of data attributes of the physical thing or system it represents. Dynamic attributes are fed by sensors measuring temperature, pressure, and other physical variables, so the twin reflects real-world operating conditions; static attributes capture fixed values like the installation date or the OEM.

A DT can represent a single physical thing, or it can comprise multiple nested twins that provide narrower or wider views across equipment and assets based on the process or use case. For example, a complex asset like an oil refinery can have a DT for a compressor motor, the compressor, the process train served by the compressor, and the entire multi-train plant. Depending on its size, the refinery could have anywhere from 50,000 to 500,000 sensors taking measurements represented in the DT. At the end of the day, digital twins provide the schemas required to easily compare and benchmark like equipment against one another, helping the user or operator understand what's operating well and what's not.
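To make that concrete, here is a minimal sketch in Python of what such a nested twin data model might look like. The class, attribute names, and historian tags are our own illustrative choices, not any vendor's schema:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    """Minimal illustrative twin: static metadata, sensor-fed
    dynamic attributes, and nested child twins."""
    name: str
    static: dict = field(default_factory=dict)    # e.g. installation date, OEM
    sensors: dict = field(default_factory=dict)   # attribute name -> historian tag
    children: list = field(default_factory=list)  # nested twins (motor inside compressor)

# Nesting: motor -> compressor -> process train -> refinery
motor = DigitalTwin(
    "compressor-motor-M-101",
    static={"oem": "ExampleCo", "installed": "2009-04-17"},  # hypothetical values
    sensors={"winding_temp_C": "34TI-1102.PV"},              # hypothetical tag name
)
compressor = DigitalTwin(
    "compressor-K-101",
    sensors={"discharge_pressure_bar": "34PI-1201.PV"},
    children=[motor],
)
train = DigitalTwin("process-train-A", children=[compressor])
refinery = DigitalTwin("refinery", children=[train])
```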

What are the types of Digital Twins?

Just as the size and complexity of DTs vary, so do their functions and lifecycles. We like to think of three types of Digital Twins:

  • Status Twins originate in the earliest design stages of the product cycle and mostly represent consumer products like a connected home or connected car. Data from PLM systems is a major input, and use cases are typically device management, product control, and product quality. Most Status Twins have short service lives compared to industrial assets.
  • Operational Twins enable industrial organizations to improve the operations of their complex plant and equipment, supporting the work of engineers (process, reliability, etc.) and data scientists doing analytics and lifecycle operations. Operational DTs may inherit data from a Status DT and may also include machine learning analytical models. Dan Miklovic at LNS Research calls these "Smart-Connected Asset" DTs. Operational DTs have long lifecycles and change over time as the underlying assets change.
  • Simulation Twins replicate equipment/device behavior and contain built-in physics models, and even process models for what's connected to the equipment. Simulation Twin use cases include simulating how equipment performs under varying conditions, operator training, and virtual reality.

Other twins (the cognitive twin or autonomous twin) are beginning to receive attention and tend to be amalgamations of those listed above. With that general DT overview, let's unpack the Operational Twin.

Why do I need an Operational Digital Twin?

Operations teams are looking for ways to improve asset utilization, cut O&M costs, optimize capital spend, and reduce health, safety, and environmental incidents. At the heart of every company's digital transformation is a desire to achieve these objectives through analytical solutions that improve operations through increasingly sophisticated technologies that can augment, and even act as proxies for, engineers and technicians. But getting there is really hard, whether it's deploying basic analytics like BI, sophisticated ML-driven analytics, or IIoT applications through platforms provided by OEM vendors like GE, Honeywell, and ABB. The biggest challenge is sensor data, which is mostly locked up in process historian systems and stored in a flat format with no context, making analytics next to impossible. This data must be modeled and then kept continuously up to date to reflect the underlying state of the equipment and its associated asset.

An Operational DT solves this challenge by enabling the federation of data via the DT's data model, which is built using metadata associated with the physical equipment. Once this data model exists, any operations data represented by its metadata in the DT can be shipped to a data lake in an organized way for analytics. The DT's data model can also be published at the edge to enable shipping data from one system to another, for example from a PI historian to an IIoT application like GE Predix APM. The Operational DT also allows for continuous maintenance of the operations data to reflect constantly changing real-world conditions.
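As a rough illustration of that federation step, the sketch below uses the twin's metadata to turn flat historian tags into contextualized records ready for a data lake. The tag names and fields are invented, and `contextualize` stands in for whatever enrichment step your pipeline uses:

```python
# Hypothetical mapping from flat historian tags to twin metadata;
# tag names, assets, and fields are invented for illustration.
TAG_MAP = {
    "34TI-1102.PV": {"asset": "compressor-K-101", "component": "motor",
                     "attribute": "winding_temp_C", "unit": "degC"},
    "34PI-1201.PV": {"asset": "compressor-K-101", "component": "casing",
                     "attribute": "discharge_pressure_bar", "unit": "bar"},
}

def contextualize(tag, timestamp, value):
    """Wrap a raw historian sample in the twin's metadata so it lands
    in the data lake organized by asset and attribute, not by cryptic tag."""
    record = {"tag": tag, "ts": timestamp, "value": value}
    record.update(TAG_MAP[tag])  # add asset/component/attribute/unit context
    return record

print(contextualize("34TI-1102.PV", "2017-06-01T12:00:00Z", 78.3))
# {'tag': '34TI-1102.PV', 'ts': '2017-06-01T12:00:00Z', 'value': 78.3,
#  'asset': 'compressor-K-101', 'component': 'motor',
#  'attribute': 'winding_temp_C', 'unit': 'degC'}
```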

[Figure: Digital Twin of a plant]

What should Asset Operators keep in mind for the Operational DT?  

We have five ideas.

1. Leave No Data Behind. Data is the foundation for the DT, so bring it all together from the following sources:

  • Time series data from data historians, IoT hubs/gateways, and telematics systems;
  • Transactional data residing in Enterprise Asset Management, Laboratory Information Management, Field Service Management, and similar systems; and
  • Static data from spreadsheets, especially those left behind by the EPCs who built the plants, and Process Hazard data.

Federating as much data as possible via the DT will improve the value of IIoT analytics and applications, and also reveal new and valuable information about the physical twin that was previously unavailable. For example, a DT containing maintenance, equipment, sensor, and process hazard data on a critical process can give operators brand-new insight into the state of maintenance on critical equipment and how it relates to high-risk process safety hazards.
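As a toy illustration of that kind of cross-source insight (equipment IDs, columns, and values are invented for the example), the sketch below joins maintenance records and process hazard ratings on a shared equipment ID, the kind of key the DT's data model provides:

```python
import pandas as pd

# Invented example data; in practice these would come from the EAM
# system and process hazard analysis spreadsheets, keyed by the DT.
maintenance = pd.DataFrame({
    "equipment_id": ["K-101", "P-205"],
    "open_work_orders": [3, 0],
})
hazards = pd.DataFrame({
    "equipment_id": ["K-101", "P-205"],
    "hazard_rating": ["high", "low"],
})

# The DT's shared equipment_id key is what makes this join trivial.
view = maintenance.merge(hazards, on="equipment_id")
high_risk_backlog = view[(view.open_work_orders > 0) &
                         (view.hazard_rating == "high")]
print(high_risk_backlog)  # K-101: open work orders on a high-hazard asset
```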

2. Standardize Equipment Templates Across the Enterprise. The starting point for building Operational DTs is the equipment template, which allows for modeling the equipment, its sub-components, associated sensors, sensor attributes, and other related metadata like the equipment's functional location. Asset operators too often rely on the OEM model, or try to keep the model limited to only the data streams they presume they need. Instead, use standard templates by target equipment type for every DT. This allows for easier analytics across all compressors, pumps, motors, etc., provides a view of instrumentation coverage across each, and enables performance benchmarking that compares different OEMs.
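One way to picture such a template (the fields shown are illustrative, not any industry standard) is as a reusable spec that every compressor twin instantiates, which in turn makes checks like instrumentation coverage a one-liner:

```python
# Illustrative, non-standard template applied to every centrifugal compressor.
COMPRESSOR_TEMPLATE = {
    "class": "centrifugal_compressor",
    "static_fields": ["oem", "model", "installed", "functional_location"],
    "sub_components": ["motor", "gearbox", "seals"],
    "expected_sensors": ["suction_pressure_bar", "discharge_pressure_bar",
                         "vibration_mm_s"],
}

def instrumentation_coverage(template, live_attributes):
    """Fraction of the template's expected sensors this unit actually has,
    which makes coverage easy to benchmark across all compressors."""
    expected = set(template["expected_sensors"])
    return len(expected & set(live_attributes)) / len(expected)

# A unit missing its vibration sensor scores 2/3 coverage.
print(instrumentation_coverage(
    COMPRESSOR_TEMPLATE,
    ["suction_pressure_bar", "discharge_pressure_bar"]))  # ~0.67
```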

3. Insist on Flexibility. Those in industrial companies responsible for delivering Operational DTs have a huge challenge. Equipment and assets must be represented not only hierarchically (compressor motor up to compressor) but also across a process. Compounding this challenge, different data off-takers want to consume the data differently: what's good for the reliability engineer, who wants a hierarchical view of target equipment at one or multiple sites, doesn't work for the process engineer, who needs to see the data across multiple units within an overall process. DTs should be flexible enough to handle both hierarchical and process representations of equipment and assets, yet remain easy to manage to meet the needs of each data consumer.
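One common way to get that flexibility (a sketch under our own assumptions, not a prescription) is to keep equipment as shared nodes and treat the asset hierarchy and the process flow as separate edge sets over the same nodes:

```python
# Equipment nodes are shared; each "view" is just a different edge set
# over the same nodes (all names are invented for the example).
asset_hierarchy = {            # reliability engineer's view: parent -> children
    "unit-34": ["K-101", "P-205"],
    "K-101": ["K-101-motor"],
}
process_flow = {               # process engineer's view: upstream -> downstream
    "P-205": ["K-101"],        # the pump feeds the compressor
}

def related(view, node):
    """Walk whichever representation the data consumer asked for."""
    return view.get(node, [])

print(related(asset_hierarchy, "K-101"))  # ['K-101-motor']
print(related(process_flow, "P-205"))     # ['K-101']
```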

4. Integrity Is Integral. Asset operators deploying analytics and applications will struggle to get their engineers to adopt these new tools if they can't assure the integrity of the data, especially the sensor data, feeding them. Engineers understand the serious drift issues associated with sensor data and need to know they can trust it. Operational DTs should establish and maintain the ongoing integrity of the time series data feeding the twin, including the ability to manage changes to the tags feeding the DT.
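As a minimal illustration of the kind of integrity check this implies (the baseline, threshold, and sample values are invented), the sketch below flags a sensor whose recent readings have drifted away from its calibrated baseline:

```python
from statistics import mean

def has_drifted(readings, baseline, tolerance=0.05):
    """Flag a tag whose recent mean has moved more than `tolerance`
    (as a fraction) away from its calibrated baseline value."""
    recent = mean(readings[-100:])  # average of the last 100 samples
    return abs(recent - baseline) / baseline > tolerance

# A temperature tag calibrated at 75.0 degC now reading around 80 degC.
print(has_drifted([80.1, 79.8, 80.3], baseline=75.0))  # True -> investigate
```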

5. Leverage the Cloud. Cloud-enabled data infrastructure provides the scale required to build and maintain Operational DTs, which can then be published to the edge for streaming analytics. The cloud also allows DTs to be built and shared across the enterprise, and enables analytics that compare processes and performance and share best practices enterprise-wide.