You’re Building the Wrong Twin!

Everyone is talking about Digital Transformation, Industrial Analytics, IoT, and Industry 4.0. Much of the discussion advocates a crawl-walk-run approach: take on the low-hanging fruit of smaller, limited use cases through proofs of concept (POCs). This typically requires a great deal of data wrangling before the use case can even be executed, and more often than not, that data wrangling enables just one use case. So when things change operationally, applications fail, and they lack the scalability to be deployed enterprise-wide.

If you find yourself manually mapping data, building specialized integrations, or manipulating 40,000-row by 26,000-column spreadsheets, then you need to consider whether the solution will scale across your organization and give you the runway to adapt as your operations and use cases change over time.


Everyone recommends that in order to pursue use cases, you need a strong foundation. One thing is clear, however: use cases don't define a foundation.

Much of the underlying work in limited-scope projects goes into building Digital Twins, and Digital Twins mean different things to different people. Conventional thinking has defined a Digital Twin as a 3D software representation of a physical asset, perhaps augmented with a value reflecting its current status. Others have defined Digital Twins as highly complex mechanical or chemical models that can produce simulations. In either case, creating these 3D or simulation data models requires a tremendous amount of effort (often done in spreadsheets) to manually map data.

Building and maintaining these Digital Twins is the dirty little secret of Digital Transformation: if anything changes operationally, additional effort is required to reconfigure the twin, and without an up-to-date twin, the applications that depend on it fail. Additionally, only a limited set of use cases relate to 3D or simulation twins - and these don't attack the real problem of transforming your operations. Pursuing use case after use case incrementally has to stop, and so does the effort required to build and maintain your foundation. So what's the right path?

What is clear is that we need a use-case-independent model. To get there, we need a new type of Digital Twin: one built around aggregating your operational data into a common, enterprise-wide data model of your assets. Asset Twins do just that, existing as data representations of real equipment, assets, and process units.

Asset Twins target your operational challenges, acting as an information map that defines how production systems relate to each piece of instrumentation, each piece of equipment, or an entire process unit.

This information model can be leveraged to avoid the headaches of data wrangling, acting as a layer of abstraction that connects you to any data about your assets without your having to understand where the data lives, which system it belongs to, or who might consume it.
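As a rough illustration, that layer of abstraction can be sketched as a hierarchy of asset nodes that map logical attribute names to source-system references, so consumers query by asset path rather than by historian tag or database ID. All class names, paths, and tag references below are hypothetical, not part of any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """A node in the asset hierarchy: a site, process unit, or piece of equipment."""
    name: str
    children: list = field(default_factory=list)
    # Logical attribute names mapped to hypothetical source-system references,
    # e.g. {"discharge_pressure": "historian://PI/Tag1234"}.
    attributes: dict = field(default_factory=dict)

    def find(self, path):
        """Resolve a '/'-separated logical path like 'Unit1/Pump-101'."""
        if not path:
            return self
        head, _, rest = path.partition("/")
        for child in self.children:
            if child.name == head:
                return child.find(rest)
        return None

# Build a tiny hierarchy: consumers navigate by logical path,
# never by source-system tag names or IDs.
pump = Asset("Pump-101", attributes={"discharge_pressure": "historian://PI/Tag1234"})
unit = Asset("Unit1", children=[pump])
site = Asset("SiteA", children=[unit])

asset = site.find("Unit1/Pump-101")
print(asset.attributes["discharge_pressure"])  # historian://PI/Tag1234
```

The point of the sketch: when a tag is renamed or a source system is swapped out, only the `attributes` mapping changes; every consumer of the logical path keeps working.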

The Digital Twin Landscape


An asset-oriented information model like Asset Twins is needed to achieve transformation at scale. This simple shift in focus gives you extraordinary new power within the organization: you can create simulation or 3D models and make them available to anyone. You can also connect engineering analysis products like Seeq and allow anyone in your organization (e.g., Remote Ops) to perform the investigations they need. With a common data model in place, business analysts can gather insights about your operations without creating custom data pulls or translating what the data means to them. And if you're interested in Machine Learning and Data Science, that layer of abstraction, together with a templatized model of each equipment type, enables data science at scale with tools like Uptake that look across a fleet of like assets.
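To make "templatized model" concrete, here is a minimal sketch of the idea: an equipment template declares the attributes every asset of that type must expose, so one fleet-wide analysis runs unchanged across all conforming assets instead of requiring a custom data pull per asset. The template fields, asset records, and threshold below are invented for illustration only:

```python
# A hypothetical equipment template: every CentrifugalPump
# must expose the same logical attributes.
pump_template = {
    "type": "CentrifugalPump",
    "required_attributes": ["flow_rate", "discharge_pressure"],
}

# Two assets of the same type, with illustrative values.
fleet = [
    {"name": "Pump-101", "type": "CentrifugalPump",
     "flow_rate": 120.0, "discharge_pressure": 42.5},
    {"name": "Pump-205", "type": "CentrifugalPump",
     "flow_rate": 95.0, "discharge_pressure": 61.0},
]

def conforms(asset, template):
    """Check an asset instance against its equipment template."""
    return (asset["type"] == template["type"]
            and all(k in asset for k in template["required_attributes"]))

# One fleet-wide screen (arbitrary 50-unit threshold) instead of
# one bespoke analysis per asset.
flagged = [a["name"] for a in fleet
           if conforms(a, pump_template) and a["discharge_pressure"] > 50]
print(flagged)  # ['Pump-205']
```

Because every like asset conforms to the same template, the same screen, model, or notebook scales from one pump to the whole fleet with no per-asset rework.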