Digital Twins Beyond Manufacturing: Modeling Workflows as Living Graphs
Part 3 of the Adaptive Enterprise series. Digital twin technology has transformed aerospace and manufacturing. Here's what happens when you apply it to business operations.
Digital twin technology started in aerospace. NASA used paired systems, one physical and one virtual, to mirror spacecraft conditions and simulate scenarios during missions. The concept spread to manufacturing, where virtual replicas of production lines enable real-time monitoring, predictive maintenance, and process optimization.
But there’s a class of systems that nobody’s built digital twins for yet: the logical systems that run every organization. Workflows, approval chains, state machines, coordination protocols. These systems are just as complex as a production line, change just as often, and fail just as expensively. They just happen to be invisible.
From Physical Twins to Behavioral Twins
A physical digital twin mirrors a tangible asset: a turbine, a factory floor, a supply chain. The twin ingests sensor data, maintains a synchronized model, and enables simulation without touching the real system.
A behavioral digital twin mirrors a process. It captures how work actually flows through an organization: not how the process documentation says it should flow, but what actually happens. Which states do work items pass through? How long do they spend in each state? Where do exceptions cluster? What triggers human intervention? Where do items stall, loop back, or disappear?
The formal model is a directed graph. Nodes represent states: stages in a process, statuses of a work item, conditions of a case. Edges represent transitions: the events or actions that move work from one state to another. Each edge carries measured data: frequency, latency, cost, variance.
This graph is the behavioral digital twin. It’s a living model of operational reality, continuously updated from the event stream of actual work.
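As a minimal sketch, the graph can be aggregated directly from a stream of transition events. The event shape and field names here are hypothetical, not from the original:

```python
from collections import defaultdict

def build_behavioral_graph(events):
    """Aggregate (item_id, from_state, to_state, seconds) transition
    events into per-edge statistics: frequency and mean latency."""
    totals = defaultdict(lambda: {"count": 0, "seconds": 0.0})
    for item_id, src, dst, seconds in events:
        edge = totals[(src, dst)]
        edge["count"] += 1
        edge["seconds"] += seconds
    return {
        (src, dst): {
            "frequency": e["count"],
            "mean_latency_s": e["seconds"] / e["count"],
        }
        for (src, dst), e in totals.items()
    }

# Illustrative event stream for two work items.
events = [
    ("T-1", "draft", "review", 3600),
    ("T-1", "review", "done", 1800),
    ("T-2", "draft", "review", 7200),
]
graph = build_behavioral_graph(events)
```

Each new event updates an edge in place, which is what makes the model "living": the graph is always a fold over the event stream to date.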
Operational Entropy
Once you have a behavioral graph, you can measure something that’s been invisible to most organizations: operational entropy.
In information theory, entropy measures disorder or uncertainty. Applied to operational graphs, it quantifies how unpredictable the flow of work is. A perfectly efficient process, where every item follows the same path in the same time, has zero entropy. A chaotic process, where items bounce between states unpredictably, loop back frequently, and take wildly varying times, has high entropy.
Most organizations can’t measure this because they don’t have the graph. They have anecdotes. They know that “things are taking longer than they should” or that “the process feels broken” but they can’t quantify where the disorder lives or how it compounds.
With a behavioral twin, operational entropy becomes a concrete metric. You can decompose it by subprocess, by team, by time period. You can identify the specific transitions that contribute the most disorder. And you can measure whether structural changes (adding a state, merging two states, automating a transition) actually reduce entropy or just move it somewhere else.
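One way to make this concrete, as a sketch: compute the Shannon entropy of each state's outgoing transition distribution. A state that always routes work the same way scores zero; a state that splits work unpredictably scores high. The state names are illustrative:

```python
import math
from collections import Counter, defaultdict

def transition_entropy(transitions):
    """Per-state Shannon entropy (in bits) of outgoing transitions.
    transitions: iterable of (from_state, to_state) pairs."""
    outgoing = defaultdict(Counter)
    for src, dst in transitions:
        outgoing[src][dst] += 1
    entropy = {}
    for src, counts in outgoing.items():
        total = sum(counts.values())
        entropy[src] = -sum(
            (c / total) * math.log2(c / total) for c in counts.values()
        )
    return entropy

# "triage" always routes the same way (0 bits); "review" splits
# 50/50 between done and rework (1 bit of disorder).
h = transition_entropy([
    ("triage", "dev"), ("triage", "dev"),
    ("review", "done"), ("review", "dev"),
])
```

Summing (or frequency-weighting) these per-state values gives a single entropy figure for a subprocess, team, or time window, which is the decomposition described above.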
Transition Matrices and Absorption Time
The mathematical foundation here is Markov chain analysis. If you model the behavioral graph as a transition matrix, where each entry represents the probability of moving from one state to another, you can compute structural properties of the process.
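Estimating that matrix from the observed graph is just row normalization: divide each state's outgoing transition counts by the state's total. A sketch with hypothetical states and counts:

```python
import numpy as np

# Hypothetical states; the last two are terminal (absorbing).
states = ["intake", "review", "done", "cancelled"]
counts = np.array([
    [0, 8, 1, 1],   # intake  -> review/done/cancelled
    [2, 0, 7, 1],   # review  -> intake/done/cancelled
    [0, 0, 1, 0],   # done stays done
    [0, 0, 0, 1],   # cancelled stays cancelled
], dtype=float)

# Maximum-likelihood estimate: normalize each row into probabilities.
P = counts / counts.sum(axis=1, keepdims=True)
```

Each row of P is a probability distribution over next states, so every row sums to 1.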
The most useful is expected absorption time: given that a work item enters the process, how long does it take on average to reach a terminal state (completion, cancellation, etc.)? This is computed from the fundamental matrix of the absorbing Markov chain, and it gives you a single number that captures the structural efficiency of the entire process.
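The fundamental-matrix computation is short. With the matrix in canonical form (transient states first, absorbing states last), take the transient-to-transient block Q; then N = (I - Q)^-1 and N times a vector of ones gives expected steps to absorption from each transient state. The probabilities below are illustrative:

```python
import numpy as np

# Canonical form: transient states (intake, review) first,
# then absorbing states (done, cancelled). Values are illustrative.
P = np.array([
    [0.0, 0.8, 0.1, 0.1],
    [0.2, 0.0, 0.7, 0.1],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
Q = P[:2, :2]                       # transient-to-transient block
N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix
steps = N @ np.ones(2)              # expected transitions to absorption
```

Multiplying expected steps by measured per-transition latencies turns this into expected wall-clock completion time, the single number described above.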
Reducing absorption time is the operational equivalent of reducing latency in a network. Every structural modification to the workflow can be evaluated by its impact on expected absorption time. Did merging those two approval stages reduce expected completion time? Did adding that automated triage step change the absorption profile? The behavioral twin gives you the answer.
Event Sourcing as the Foundation
Building a behavioral twin requires a specific data architecture: event sourcing. Instead of storing the current state of each work item, you store the complete sequence of events that led to the current state. Every state transition, every timestamp, every actor involved.
This is more expensive in storage than a simple state-tracking database. But it provides something that state tracking can't: the ability to reconstruct the behavioral graph at any point in time. You can see how the process actually evolved, compare current flow patterns to historical ones, and detect structural drift: the gradual divergence between how the process is supposed to work and how it actually works.
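The core idea fits in a few lines: state is never stored, only derived by folding the log, and folding only events up to a timestamp reconstructs the twin at any past moment. The event shape here is a hypothetical sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    item_id: str
    to_state: str
    timestamp: float  # Unix seconds

def current_state(log, item_id):
    """Fold the full log to recover an item's latest state."""
    state = None
    for e in log:
        if e.item_id == item_id:
            state = e.to_state
    return state

def state_as_of(log, item_id, t):
    """Fold only events up to time t: the twin at a past moment."""
    state = None
    for e in log:
        if e.item_id == item_id and e.timestamp <= t:
            state = e.to_state
    return state

log = [
    Event("T-1", "draft", 0),
    Event("T-1", "review", 3600),
    Event("T-2", "draft", 100),
    Event("T-1", "done", 7200),
]
```

A conventional state-tracking table is equivalent to keeping only the result of `current_state` and discarding the log, which is exactly what makes historical reconstruction impossible there.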
Event sourcing also enables something critical for the adaptive loop: the ability to simulate the impact of proposed structural changes on historical data. Before you modify the live process, you can replay the event history through the proposed new structure and measure whether it would have improved absorption time, reduced entropy, or eliminated bottlenecks.
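A simple form of this replay, sketched here with hypothetical state names: express a proposed change as a mapping from old states to new ones (for example, merging two approval stages), push the historical transitions through it, and recount the edges. Transitions that become self-loops under a merge disappear:

```python
from collections import defaultdict

def replay(transitions, merge_map):
    """Replay historical (from_state, to_state) transitions through a
    proposed structure that merges states; self-loops created by a
    merge are dropped."""
    counts = defaultdict(int)
    for src, dst in transitions:
        src2 = merge_map.get(src, src)
        dst2 = merge_map.get(dst, dst)
        if src2 != dst2:
            counts[(src2, dst2)] += 1
    return dict(counts)

history = [
    ("draft", "legal_review"), ("legal_review", "finance_review"),
    ("finance_review", "done"), ("draft", "legal_review"),
    ("legal_review", "done"),
]
# Proposal: collapse the two approval stages into one.
proposed = replay(history, {"legal_review": "review",
                            "finance_review": "review"})
```

Re-running the entropy and absorption-time computations on the replayed counts then answers, before touching the live process, whether the merge would actually have helped.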
Where This Goes Next
The behavioral digital twin is the observation layer of the Adaptive Enterprise. It answers the question: how does work actually flow? In Part 4, we'll build on this foundation with the blueprint DSL: the declarative schema language that makes operational structures formally modifiable rather than implicitly hardcoded.