In my previous post, “How a Supply Chain Digital Hub can Drive Post-pandemic Supply Chain Resiliency?” I outlined what an analytic platform for the supply chain would have to accomplish to enable more resiliency, both in the operation of the supply chain and in the underlying data and analytics. Your organization may not call it a Digital Hub, but for the purposes of this blog I’ll refer to it as a Supply Chain Digital Hub. In this post I’ll outline the main components of the Supply Chain Digital Hub, the problems this approach avoids, and the resulting benefits.
Here are the main components of the Supply Chain Digital Hub:
A Supply Chain Business Metamodel: Foundational to a model-driven supply chain domain is the metamodel and its associated methodology for visually mapping the enterprise’s end-to-end supply chain (or a selected scope of it). The metamodel provides a way to translate the structure of the physical world into the digital world. It drives the automatic generation of data structures and application behaviors, and can be regenerated automatically to accommodate changes in the physical world.
A Supply Chain Object Model: A methodology for specifying the data, data structures, and analytic behaviors of each object. It allows supply chain analytic behaviors (calculations for KPIs, metrics, predictive models, and other analytic application behaviors) to be managed in a microservices environment.
Supply Chain Data Objects and Analytic Model Objects, Instantiated: The actual data, sourced and curated (material movements, orders, plans, product transformations, and all associated information such as quantity, date, cost, revenue, and energy).
A Supply Chain Data Connector Toolset: Allows for connection and streaming data acquisition of supply chain business event data from operational sources (blockchain networks, external trading partner systems, and internal company systems).
Supply Chain Data Source Translation Rules: Allows for the convergence, quality assurance, and control of data as it is sourced and curated.
As a Service: Allows the supply chain organization to choose whether it invests in infrastructure or simply pays as it goes.
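To make the metamodel idea above concrete, here is a minimal, hypothetical sketch (all names and attributes are illustrative, not part of any product): a declarative model of supply chain object types drives the generation of the data structures, so a change in the physical world becomes a model edit rather than a code rewrite.

```python
from dataclasses import make_dataclass

# Illustrative metamodel: each entry names a supply chain object type
# and the attributes its data records carry.
METAMODEL = {
    "Shipment": ["origin", "destination", "quantity", "ship_date", "cost"],
    "Order": ["customer", "product", "quantity", "due_date", "revenue"],
}

def generate_record_types(metamodel):
    """Generate one dataclass per object type in the metamodel.

    When the physical supply chain changes, we edit the metamodel and
    regenerate: the data structures follow the model, not vice versa.
    """
    return {name: make_dataclass(name, fields)
            for name, fields in metamodel.items()}

types = generate_record_types(METAMODEL)
shipment = types["Shipment"]("PlantA", "DC-7", 120, "2023-05-01", 4400.0)

# Tracking a new attribute (e.g., energy used in transit) is a model
# change, not a code change to every downstream application.
METAMODEL["Shipment"].append("energy")
types = generate_record_types(METAMODEL)
```

This is only a sketch of the model-driven principle; a real digital hub would generate database schemas and analytic behaviors, not just in-memory record types.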
Now let’s link this technology back to business problems like resiliency, cost, quality, and complexity. Supply chains and supply chain analytics must be built to withstand change and disruption. But why do supply chain applications (like visibility, alerting, and orchestration) break in the first place? When supply chain operations change (we change shipping routes, add a new supplier or processing center, or historical lead times shift), our analytic assumptions, scenarios, and planning models need to shift automatically to accommodate those changes. If our systems are slow to change, we cannot respond quickly using our valuable data to drive decisions. That leads to sub-optimal decision-making and potentially increased costs, poor customer service, and inventory shortages or spoilage.
How do we address these issues? A few ways:
Controlling App Creep: Today we are seeing a proliferation of supply chain apps: planning apps, control towers, alerting and visibility apps. Each app brings its own redundant application-specific data store. The impact of app creep is higher cost of application development to support the business, higher cost to maintain these apps, and the potential for conflicting results caused by data that do not reconcile between apps.
Solution: A converged, managed domain data repository allows companies to build reusable data products: each can power multiple supply chain visibility applications and other analytic uses.
Result: Redundancy is reduced. The effort to curate the data is invested once, so the incremental cost of each application drops, along with its time to market.
Reducing redundant/duplicated effort: Each time a new supply chain app is created, it potentially duplicates a portion of the data and, worse, introduces the potential for mismatched and erroneous results.
Solution: The digital hub approach for supply chain supports reusable data products. Building applications from pre-certified data products lets analytic application engineers quickly produce supply chain applications that do not get out of sync with the real world, because the data heavy lifting has already been done by standardizing on common object/data models.
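A tiny, hypothetical sketch of the data-product idea (the dataset and app functions are illustrative): several analytic applications consume one curated dataset instead of each maintaining its own copy, so their results reconcile by construction.

```python
# Hypothetical reusable "data product": one curated, certified dataset
# that multiple analytic applications consume.
shipments = [
    {"origin": "PlantA", "destination": "DC-7", "quantity": 120, "late": False},
    {"origin": "PlantB", "destination": "DC-7", "quantity": 80, "late": True},
]

def visibility_app(data):
    """One consumer: total units in motion per destination."""
    totals = {}
    for s in data:
        totals[s["destination"]] = totals.get(s["destination"], 0) + s["quantity"]
    return totals

def alerting_app(data):
    """Another consumer: flag origins of late shipments, from the
    same certified data the visibility app uses."""
    return [s["origin"] for s in data if s["late"]]
```

Because both apps read the same curated source, a quantity correction made once in the data product flows to every consumer; there is no second copy to drift out of sync.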
Result: Reducing technical debt and improving quality/accuracy.
Reducing Breakage/Brittleness: How do we cope with the constant “breakage” of systems and data structures that get out of date and out of sync as our supply chain changes and reconfigures (locations, products, suppliers, transportation modes, etc.)? The technical debt of this situation, along with the business risk of not having an accurate picture of supply chain activities, makes it important to address.
Solution: The domain-specific digital hub described above is model-driven: the mapping between the real supply chain and its model is dynamic (it changes when the world changes), allowing the data organization and analytic calculations to be reconfigured automatically.
Result: Improving accuracy/quality/visibility to the business.
The notion of a Supply Chain Digital Hub combines the “back to basics” approach of data integration and reuse with a modern architecture for data ecosystems. This balanced approach brings swift time-to-market for supply chain data and analytics consumers while sparing the data operations team considerable technical debt in maintaining it. It allows the data organization to cost-effectively help the operations team adapt to changes in the outside world with the least delay and friction. It is also well supported by Teradata’s connected multi-cloud data platform for enterprise analytics, Vantage.