Supply chains contain every material, component, product, and piece of packaging for the objects that together compose the world we live in. However, there is an often invisible ingredient in successful supply chains: data.
As COVID-19 disrupts the largest supply chain in the world—between China and the United States—the phrase “business continuity” has jumped to the top of every leader’s priorities.
Supply chain management is the fundamental foundation for making the right materials show up at the right place at the right time (and in the right quantities)—whether that means food and toilet paper, personal protective equipment (PPE), or ventilators.
If business leaders take one lesson from the pandemic, it will be that supply chain management can be a matter of life and death.
Consider the emergent PPE supply chain, composed of both formal manufacturing and informal craft networks, that has risen up to meet the urgent demand for masks from healthcare workers fighting COVID-19 across the US.
Arguably, the single greatest challenge is that minimum order quantities associated with affordable unit prices are far too large for most healthcare providers to handle.
This problem seems technical, even arcane (why can’t the factories just produce the masks in any quantity or distribute them through regular fulfillment channels?), yet its complexity and real-world impact demonstrate that when it comes to supply chain data, grappling with “big data” is far from an academic exercise.
Supply chain data, in its simplest form, can be broken into three distinct types (sketched as a simple data structure after the list):
- qualification (who the parties involved are)
- specification (what material, product, or service it is)
- quantification (what volume, and at what price)
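To make these three types concrete, here is a minimal sketch, in Python with hypothetical field names, of how they might be modeled on a single transaction line:

```python
from dataclasses import dataclass

@dataclass
class Qualification:
    """Who the parties involved are."""
    buyer: str
    supplier: str

@dataclass
class Specification:
    """What material, product, or service it is."""
    part_number: str
    description: str

@dataclass
class Quantification:
    """What volume, and at what price."""
    quantity: int
    unit_price: float
    currency: str = "USD"

@dataclass
class TransactionLine:
    """One line item as it might appear on a BOM, PO, or invoice."""
    qualification: Qualification
    specification: Specification
    quantification: Quantification
```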
Qualification, specification, and quantification data travel down the supply chain via a set of documents that govern the duties and obligations of the involved parties: bill of materials (BOM), requisition, purchase order, invoice, encumbrance form, shipping label.
Each set of transaction documents can be thought of as a contract with terms and conditions, including governance, risk management, and service levels. As more physical goods and infrastructure are consumed as “services” and “subscriptions”, these contracts (and the data within them) increasingly refer to the transfer of ownership (and risk) for virtual goods, materials, even outcomes.
At each stage in the supply chain, these three data types are used by humans to create and execute instructions: what needs to be done, what do we need in order to succeed, and who’s paying for it.
For many major companies today, managing this supply chain data and meeting customer demands is a lot of work, and there is a great appetite to automate it.
A Transition in Search of a Tipping Point
Under present conditions, supply chain autonomy will not happen by itself, for one simple reason: it’s expensive. Given that it is not required, and that supply chain is already conceived of as pure cost (i.e. a bottom-line expense rather than top-line revenue), managing supply chain data is rarely a priority for the C-Suite.
The current system, the one that involves lots of people looking at documents and using their professional experience to observe, orient, decide and act at each point in the supply chain, works as well as we’d expect it to.
It is difficult for a business leader to quantify the ROI gap between an average and an excellent supply chain. If the product shows up in customers’ hands in a timely fashion, what more could you ask for? Better to stay focused on top-line revenue!
Three trends will accelerate over the next five years and change all of that, rendering the contemporary supply chain unscalable and inadequate. Each of these trends follows a single larger north star, a mega-trend: supply chains must respond to the mass personalization of the customer experience, expressed via preferences in e-commerce platforms.
1. On-Demand Production & Distribution, Modular Supply Chains
The advent of Amazon Prime and Same Day delivery has set in motion a mass transformation in the approach to retail fulfillment for consumer products, driven by consumers’ expectation of immediate gratification (i.e. gratification as fast as hopping in a car and going to a store).
Today, every company that delivers their product to their customer (instead of a distributor or a brick and mortar store) is figuring out how to manage and optimize that delivery experience to align with consumer expectations, which ultimately means tracking individual product journeys from manufacture to the customer’s doorstep.
This massive shift will render existing approaches to supply chain management obsolete and will create competitive opportunities for optimized networks of on-demand production and on-demand distribution, potentially leveraging modular assemblies and smaller, distributed micro-fulfillment centers storing smaller amounts of inventory for shorter periods of time.
Think about the journey of computational services from an on-premises set of servers to first, a single-tenant remote server (the early “private cloud”), and then to a distributed, multi-tenant “public cloud.”
What if the same move from a linear set of nodes operating within a single pathway, to a single network with multiple pathways, to an overlapping set of network clusters with a staggering number of pathways, happened for physical goods and products, and not merely for virtual products or packets of information?
This would mean that when an e-commerce site shows a shopper (call her Stacy) a midnight blue version of a dress with the option to customize the size, fit, and hemline, a data packet will be delivered to a queue at a modular assembly site where the materials and components for customizing the dress are already waiting and production can be initiated near-instantly.
As soon as Stacy confirms her purchase in the shopping cart, a set of automated actions is performed on a distributed network to produce and box the dress, move it to a micro-fulfillment center, and then alert a third-party service provider to pick it up and deliver it to her house.
Throughout the process, the retail brand will track the location of Stacy’s dress and will attempt to collect data on Stacy’s satisfaction regarding the product and the overall customer experience. Increasingly, we will think of this business process as the creation of a personalized supply chain for Stacy’s midnight blue dress, rather than as a persistent supply chain for Woman’s Dress 1BN43T for which Stacy is one of many customers.
If Stacy rates her experience as positive, perhaps the same pathway, the same modular assembler, the same fulfillment center, and the same delivery service will be used when another person orders a yellow dress with the same custom hemline to the doorstep of an address in Stacy’s same town. If Stacy reports the package was damaged or that the dress was dirty, then perhaps the retail brand will utilize a different pathway for future orders to addresses in that area.
The important thing to realize is that while humans who work in operations for the retail brand will select the pool of possible modular assemblies, micro-fulfillment locations and delivery services (and build deep relationships with their human operators), the selection of the path through the network that Stacy’s order will take from the moment that the money leaves her bank account will likely be executed by machines.
Increasingly these machines will leverage AI to match patterns in the pathways of producing, storing and delivering a particular product with ever-richer pipelines of data from customers about which pathway offers the best overall customer experience. Many major companies are in the R&D stage of assembling each of the puzzle pieces for this experience today, and even fast-growing direct-to-consumer startups are struggling to get every step of the process as buttoned up as Amazon, which has its own hiccups from time to time.
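As a rough illustration of that machine-driven selection, the sketch below (hypothetical pathway names, weights, and data) scores candidate fulfillment pathways against historical customer-experience data and picks one for the next order:

```python
from dataclasses import dataclass

@dataclass
class Pathway:
    """One candidate route: assembler -> micro-fulfillment center -> courier."""
    assembler: str
    fulfillment_center: str
    courier: str
    avg_satisfaction: float   # 0..1, from post-delivery surveys
    avg_delivery_hours: float
    cost_usd: float

def score(p: Pathway, max_hours: float = 72.0, max_cost: float = 30.0) -> float:
    """Weighted blend of experience, speed, and cost (illustrative weights)."""
    speed = max(0.0, 1.0 - p.avg_delivery_hours / max_hours)
    cheapness = max(0.0, 1.0 - p.cost_usd / max_cost)
    return 0.6 * p.avg_satisfaction + 0.25 * speed + 0.15 * cheapness

def select_pathway(candidates: list[Pathway]) -> Pathway:
    """Choose the pathway an order takes once payment clears."""
    return max(candidates, key=score)

candidates = [
    Pathway("assembler-west", "mfc-van-nuys", "courier-a", 0.92, 36, 14.50),
    Pathway("assembler-east", "mfc-burbank", "courier-b", 0.88, 20, 18.00),
]
best = select_pathway(candidates)
print(best.assembler, best.fulfillment_center, best.courier)
```

In practice the scores and weights would be learned from the customer-feedback pipeline described above; the point here is only that the selection step is a computation, not a human judgment call.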
2. New Supplier Selection Influences
Brands have been told for the past ten years that customers will pay more for eco-friendly, fair-wage, and fair-trade brands, and increasingly they will have the ability to empower consumers to do exactly that through value-added shopping experiences.
To reward suppliers whose certifications and pricing earn a greater share of customers’ shopping carts, brands will utilize performance-based contracting that offers attractive two-sided rebates and rewards, creating mutual incentives for both the brand and the supplier to increase value to the consumer.
Consumers, more educated, demanding, and purpose-driven than ever, will require meaningful eco-labels and third-party certifications to determine whether a product aligns with their personal purchasing identity.
Brands will use AI to analyze patterns across demographic and psychographic data to determine what types of consumers are motivated by what types of certifications, and then craft programs to incentivize their suppliers to attain these third-party certifications at scale, winning loyalty and better aligning their brand identity with the identity of their customers.
Performance-based contracts will be sets of instructions and business logic that live inside digital application layers; they will monitor minimum order quantities and lead times and, increasingly, will execute orders to suppliers without human intervention, following rule engines that sit above the larger ERP/MRP architectures.
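A minimal sketch, assuming hypothetical contract terms and field names, of the kind of rule engine such a contract could embed: it checks a proposed order against the contracted minimum order quantity and lead time before auto-releasing it to the supplier.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ContractTerms:
    """Negotiated terms encoded as data rather than prose (illustrative fields)."""
    supplier: str
    min_order_qty: int
    lead_time_days: int        # supplier's contracted lead time
    rebate_threshold_qty: int  # volume above which a two-sided rebate applies
    rebate_pct: float

@dataclass
class ProposedOrder:
    supplier: str
    quantity: int
    need_by: date

def evaluate(order: ProposedOrder, terms: ContractTerms, today: date) -> dict:
    """Apply contract rules; auto-release only if every rule passes."""
    days_available = (order.need_by - today).days
    violations = []
    if order.quantity < terms.min_order_qty:
        violations.append("below minimum order quantity")
    if days_available < terms.lead_time_days:
        violations.append("need-by date falls inside the contracted lead time")
    rebate = terms.rebate_pct if order.quantity >= terms.rebate_threshold_qty else 0.0
    return {"auto_release": not violations, "violations": violations, "rebate_pct": rebate}

terms = ContractTerms("Acme Components", min_order_qty=500, lead_time_days=21,
                      rebate_threshold_qty=5000, rebate_pct=0.02)
order = ProposedOrder("Acme Components", quantity=6000,
                      need_by=date.today() + timedelta(days=28))
print(evaluate(order, terms, date.today()))
```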
3. Hyperdynamic Pricing & Configurable Delivery Experience
Amazon arguably needed to offer Same Day Delivery to win market share (and consumer mindshare, forming new habits) against brick-and-mortar stores, but other market players may let consumers save money by accepting longer delivery times, or upsell them with a fossil fuel-free delivery service or sustainable packaging.
eCommerce enables hyperdynamic pricing, better known as “surge pricing,” because consumers cannot see that they are being offered different prices for the same product at the same time. Products that are in high demand in certain areas can cost more without impacting sales: a massive untapped revenue opportunity.
Already common in the travel and transportation space, we will see the personalization of pricing rise alongside the personalization of products, as well as the personalization of rewards/loyalty programs designed to incentivize repeat and subscription customers.
Real-world Examples
API-based Purchasing in Electronics Manufacturing
Recently I spoke with a leader of a sourcing organization at an electronics manufacturer who has direct API integrations with his top suppliers.
This means that when he uploads a bill of materials into his system, he can get immediate real-time line item quotes from his suppliers via their API, compare and select each order for each supplier, and place the order without needing to generate a PO.
His team has fully automated over 40% of their purchasing activities using a custom-built integration alongside Calcuquote, enabling them to focus on more strategic pricing activities and bigger ticket custom assemblies. From an architecture standpoint, he suggested that these direct multi-firm API integrations will replace EDI, since they are cheaper, faster, and act as a repeatable clearinghouse for regular order types.
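A rough sketch of what an API-based quoting loop could look like; the endpoints, payload shape, and field names below are hypothetical and do not represent any particular supplier's or Calcuquote's actual API.

```python
import requests

# Hypothetical supplier quoting endpoints; a real integration would use each
# supplier's documented API and authentication scheme.
SUPPLIER_QUOTE_URLS = {
    "supplier-a": "https://api.supplier-a.example/v1/quotes",
    "supplier-b": "https://api.supplier-b.example/v1/quotes",
}

bom = [
    {"part_number": "RES-10K-0603", "quantity": 10_000},
    {"part_number": "CAP-1UF-0402", "quantity": 25_000},
]

def get_quotes(line: dict) -> list[dict]:
    """Ask every integrated supplier for a real-time quote on one BOM line."""
    quotes = []
    for supplier, url in SUPPLIER_QUOTE_URLS.items():
        resp = requests.post(url, json=line, timeout=10)
        resp.raise_for_status()
        data = resp.json()  # assumed shape: {"unit_price": ..., "lead_days": ...}
        quotes.append({"supplier": supplier, **data})
    return quotes

for line in bom:
    quotes = get_quotes(line)
    best = min(quotes, key=lambda q: q["unit_price"])
    print(line["part_number"], "->", best["supplier"], best["unit_price"])
```

The value is not the HTTP calls themselves but that line-item quotes land as structured data, so comparison and order placement can happen without anyone re-keying a PO.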
Modular Component Production in Electronics Manufacturing
Companies like Bright Machines and Autovol are enabling modular sales orders with detailed customization to be quickly converted into products with metadata that can be shipped to business customers.
AP automation companies like Conexiom and Artsyl may provide a middleware layer between these smart production systems and the ERP, transforming incoming orders arriving as documents, CSVs, and other file formats into a centralized repository to be automatically checked against business logic before being approved.
These systems act as the automated checks and balances that have traditionally sat on desks awaiting approval, slowing the process.
Mass Localized Sourcing in Food and Beverage Distribution
Sourcing local produce and delivering to many customer locations isn’t easy to begin with, and when you take into account dietary restrictions, as well as sustainability goals for food packaging and presentation, the magnitude of combinations alone can seem daunting.
Using Bid Ops, Oh My Green is able to forecast and recommend competitive pricing across many suppliers all offering different order quantities and execute sourcing logic sophisticated enough for the Oh My Green sourcing team to create a tiered, performance-based contract, including rebates and utilization rewards.
Planning for the Future
If I am an IT leader at any large business that needs to move physical goods and materials, I am shortlisting AI vendors to provide deep-dive product demos in the following categories:
Purchasing and Ordering
The most important factor in this category is automating the movement of line item data from various endpoints into my system of record, likely my General Ledger. The crown jewel is line item visibility across all of my spend for the past 30 days. Until I have an AI vendor that can help me get to over 90% visibility into what we’re ordering and from whom, I am simply not going to get much value from additional analytics solutions.
Be skeptical of IT providers who say that a “data lake” will solve this purchasing/ordering unification problem, because it won’t. However, there are good P2P solutions that will leverage AI to help cleanse and harmonize data, giving one a thesaurus of one’s manufacturer/distributor part numbers, de-duplicating records, and surfacing opportunities.
For data harmonization, vendors like Tamr and Verusen could help.
For automated exception management and deep analytics, there are vendors like Suplari and Sievo. Overall P2P experiences are available from companies like SourceDay, Procurify, and Coupa.
One will sometimes hear business leaders say that their goal is “on-contract” spend (that all spend occurs with vendors that have offered negotiated pricing) or “on-PO” spend (that all spend is matchable with purchase orders in the General Ledger).
While logical from a management perspective, from a data perspective neither of these goals will bring one closer to a world where one knows how many of which products one’s buyers are ordering from which suppliers.
Why is this the case?
A purchase order typically includes multiple products, line items, shipping and other value-added charges. Analysts who look at one’s data will be able to tell how much one spent on which category (or with which supplier), but the same analyst will not be able to tell how many of what products one bought and certainly not offer an explanation for why spend with a specific supplier doubled from last year to this year.
For example, if one automated the line item data and captured it in one’s system of record, one could tie metadata associated with a specific product (say, umbrellas) back to a specific PO placed with a specific supplier.
Then one could look at the weather that week and see that it was a rare week of heavy rain in Los Angeles, where many of one’s franchisees are located.
Depending on how much line item metadata is captured, one could see that the main driver of costs was requests from franchisees for their supplier to print custom logos on umbrellas, likely for vendors at a large VR industry conference.
Now one knows exactly which levers are driving costs with this supplier. And if one can know this, it will not be long before a machine can proactively surface the same opportunity, without anyone having to dig through the data to find it.
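As a hedged illustration with made-up numbers, the sketch below groups captured line items to show both the year-over-year jump in spend with this supplier and the metadata attribute (custom logo printing) that drove it:

```python
import pandas as pd

# Hypothetical line item extract from the system of record.
lines = pd.DataFrame([
    {"po": "PO-1001", "supplier": "Umbrella Co", "product": "umbrella",
     "custom_logo": True,  "qty": 400, "unit_price": 12.0, "week": "2020-03-09"},
    {"po": "PO-1002", "supplier": "Umbrella Co", "product": "umbrella",
     "custom_logo": False, "qty": 150, "unit_price": 7.0,  "week": "2020-03-09"},
    {"po": "PO-0900", "supplier": "Umbrella Co", "product": "umbrella",
     "custom_logo": False, "qty": 200, "unit_price": 7.0,  "week": "2019-03-11"},
])
lines["spend"] = lines["qty"] * lines["unit_price"]

# Spend by year: shows the jump that a category-level view cannot explain.
print(lines.groupby(lines["week"].str[:4])["spend"].sum())

# Spend by metadata attribute: reveals custom-logo requests as the driver.
print(lines.groupby("custom_logo")["spend"].sum())
```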
Sourcing and Contracts
The most important factor in this category is automating the real-time collection of data from one’s suppliers (without manual data entry) and getting them to execute on sourcing requests. Contracts are important, but all current contract management systems are already obsolete because they do not contain business logic that helps manage ongoing governance obligations of both parties against the negotiated terms.
Storing documents in a repository, even with some metadata, is simply not enough. The real prize for IT leaders pursuing this category is behavioral data: data about how one’s suppliers’ value and pricing meet or exceed the market, how responsive they are to discovery questions about their capabilities and partnership options, and how their performance as a partner measures up against expectations.
For supplier discovery, there are vendors like Tealbook, Scoutbee, and ConnXus. For sourcing enablement, there are vendors like Bid Ops, ScoutRFP, and Vendorful.
Markets and Risk
The most important factor in this category is automating the collection of real-time data from markets and global events to proactively respond to price changes and supply disruptions.
“Risk” is rather broad. It includes insurance, fraud, cybersecurity, bad press, and response to global events.
“Markets” similarly are a very broad proxy for both risk and opportunity, and supply chain managers want to know whether they can benefit by waiting or should buy now before prices rise. Companies like Quid and Primer claim to build AI architecture for pulling and validating signals from the larger media ecosystem, with new entries in market research and trend analysis focused on specific verticals.
Measuring Maturity in Supply Chain
AI in business process automation has largely been under-theorized by the academic community. While there exist many taxonomies to describe the levels of intelligent automation, most taxonomies support an understanding of the maturity of an intelligent agent using the following four categories (as in the Parasuraman-Wickens-Sheridan taxonomy and the Endsley-Kaber taxonomies circa 2000):
- Information acquisition
- Information analysis
- Decision selection
- Action implementation
These taxonomies are best suited for intelligent machines that, much like humans, are meant to perceive circumstances and embody behavior in the physical world (for example, UAV drones or self-driving cars).
Given the extent to which business processes exhibit ontological virtuality, abstraction, and contextuality, these levels of autonomy are arguably inappropriate for understanding the maturity of intelligent automation as AI advances in the business process space.
However, the more general 1978 version of Sheridan’s 10 Levels of Autonomy can be mapped onto any business process, because the levels simply indicate the amount of assistance that an intelligent agent can offer:
Sheridan’s 10 Levels of Autonomy
1. The computer offers no assistance; the human must do it all.
2. The computer offers a complete set of action alternatives, and
3. Narrows the selection down to a few, or
4. Suggests one, and
5. Executes that suggestion if the human approves, or
6. Allows the human a restricted time to veto before automatic execution, or
7. Executes automatically, then necessarily informs the human,
8. Informs the human after execution only if the human asks, or
9. Informs the human after execution if it, the computer, decides to do so.
10. The computer decides everything and acts autonomously, ignoring the human completely.
However, in the context of supply chain business processes, there are five major gaps in mapping Sheridan’s taxonomy for intelligent automation:
- Context-dependent data source identification: Identifying and associating correct data sources with a particular business process
- Context-dependent data validation: Extracting and validating the necessary data from each external source and aggregating it versus internal requirements
- Context-dependent rule and exception creation: Adapting process requirements to apply specific rules and not others, allow specific exceptions and not others
- Context-dependent pathway generation and multi-attribute ranking: Displaying the parameters for generating pathway options and the attributes against which each pathway is ranked in comparison to alternatives
- Context-dependent communication and re-ranking: Communicating the context of pathway generation, ranking, and selection to the agents in the process in a way that lets the receiver add information that re-generates or re-ranks pathways (or adds pathways not previously known), including changing rules and exceptions and selecting an alternative based on information not available from the data sources chosen at the outset (or excluded based on changed context)
From these gaps, one can see that business processes contain iterative loops at each of Sheridan’s four major stages. The main difference for mapping the 10 Levels is that there are important sub-decisions about where to acquire the information from, whether or not the information is correct, how the information ought to be compared to generate pathways, which pathway should be selected and how to handle situations where errors at any stage of the process require that rules and exception criteria be revised.
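A minimal sketch of the re-ranking loop these gaps imply: pathways are ranked by weighted attributes, and when a stakeholder adds context (new weights, a vetoed pathway), the ranking is regenerated rather than the process starting over. Names, attributes, and weights are illustrative.

```python
from typing import Dict, List

def rank(pathways: List[dict], weights: Dict[str, float],
         excluded: frozenset = frozenset()) -> List[dict]:
    """Multi-attribute ranking; re-run whenever weights or exclusions change."""
    scored = []
    for p in pathways:
        if p["name"] in excluded:
            continue
        total = sum(weights[a] * p["scores"][a] for a in weights)
        scored.append({**p, "total": round(total, 3)})
    return sorted(scored, key=lambda p: p["total"], reverse=True)

pathways = [
    {"name": "path-1", "scores": {"cost": 0.9, "speed": 0.4, "risk": 0.7}},
    {"name": "path-2", "scores": {"cost": 0.6, "speed": 0.8, "risk": 0.8}},
]

# Initial ranking weights cost most heavily.
print(rank(pathways, {"cost": 0.5, "speed": 0.3, "risk": 0.2}))

# A stakeholder adds context ("the customer needs it this week"), so speed is
# re-weighted and the pathways are re-ranked without restarting the process.
print(rank(pathways, {"cost": 0.2, "speed": 0.6, "risk": 0.2}))
```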
If a supply chain can be thought of as an invention, then much of the cognitive work that skilled supply chain managers perform today is akin to rapid prototyping. The only difference is that the prototype is a pathway, which is then revised against contextual information from suppliers, internal stakeholders, and customers.
While we may have expected AI to fully automate the work associated with managing supply chain data, we can see that the need to manage multiple iterative versions of this same process simultaneously, all while keeping the imperatives of the business top of mind, is not going to be automated away any time soon.
But as intelligent automation progresses in the field of business processes, how should we measure its maturity?
Five Stages of Business Process Autonomy
Mapping
A human defines the business process and the sources of data, creates the business logic, and is responsible for communicating the action. Data (other than data input by a user into the machine) plays no role in the success or failure of the business process.
Guiding
A machine defines the business process; a human gathers data, checks it against business logic, and is responsible for communicating the action. Data regarding the business process (which was likely input by another human user) is the only way that the machine influences the human decision.
Blueprinting
A machine defines the business process, identifies data sources, pulls data from those sources, and recommends the correct business logic. A human confirms the business logic, generates possible options, and selects a pathway (next steps). Data from the business process, external data sources, and business logic are all influenced by the machine, but the application of rules/exceptions and the selection are all done by a human.
Assembling
A machine defines the optimal business process, identifies data sources, pulls data from those sources, applies business logic, generates a set of pathways, applies rules to the pathways based on multi-attribute algorithms to show the relevant dimensions of the decision, and recommends a best outcome. A human selects the best outcome (or removes rules to generate a different multi-attribute ranking) and communicates the decision out.
Ignition
A machine defines the optimal business process, identifies data sources, pulls data from those sources, applies business logic, generates a set of pathways, applies rules to the pathways based on multi-attribute algorithms to show the relevant dimensions of the decision, executes the selection, generates the communication back to the relevant stakeholders confirming process completion, and separately generates an audit trail.
A human does not touch the process except to audit it, and spends most of their time expanding the nodes in the network so that the machine has better pathways to choose from.
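For readers who want to use these stages as a quick checklist, here is a minimal sketch of the taxonomy as an enumeration; the one-line comments paraphrase the definitions above.

```python
from enum import Enum

class ProcessAutonomy(Enum):
    """Five stages of business process autonomy, from least to most mature."""
    MAPPING = 1       # human defines process, logic, and communication; data is incidental
    GUIDING = 2       # machine defines the process; human gathers data and communicates
    BLUEPRINTING = 3  # machine sources data and recommends logic; human applies rules, selects
    ASSEMBLING = 4    # machine generates and ranks pathways; human selects and communicates
    IGNITION = 5      # machine executes end to end; human audits and expands the network

def maturity_gap(current: ProcessAutonomy, target: ProcessAutonomy) -> int:
    """How many stages a given process still has to climb."""
    return target.value - current.value

print(maturity_gap(ProcessAutonomy.GUIDING, ProcessAutonomy.ASSEMBLING))  # 2
```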
Conclusion and Action Items
Much like Stacy’s midnight blue dress, one size does not fit all when it comes to mapping a taxonomy of intelligent automation to the business processes in your organization.
It is likely that different processes and systems have differing levels of maturity. It is also likely that the levels of maturity of one’s processes and systems are constrained not only by one’s internal stakeholders, but by one’s supplier partners.
The fact that the electronics manufacturer I mentioned has achieved the heights of “Blueprinting” autonomy via API connections to suppliers depended to a great degree on those suppliers building the APIs in the first place.
Given that you likely have a diversity of processes and systems in place at different levels of maturity (whose maturity is likely constrained by factors outside your control), the best action item you can take is to begin exploring partner relationships with technology providers that check three boxes:
- They can help you define optimal processes
- They can apply business logic to relevant data
- They can generate sets of ranked options and learn from the decisions that your experienced professionals make, so that one day your professionals can spend their time on higher-leverage activities.
If a technology provider’s solution meets these criteria, your action item is to find one process, one use case, and run an experiment to see how much intelligent automation your process is capable of achieving.