“No matter how you mitigate risk, you’re always going to be measured on your reaction time when something occurs, because generally speaking, it’s the speed of your reaction that matters,” said Tom Linton, chief procurement and supply chain officer at Flextronics, when discussing the 2011 earthquake in Japan. By detecting disruption quickly, a company can respond to disruption quickly. “Who’s first to lock in their supply?” Linton asked. Detecting the onset of disruption before it hits gives a company time to prepare.
Detection time is measured from the instant a company realizes it will be hit to the time the disruption actually takes place. Thus, it is the amount of warning time during which the company can prepare for and mitigate the disruption, be it a hurricane or an upcoming regulatory deadline. The detection time can be practically zero—as is the case with earthquakes and fires. It can also be negative when a company realizes it has been hit only after the disruptive event—as is the case, for example, with a latent product defect or stolen intellectual property.
Incident Monitoring: Wary Watchfulness
Companies rely on a wide range of monitoring activities to detect a wide range of potential disruptions. The goal of the monitoring is “situational awareness”—relevant and timely data that reflects risk conditions that might affect the company and its decisions. For companies with global supply chains, this implies monitoring global events. Although monitoring the entire world for any of dozens of kinds of potential disruptions seems like a daunting task, companies can rely on automatic software and devices as well as on monitoring services that convert the flood of raw data into relevant alerts and warnings.
Preparing for a Potential Disruption
In 2008, the Black Thunder mine in Wyoming—the largest coal mine in the United States—planned to install a massive new conveyor tube to move coal to a silo for loading trains. When David Freeman, who at the time was vice president of engineering at BNSF Railway Company, first heard of the mine’s plans, he wanted to be sure the railroad had input. The mine planned to hire a 2.4-million-pound Lampson Translift crane to hoist the 260-foot-long, 500,000-pound conveyor tube 150 feet into the air and place it on pylons. The intricate installation process would suspend the tube over three tracks, where eighty BNSF and Union Pacific (UP) trains travel every day carrying almost a million tons of coal to fuel power plants all across the Midwest and the East Coast. Over the course of an average year, one-third of America’s transported coal would pass on the rails under that 260-foot-long tube as it swung from the crane’s cables.
Freeman’s job was to review the plans and to ensure close coordination to minimize service interruptions to BNSF’s customers. Thus, BNSF and UP scheduled a halt in traffic while the crane performed its delicate maneuver on a Saturday in May 2008. Freeman also sent two repair crews with four large D-9 Caterpillar tractors to the site to assist if needed, and he had a team on notice at BNSF’s Fort Worth headquarters.
BNSF’s readiness made a difference. At 12:30 pm that Saturday, Freeman got a phone call. “They dropped the tube on the tracks,” said a stunned onsite worker. The crane doing the heavy lifting had collapsed, and the giant tube had fallen directly across all three tracks. Three construction workers were injured in the incident.
Freeman and his team immediately flew to the site to help the mine with the response. Given the injuries, the MSHA (the US government’s Mine Safety and Health Administration) needed to investigate the accident. The investigators would arrive on Tuesday, three days later, and they had asked that nothing be moved at the site until they completed their investigation. Freeman explained that the delay would affect more than 200 trains, potentially affecting the supply of coal to power plants, and that he had to move the tube as soon as possible. MSHA agreed that BNSF could shift the tube very carefully, as long as it did not disturb any evidence associated with the collapsed crane.
Although moving the huge conveyor tube was daunting, BNSF had experience moving large objects—a derailed locomotive or loaded rail car can easily weigh up to 450,000 pounds. Applying that expertise, BNSF accomplished the move in 21 minutes. After shifting the tube, they were relieved to find minimal damage to the tracks, and trains were soon running normally later that day. By developing contingency plans in advance, BNSF had equipment and personnel ready and on the spot. Once they had gotten permission to move the tube, they could do it promptly with minimal disruption.
The Weather Watchers
In an average year, 10,000 severe thunderstorms, 5,000 floods, and 1,000 tornadoes rage across the United States, not to mention about a dozen named Atlantic tropical storms and hurricanes. The US National Weather Service detects and monitors these storms via reams of data from two weather satellites in geosynchronous orbit, 164 Doppler weather radar sites, 1,500 real-time monitoring stations, and the SKYWARN network of nearly 290,000 trained volunteer severe weather spotters. Nor is the US network unique. Each country and region has its own portfolio of weather-data gathering and forecasting resources.
Because logistics is an all-weather sport, companies tap into data, forecasts, and warnings through a variety of local and national channels. After a surprise blizzard shut down UPS’s main air hub in Louisville, Kentucky, in 1994, the company hired five meteorologists for its Global Operations Center. “Our customers in Barcelona and Beijing don’t care that it snowed in Louisville. They want their packages,” said Mike Mangeot, a spokesman for UPS Airlines. “So we felt the need to have a greater read on the weather that was coming.” Jim Cramer, a UPS Airlines meteorologist, added, “UPS Meteorologists work very closely with the flight dispatchers and contingency coordinators who fine-tune the air system based on weather issues every day.”
Facility Monitoring: Who’s Minding the Stores
Other data streams come from a company’s own assets. Walgreens, like Walmart (see chapter 6), uses in-store sensors to monitor each of its 8,300 US locations. The raw data flows to Walgreens’s centralized Security Operations Center (SOC), which handles the retailer’s safety, security, and emergency response needs. “In the SOC, we monitor all the burglar and fire alarms in our stores, and, on average, three stores are robbed each day and two stores have break-ins every night,” said Jim Williams, manager, Walgreens emergency preparedness and response, asset protection, Business Continuity Division.
Electrical power sensors alert Walgreens to blackouts, which lets the company quickly take steps such as contacting the power company, dispatching generators, or sending refrigerated trucks to recover perishable inventory. Walgreens stores carry both refrigerated foods and temperature-sensitive pharmaceuticals, so faster detection means less spoilage. “The process has saved us over $3.6 million in perishable goods in just one year,” Williams said. SOCs and emergency operations centers (EOCs) are on the frontline of detection of disruptions, especially those occurring at the company’s facilities.
Keeping an Eye on Government
Changes in government policies affect companies’ cost structures, siting decisions, and compliance challenges. Lead time for government regulations varies and can be quite long. For example, US legislation on “conflict minerals” (Dodd–Frank Act section 1502) passed in July 2010 with a nine-month timeline for writing the implementing regulations. Conflict minerals include tin, tantalum, tungsten, and gold, which are mined by militia groups (notably in the Congo) using slave labor in war-torn areas and then sold to fund the continued fighting. The US Securities and Exchange Commission now requires certain public disclosures by publicly held companies relating to conflict minerals contained in their products. Companies had months, if not years, to prepare for the regulations: drafting the rules, public comments, and final issuance took until August 2012, with coverage beginning in 2013 and reporting beginning in 2014, nearly four years after the law was passed. Indeed, Flextronics released its Conflict Minerals Supplier Training document in January 2013, specifying how suppliers should report the sources of the materials and parts supplied to Flextronics. These reports support Flextronics’s own mandatory annual reporting, which was scheduled to commence in May 2014.
Other government actions can hit with little or no warning. When the United States boosted import duties on Chinese tires from 4 percent to 35 percent in September 2009, it did so with only 15 days’ notice. Tires that left Chinese ports in early September became instantly one-third more expensive during the long boat ride across the Pacific. Similarly, when the Chinese cut exports of rare earths in late December 2011, the new limits applied almost immediately. About 200 import tariff events take place each year.
Government policies affect a wide range of corporate affairs such as financial reporting, taxation, human resources, workplace safety, product requirements, environmental emissions, facilities, and so forth. At the US federal level alone, the government publishes some 20,000 to 26,000 pages of new or changed rules every year. The job of noticing relevant regulatory changes often falls under a centralized corporate function such as legal, compliance, or ERM (enterprise risk management).
Managing Thousands of Fire Hoses of Event Data
To manage disruptive events, Cisco uses a six-step incident management lifecycle, as mentioned in chapter 6: monitor, assess, activate, manage, resolve, and recover. The company does not try to predict incidents—an impossible task—but focuses on monitoring and early response instead, according to Nghi Luu, Cisco’s senior manager, supply chain risk management. Cisco built an incident management dashboard to detect potential disruptions to the top products that make up the majority of Cisco’s revenues. The development cost of the dashboard was in the low five figures, and Cisco’s investment has been paid back many times over, according to research firm Gartner.
Rather than attempt to monitor all possible events worldwide, many companies subscribe to event monitoring services such as NC4, Anvil, IJet, OSAC, or CargoNet. These services collect incident data, analyze the severity, and then relay selected, relevant alerts to their clients. Different services might focus on different types of threats, ranging from travelers’ security (Anvil) to sociopolitical threats (OSAC) to cargo security (CargoNet). Thus, many companies subscribe to more than one service.
In a representative week, a service such as NC4 might issue 1,700 alert messages covering 650 events around the world. Many events seem quite localized, such as a shooting in a mall in Omaha, student demonstrations in Colombia, or the crash of a small plane in Mexico City. Yet if a company has facilities or suppliers in the area, they could easily be affected by lockdowns, blocked roads, heightened security, or the event itself. Most alert software tools offer customization, allowing companies to specify alert thresholds for each type of facility based on event severity and distance from the facility. Cisco, for example, uses NC4 and overlays event data on a Google Earth map to visually highlight the Cisco (and supplier) locations that are within affected areas. Cisco incident management team members can view events on the map or as a list, and they can assign events to an incident watch list to indicate their severity, status, and potential quarterly revenue impact. Cisco also taps “informal” sources to detect developing problems, including in-country personnel in Cisco’s global manufacturing organization, commodity teams, and what it calls “lots of feelers.” In many cases, relationships between engineers at trading partners surface issues that managers are not yet ready to discuss, or are not even aware of.
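The threshold logic behind such customization can be sketched as a simple filter: alert a facility only when an event’s severity and distance fall within that facility’s configured limits. Below is a minimal illustration; the class names, severity levels, and radii are hypothetical, not any vendor’s actual API.

```python
from dataclasses import dataclass, field
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

@dataclass
class Facility:
    name: str
    lat: float
    lon: float
    # Alert radius (km) per event severity: minor events matter only nearby,
    # severe events matter from much farther away.
    alert_radius_km: dict = field(
        default_factory=lambda: {"low": 10, "medium": 50, "high": 200}
    )

@dataclass
class Event:
    description: str
    lat: float
    lon: float
    severity: str  # "low" | "medium" | "high"

def facilities_to_alert(event, facilities):
    """Return the facilities whose severity-specific radius covers the event."""
    return [
        f for f in facilities
        if haversine_km(event.lat, event.lon, f.lat, f.lon)
        <= f.alert_radius_km[event.severity]
    ]
```

With a plant at (0, 0) and an event roughly 111 km away at (0, 1), a “high” severity event triggers an alert while a “low” one is filtered out.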
Companies including Walmart, Intel, and Cisco noted that multiple functional groups in the organization share incident-monitoring data feeds. The supply chain group might watch for incidents affecting the company’s facilities, logistics channels, and suppliers. At the same time, HR might watch the incident feeds for events that might jeopardize the safety of any employees who are posted or traveling abroad. Finally, corporate finance might monitor the feed for events that affect financial matters, such as exchange rates and credit ratings. Supply chain risk is only one part of the broader security and enterprise risk management picture.
Crying Wolf vs. Missing the “Big One”
When Cisco saw news of wildfires in Colorado in 2012, it wasn’t concerned, because it had no manufacturing or suppliers in the area. What the company missed, however, was that the fire affected one of the company’s call centers. Detection faces a classic trade-off between two types of detection-error risks. An oversensitive detection system can generate false alerts on benign events too often; an undersensitive system can be too late to recognize important disruptions.
A related issue is comprehension and response. During Superstorm Sandy, for example, governments up and down the East Coast had the same federal data on the storm’s track, forecast impacts, and warnings. New Jersey’s governor issued a mandatory evacuation of coastal and low-lying areas, but the mayor of Atlantic City did not. Both “detected” Sandy, but the mayor didn’t comprehend its significance. At a practical level, detection occurs only if the organization realizes the implications of the event and takes appropriate action. Just as a monitoring system can underdetect or overdetect, so too, the response system can underreact or overreact.
Delayed Detection, Accumulated Damage
Not all disruptions are as visible or instantly news-making as an earthquake or tornado. Some disruptions lurk in the complexities of the materials, components, people, companies, and interactions inherent in supply chains.
Hidden Causes, Invisible Effects
On January 9, 2011, Intel began shipping a new generation chipset called Cougar Point that PC makers would use to connect Intel’s newest generation of microprocessors to other devices such as hard disks and DVD drives inside PCs. In mid-January, after making about 100,000 of the chips, Intel began receiving reports of problems. As with any extremely complex product, some rate of failures is expected. PC makers and chip makers track failure rates and their proximate causes. As more failures took place, engineers began to suspect the Cougar Point chip. Although the chip had passed quality assurance and reliability testing, Intel began retesting the chips with intensified stress.
Intel discovered that a design update had introduced a tiny engineering defect into a key transistor that supported four of the chip’s six communications channels used in some models of PCs and laptops. One of the transistor’s microscopic layers was a little too thin and could fail over time. “On day one or two of using a device with the chip, you won’t see a problem,” said Chuck Mulloy, corporate communications director for Intel. “But two or three years out, we’re seeing degradation in the circuit on ports 2 through 5. We’re seeing a failure rate in approximately 5 to 20 percent of chips over a two- or three-year period, which is unacceptable for us.”
On January 31, 2011, Intel announced the flaw, shut down deliveries of the defective chip, and started a recall. By the time Intel had traced the defect and comprehended its seriousness, it had shipped eight million Cougar Point chips. More than a dozen OEMs had to halt production of affected models and offer some sort of program of refunds, exchanges, or other remediation for customers who bought computers with the flawed chips. Intel initially estimated that the flaw cost it $300 million in lost revenue and $700 million in added costs for replacing the flawed chips. But Intel’s fast response completely mitigated the revenue impact and halved the expected costs.
Events such as product design defects, manufacturing errors, and contamination can create delayed consequences in product performance. In such cases, the effects of the defect may not be readily apparent until the product reaches consumers’ hands and is put to use for some time. Moreover, in consumers’ hands, products can be used in ways the manufacturer never envisioned and which reveal a safety issue. Such events spawn after-the-fact disruptions, which mean the detection time is negative.
The impact of a defective part or product grows worse with each added day. The higher the level of inventory spread across the supply chain when a defect is finally caught, the more defective units must be scrapped, returned, replaced, or reworked. When a flaw is discovered, the affected products already in customers’ hands have to be recalled, and the finished goods inventory in retail stores and warehouses has to be returned and fixed. The lower the inventories on the shelves of stores and warehouses, the lower the total costs of sending those units back and repairing the defect. Thus, just-in-time production, make-to-order, postponement, and lean supply chain processes all reduce the consequences of these negative-detection-time events.
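The inventory argument can be made concrete with back-of-the-envelope arithmetic: total remediation cost is roughly the number of defective units at each supply chain stage times the per-unit cost of fixing them at that stage. The figures below are invented purely for illustration.

```python
def recall_exposure(stages):
    """Total remediation cost, given (units, per_unit_cost) per stage.

    Stages might be units in customers' hands ("field"), in retail
    stores, and in warehouses; field units are typically the most
    expensive to remediate.
    """
    return sum(units * cost for units, cost in stages.values())

# Same installed base in the field; only the pipeline inventory differs.
full_pipeline = {"field": (100_000, 50.0), "retail": (40_000, 20.0), "warehouse": (60_000, 10.0)}
lean_pipeline = {"field": (100_000, 50.0), "retail": (5_000, 20.0), "warehouse": (2_000, 10.0)}
```

Running the comparison shows the lean pipeline shaving the recall bill by the cost of the avoided in-channel units, which is the point of the paragraph above.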
The Race to Trace
In August 2004, a German clay mine sent a load of marly clay to a McCain potato processing plant in Holland. Unbeknownst to either the clay company or the potato processing company, the clay was contaminated with dioxin, a highly regulated carcinogen. McCain’s plant made a watery slurry with the clay and used it to separate low-quality potatoes (which float in the muddy mixture) from denser, high-quality potatoes. Fortunately, the dioxin did not contaminate the processed potatoes, which were used to make French fries and other snacks. Unfortunately, the dioxin did contaminate the potato peels that were converted into animal feed.
Not until October did a routine test of the milk at a Dutch farm reveal high levels of dioxin. Initially, the authorities suspected a faulty furnace as the cause, but further investigation finally uncovered the true cause. “On Nov. 2, 2004, it was confirmed that the potato industry by-product had been contaminated by marly clay used in the washing and sorting process,” said Dutch agriculture minister Cees Veerman in a letter to his parliament.
By the time authorities traced the source of the dioxin to the potato peels, contaminated peels had been fed to animals at more than 200 farms. Fortunately, the EU’s food traceability rules include a “one step forward and one step back” provision for all human food and animal feed companies. That capability enabled the authorities to trace all the customers of the tainted peels to animal food processors in the Netherlands, Belgium, France, and Germany and on to the farms that may have received the poisonous peels.
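The EU’s “one step forward and one step back” records effectively form a directed graph of shipments, and tracing is a graph traversal: walk the “shipped-to” edges forward from a contaminated node, or the “received-from” edges backward. A minimal sketch, with hypothetical party names standing in for the real Dutch case:

```python
def trace(start, edges):
    """Walk one-step shipment records and return every party reachable
    from `start` (forward if edges map seller -> buyers, backward if
    they map buyer -> sellers)."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Hypothetical forward ("one step forward") records: who shipped to whom.
shipped_to = {
    "clay mine": ["potato plant"],
    "potato plant": ["feed maker A", "feed maker B"],
    "feed maker A": ["farm 1", "farm 2"],
    "feed maker B": ["farm 3"],
}
```

Because each company only records its immediate neighbors, the full downstream exposure (every farm that may have received tainted feed) emerges only by chaining the records, which is exactly what the traversal does.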
Rapid detection and tracing in both directions prevented any dioxin-tainted milk from reaching consumers. Yet detecting the contamination after the fact was cold comfort to the farmers who were forced to destroy milk or animals. Ironically, the reason McCain began using clay in the separation process was that a previous salt-water process had been outlawed for environmental reasons.
When people get food poisoning, health authorities look for commonalities among the victims—did they all eat particular foods of particular brands or from particular restaurants? These analyses take time, delaying the identification of the cause and allowing more cases to occur. To this end, the US Centers for Disease Control and Prevention developed FoodNet, a joint program with 10 states and the US Agriculture Department, to conduct active surveillance of laboratory-confirmed foodborne infections in order to accelerate the detection process.
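The commonality search at the heart of such outbreak investigations can be sketched as counting food exposures across confirmed cases and flagging items reported by nearly all of them. This is a toy version; real epidemiological work also compares cases against healthy controls to rule out foods everyone eats.

```python
from collections import Counter

def common_exposures(case_histories, min_share=0.8):
    """Return the foods reported by at least `min_share` of the cases.

    `case_histories` is a list of per-victim food lists; `min_share`
    (here an arbitrary 80 percent) is the flagging threshold.
    """
    n = len(case_histories)
    counts = Counter(food for history in case_histories for food in set(history))
    return {food for food, c in counts.items() if c / n >= min_share}
```

With three case histories that share only spinach, the function isolates spinach as the candidate vehicle; the more cases accumulate, the sharper the signal, which is why faster case reporting (the goal of FoodNet) shortens detection time.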
Other types of events can have negative detection time, too. These include cyber security breaches, embezzlement, IP thefts, disruptive innovation, and CSR risks. If the effects aren’t readily visible in the product (e.g., the use of child labor) or the causes aren’t immediately obvious (e.g., a mysterious pattern of illnesses), then detection can be delayed. And the greater the delay in detecting effects and tracing causes, the greater the magnitude of the disruption.
The Costs of Inadequate Detection
Even when a company knows of a problem, corporate culture, financial pressures, and wrong incentives can prevent quick actions that would limit the damage. In other cases, legal liability issues may prevent a company from taking action, even when it is not at fault. For example, when two employees of Domino’s Pizza created five YouTube videos in 2009 showing them defiling sandwiches (as a prank), the videos went viral and were viewed by over a million people. Although crisis communications experts counsel companies to take immediate responsibility, such a strategy would have exposed the company to liability. Instead, Domino’s crafted its own viral video response, but it took 24 hours to craft, leaving the social media world abuzz with speculation and making the brand’s recovery longer and more difficult.
Chevrolet Cobalt cars circa 2005 used Delphi ignition switches that were easier to turn than GM had initially wanted. These so-called low-torque switches had a problem. If the driver had a lot of keys on their keychain and went over a big enough bump or hit the keys with their leg, the torque on the key could suddenly jostle the ignition switch from “run” to the “accessory” position. Drivers experienced these events as unexpected stalls, and both GM and the National Highway Traffic Safety Administration (NHTSA) received complaints about the car.
When GM became aware of this cause of stalling, it discussed various options, including redesigning the ignition switch. At that point, the issue was seen as a customer satisfaction issue on the low-margin Cobalt, not a safety issue, and thus not worth solving through more aggressive steps such as a more expensive ignition switch. Instead, GM created a key insert that reduced the torque the key ring exerted on the key and, in December 2005, distributed a technical service bulletin to dealers describing the issue and the key insert and suggesting that dealers tell customers not to load too much weight on their key rings. GM did modify the ignition switch design a couple of years later, using an improved, higher-torque switch for the 2007 model year.
What GM’s engineers and even federal safety officials didn’t realize at the time was that the defect had more insidious side effects. Although many of the car’s electronic systems remained “on” when the ignition switch was in the “accessory” position, the airbag system was de-energized. This was an intentional safety feature to reduce injuries from accidental deployment of airbags in parked cars. But the side effect turned the problem of the mis-turned key into a hidden safety problem. If the ignition switch was jostled into the “accessory” position and then the car crashed, the air bags would not deploy.
In 2007, a Wisconsin state trooper reported on an off-road crash of a 2005 Cobalt that killed two teenagers. The state trooper discovered that the ignition was in the “accessory” position and the air bags had not deployed. The trooper even cited GM’s technical service bulletin as the likely cause of the nondeployment. The trooper’s report went to both GM and the NHTSA, but neither organization realized the implications of that report or subsequent reports of a similar nature.
At many points in both GM’s and NHTSA’s investigations, key data lay unnoticed in the disparate databases and reports related to automobile performance, defects, complaints, and crashes. GM investigators and attorneys reviewing Cobalt cases were unaware of the state trooper’s report for years. At least two investigations by the Office of Defects Investigation at NHTSA found no correlation between the crashes and the failure of air bags to deploy. The complexity of airbag deployment algorithms, the potential for sensor anomalies, the potential for crash damage preventing a deployment, and the off-road nature of many of the crashes made it hard to connect all the dots and easy to explain away the various accidents.
Eventually, the “dots were connected” as a result of a Georgia attorney who sued GM on behalf of a dead driver. The company then recalled the 780,000 cars with low-torque ignition switches in early 2014. By the time of the recall, the defect was linked to at least 13 deaths (according to GM) or as many as 74 deaths (according to Reuters).
The issue sparked congressional investigations of both GM and NHTSA, yet no evidence of a cover-up was found. The matter led to a broader examination of ignition switches and more recalls. Ultimately, GM recalled about 14 million vehicles in 2014 as a result of potentially defective ignition switches. Chrysler had two recalls totaling 1.2 million vehicles with similar problems, although no deaths were linked to Chrysler’s ignition switches. The recalls cost GM $1.2 billion in addition to the costs of lawsuit judgments and fines. Between the first week in March 2014, when publicity broke wide open on the GM faulty ignition switches, and the first week in April, the company’s market value declined by nearly 8 percent.
Mapping the Happenings: Where Are the Links in Your Supply Chain?
Weather, earthquakes, social unrest, electrical blackouts, and government regulations all have a strong geographic element. Mapping the facilities of the company and suppliers is a prerequisite to detecting disruptions linked to geographic causes. Chapter 7 noted that companies like Cisco determine the locations of key suppliers to assess supplier risks. These companies map their Tier 1 supply base using BOM, ERP, and other data. Such location data then feeds into incident monitoring systems.
The Pressure to Peer into the Tiers
Conflict minerals regulations and traceability regulations are pushing more companies toward mapping at least some parts of their supply chains to greater depths. For example, Flextronics and many other electronics companies are using a standard template developed by the Electronic Industry Citizenship Coalition (EICC) and the Global e-Sustainability Initiative (GeSI) for reporting the use of conflict minerals as well as suppliers’ due diligence on tracing the sourcing of conflict minerals. That template essentially encourages each supplier to get its own suppliers to fill out the template, too, cascading the analysis all the way to the smelter level and beyond.
Traceability regulations affect many industries. Although conflict metals such as tantalum, tin, gold, and tungsten would seem to be restricted to hard-goods products such as electronics and automotive, even apparel companies may be subject to conflict mineral regulations owing to the use of these metals in zippers, rivets, fasteners, glittery materials, belt buckles, dyes, and jewelry. Other industries face analogous traceability rules on other types of commodities. EU Timber Regulations target illegal logging, affecting the supply chains of construction, furniture, office supplies, and packaging companies. FSMA (the US Food Safety Modernization Act of 2010) includes a mandate that the US FDA create traceability requirements for certain categories of food products, and California’s drug pedigree law, set to take effect in 2015, foreshadows traceability regulation in the pharmaceutical industry.
There’s an App for Mapping and Detection
Mapping of suppliers and their deeper tiers remains a challenge because of the dynamic nature of supply chains and the proprietary nature of each supplier’s relationships with its partners. Moreover, as more companies attempt to map their supply chains, suppliers face administrative costs for responding to multiple requests for information.
Resilinc Inc. of Fremont, CA, exemplifies a new generation of supply chain software and services companies addressing these mapping issues. In 2005, Bindiya Vakil graduated with a master’s degree from the supply chain management program at the MIT Center for Transportation and Logistics and went to work managing a program in supply chain risk at Cisco. Five years later, she left to found Resilinc. Her husband, Sumit (also a graduate of the same MIT program) quit his job and joined his wife, first as a software developer and then as the chief technology officer.
Resilinc surveys a client company’s suppliers to map them and keeps suppliers’ data secure. The surveys cover risk management issues such as supplier facility locations, subsupplier locations, BCP, recovery times, emergency contact data, conflict minerals, and other concerns. Resilinc uses the client’s bill-of-material and value-at-risk data to cross-reference parts with mapped locations and identify high-risk parts. The software uses data on the locations producing each part, the parts in each product, and the financial contributions of each product to estimate the value-at-risk of each supplier location via a methodology like that described in chapter 3.
To support real-time response, Resilinc scans several event data sources for potential disruptions. The company filters out non-supply-chain disruptions (e.g., residential house fires) and then cross-compares potential disruptions with the known facilities of mapped suppliers. If an event potentially affects a supplier, and thus one or more of Resilinc’s clients, the company determines which parts and products may be affected, as well as the potential value-at-risk, and sends an alert about the event to each affected client. During the 2011 Thailand floods, Resilinc helped Flextronics gain about a week’s warning regarding the threat posed by the rising waters.
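The cross-referencing step described above can be approximated with three lookup tables: which parts each supplier site makes, which products use each part, and what revenue each product contributes. A site’s value-at-risk is then, as a single-source worst case, the revenue of every product touching that site. All names and figures below are invented for illustration and are not Resilinc’s actual data model.

```python
# Hypothetical mapping data: site -> parts, part -> products, product -> revenue.
site_parts = {
    "supplier-bangkok": {"capacitor-7", "relay-2"},
    "supplier-penang": {"relay-2"},
}
part_products = {
    "capacitor-7": {"router-x"},
    "relay-2": {"switch-y"},
}
product_revenue = {"router-x": 120.0, "switch-y": 80.0}  # $M per quarter

def value_at_risk(site):
    """Quarterly revenue exposed if `site` goes down, assuming no
    alternate source for any of its parts (an upper bound)."""
    exposed_products = set()
    for part in site_parts.get(site, ()):
        exposed_products |= part_products.get(part, set())
    return sum(product_revenue[p] for p in exposed_products)
```

An alert for a flood near the hypothetical Bangkok site would thus carry a larger value-at-risk figure than one near Penang, since the Bangkok site touches both products; real systems refine this bound with recovery times and alternate-source data.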
Other companies offering mapping and detection software (and related consulting services) include Razient Inc. of Miami, FL, and MetricStream of Palo Alto, CA. Several companies providing supply chain event management applications—including Trade Merit Inc., CDC Software, and Manhattan Associates—have also geared their offerings to risk management. In addition, many consulting organizations have developed supply chain risk management practices, assisting companies in assessing risks and developing prevention and mitigation measures. Examples include PricewaterhouseCoopers, JLT Specialty Limited in the UK, Marsh Risk Consulting, Capitol Risk Concepts Limited, LMI, and scores of others.
Some companies, such as IBM, Cisco, and ATMI, created in-house supplier mapping applications. However, third-party services such as Resilinc and its competitors reduce the costs of supplier mapping and updating. The reason is that they gather their information primarily through suppliers’ questionnaires. Thus, once a supplier fills out a questionnaire, the anonymized information can be used for other customers of that supplier because most suppliers serve multiple industry players. Such a “network effect” reduces the costs of information collection as well as the suppliers’ compliance efforts. Similar information-gathering efforts and mapping are also used for CSR applications when suppliers have to comply with codes-of-conduct of their customers.
Supplier Monitoring
With the general shift from local production and vertical integration strategies to globalization and outsourcing comes the need to monitor a global supply base. This monitoring goes beyond the kinds of geographic mapping and incident detection described in the previous section. Companies concerned about supplier bankruptcies, failures in quality, changes in supplier business strategy, and corporate social responsibility try to detect potential problems through comprehensive supplier monitoring.
Watching the Warning Signs
To create a list of warning signs, Boston Scientific queried its materials employees, manufacturing people, outside contractors, and accounts payable staff—everyone who interacted with the supply base. Managers created a list of 20 warning signs and then trained employees to watch for these signs as they visited or interacted with suppliers.
Some of the most important signs included financial telltales such as failure to prepare timely financial reports, multiple adjustments to annual reports, frequently renegotiated banking covenants, deteriorating working capital ratios, and lengthening accounts payable (or check holding). Industry participants in a 2009 MIT supply chain conference on the financial crisis reported, however, that many sources of financial data don’t provide timely detection. Financial data may be infrequently collected and is a lagging indicator because it reflects the supplier’s previous-year or previous-quarter sales, profits, and debts. The conference participants cited examples of abrupt bankruptcies among seemingly sound suppliers. To collect more timely in-depth financial data, companies wrote contracts that mandated supplier cooperation with audits and timely reporting of key financial metrics.
Another option is monitoring news aggregation services such as LexisNexis for indicators of business health. For example, in the two years prior to the bankruptcy of the British retailer Woolworths, the service reported 4,400 news items about the company containing the terms "going into administration," "company strategy," "redundancies and dismissals," and "corporate restructuring"—all clearly indicating turmoil. The number of these news reports swelled six months before the bankruptcy in November 2008. Similarly, the service reported 15,000 news articles mentioning Kodak with the terms "insolvency," "Chapter 11," "law and legal systems," and "spikes in divestitures" in the two years leading up to Kodak's bankruptcy. The frequency of these kinds of articles surged in the final months before its bankruptcy filing in January 2012.
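A crude version of this news-frequency monitoring is a surge detector: count distress-term mentions per month and flag any month that far exceeds the trailing average. The window length and threshold factor below are illustrative assumptions, not parameters of any actual monitoring service.

```python
def flag_surges(monthly_counts, window=6, factor=2.0):
    """Return the indices of months whose distress-term mention count
    exceeds `factor` times the trailing `window`-month average."""
    flags = []
    for i in range(window, len(monthly_counts)):
        baseline = sum(monthly_counts[i - window:i]) / window
        if baseline > 0 and monthly_counts[i] > factor * baseline:
            flags.append(i)
    return flags
```

Run over a two-year series of monthly "going into administration" counts, a detector like this would have fired in the months when Woolworths coverage swelled, well before the filing itself.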
Companies can also watch for operational problems at critical suppliers. These include high employee turnover, especially in key positions; failed projects such as acquisitions or new product launches; operating losses and lack of capital investments; and so on. Companies can monitor operational warning signs such as late or missed deliveries, incomplete shipments, quality issues, billing and invoicing errors, and carrier selection errors. These may be signs of corner-cutting, of layoffs, and that the supplier’s management is preoccupied with issues other than customer service. “It’s all in the data,” said one presenter during a 2012 conference at MIT. By monitoring supplier quality carefully, companies can get three to five months’ warning of an impending failure and can take steps either to help the supplier or to find alternative suppliers.
The frequency of formal reviews of supplier risks can vary from monthly to annually, depending on the company and the risk profile of the supplier. During the financial crisis, for example, many companies increased the frequency of reviews, especially of weaker suppliers. “The frequency with which we identify the risk depends on what we’ve classified as risk. For example, we monitor materials on a daily basis, but if we’re talking about business continuity planning (BCP), we assess risk on a yearly basis and review it every six months,” said Frank Schaapveld, senior director supply chain Europe, Middle East, and Asia for Medtronic.
Trust-But-Verify Monitoring
Detection at a distance has its limits. Surveys and third-party data go only so far in detecting incipient disruptions or disruption-prone suppliers. EMC Corporation uses a "trust but verify" approach to detect emerging risks with suppliers. Trevor Schick, vice president of Global SCM and chief procurement officer at EMC, said that the company deploys 50 people in Asia (where its manufacturing is done) to focus on quality and to identify red flags early. These people visit suppliers, walk the lines, see the warehouses, and speak to the engineers and factory workers. They use a checklist of warning signs such as quality problems, capacity reductions, stopped lines, and excessive inventory. If a supplier is reluctant to let EMC people in the door, then that is a warning sign in itself.
Other companies use similar methods to detect risks in the supply base, but with a focus that depends on the types of risks most salient in their respective industries. For example, Ed Rodricks, general manager, supply chain at Shaw’s Supermarkets, said that the grocer’s field buyers pay attention to food handling and product quality standards when they visit farms and contract manufacturers. Apparel retailer The Limited, on the other hand, inspects apparel suppliers with an emphasis on working conditions and workplace safety to avoid the use of “sweatshops” or child labor. Ikea, the Swedish furniture giant, employs 80 auditors, performing about 1,200 audits per year at supplier locations, most of which are based on unannounced visits. The audits are focused on environmental sustainability and working conditions. As with EMC, refusal to let the auditors in is considered a violation, triggering an immediate stoppage of deliveries from the supplier. Similarly, chemical company BASF conducts onsite audits of high-risk chemical suppliers to assess environmental, health, and safety issues.
Those Dastardly Detection Defeaters
To monitor the quality of raw materials supplies, many companies use routine laboratory tests to detect low-quality, diluted, or adulterated materials. For example, cows’ milk and wheat gluten are tested for protein levels. But the protein test isn’t perfect, and unscrupulous suppliers can fool the test by adding melamine—a cheap industrial chemical used in plastics, insulation, and fire retardants.
Unfortunately, melamine causes kidney failure if consumed in large amounts. In 2007, an estimated 14 dogs and cats died in the United States from melamine-adulterated gluten used in pet food. In 2008, six infants died and 300,000 were sickened in China as a result of melamine-laced infant formula. The episode forced regulators and companies to deploy more expensive tests to detect the protein-mimicking melamine. Similar problems with “undetectable” counterfeits were discovered with the anticoagulant drug heparin. On March 19, 2008, the US FDA reported that the contaminant found was “likely made in China from animal cartilage, chemically altered to act like heparin, and added intentionally to batches of the drug’s active ingredient.”
Chinese manufacturing issues were also identified as the cause of the deaths of children in Panama and Haiti who took drugs that were supposed to contain glycerin but instead contained diethylene glycol, a syrupy poison used as antifreeze. The Chinese authorities and the Chinese companies involved refused to cooperate with the FDA and stonewalled its investigation.
Sometimes a supplier’s efforts to elude auditors are simple and crude. Ikea’s senior auditor, Kelly Deng, has seven years’ experience, and the typical auditor in her office has been on the job for five years. The experience helps her spot telltale signs of violations during her visits—such as a worker hurrying by with a stack of papers. Factory managers may falsify records, she said, and send a worker to take the accurate records out of the building.
Accelerating Detection and Response
Fast detection gives companies time for avoiding impact, preparing for response, or mitigating the consequences. To accelerate detection, companies can collect data more often and from closer to the cause of the disruption. Some supply chain risk services companies are using data mining to predict disruptions even before they happen. For example, Verisk Analytics uses data science to find possible correlations between various incidents and impending geopolitical events that may disrupt businesses.
When Cisco started its monitoring program, it specified an eight-hour response time; then through process improvements such as stationing team members around the globe and using follow-the-sun operations, Cisco was able to cut response time to less than two hours. For example, when a large, 7.8 magnitude earthquake hit Sichuan in Central China near midnight on a Sunday (Cisco headquarters’ time), Cisco used team members in Asia for the immediate incident response and was able to shift suppliers and reschedule orders by the end of the following Monday, minimizing the impact on key customers.
Clearing the Tracks with Better Data on Storm Tracks
In mid-April 2012, a massive low-pressure system formed over the central plains of the United States. Its strength prompted the National Weather Service’s Storm Prediction Center to issue an unusual multiday advance warning. It predicted a 60 percent chance of severe weather for north-central Oklahoma and south-central Kansas for the afternoon and evening of Friday the 13th. Strong convection and the build-up of ominous clouds promised a busy day for weather forecasters and storm chasers. Government and private weather services watched the skies for the threat of high winds, severe rain, large hail, and tornados. At 3:59 pm, as if on cue, a tornado touched down southwest of Norman, Oklahoma.
Trains and tornados don’t mix, and BNSF Railway has thousands of miles of track crisscrossing “tornado alley” in the central states of the United States. BNSF subscribes to AccuWeather, a company that detects, tracks, forecasts, and warns its clients of impending severe weather. AccuWeather uses data from advanced high-resolution Doppler radars to help spot tornados accurately and quickly. AccuWeather detected the formation of the Norman, Oklahoma, tornado 30 minutes before it touched the ground. Using data on prevailing winds and models for tornado behavior, the company forecast the likely trajectory of the twister and warned BNSF that a tornado might cross BNSF’s tracks in Norman sometime between 3:50 and 4:30 pm. By the time the tornado thundered across BNSF’s tracks at about 4:10 pm, the railroad had had 40 minutes to clear the area of trains.
From an Internet of Things to an Internet of Warning Dings
The declining cost and growing use of technology in the supply chain plays an increasingly important role in managing supply chain risks. For example, FedEx’s SenseAware is a flat, hand-sized, red-and-white device that shippers can slip into a box, pallet, or container. The device contains a battery-powered GPS receiver, temperature monitor, pressure monitor, and light sensor. It also contains a cellular data network circuit that can connect to the same ubiquitous cell phone networks used by mobile phones. Periodically, the device “phones home” with data about the package’s location and status.
With these data, the shipper, carrier, and customer can detect problems with a package while in transit. The GPS location data can detect misrouting or theft, confirm delivery, and trace a lost package. The temperature data can ensure that temperature-sensitive shipments have not been damaged by freezing or heat during the trip. The light sensor can detect unauthorized access to the cargo (e.g., theft, tampering, counterfeiting, contamination), and damage to light-sensitive cargos, as well as unexpected delays if a time-critical package hasn’t been opened.
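The in-transit checks described above amount to comparing each telemetry report against thresholds. Below is a minimal sketch with made-up field names and limits (e.g., a 2–8°C cold-chain range), offered as an illustration rather than FedEx's actual SenseAware logic.

```python
def check_reading(reading, temp_range=(2.0, 8.0), lux_threshold=5.0):
    """Flag exceptions in one telemetry report.

    `reading` is a dict with hypothetical fields:
      temp_c          - temperature in Celsius
      lux             - ambient light level (light means the box is open)
      authorized_open - True if an opening was expected (e.g., at delivery)
    """
    alerts = []
    lo, hi = temp_range
    if not lo <= reading["temp_c"] <= hi:
        alerts.append("temperature excursion")
    if reading["lux"] > lux_threshold and not reading["authorized_open"]:
        alerts.append("unauthorized access (light detected)")
    return alerts
```

Each periodic "phone home" would run a check like this, so a frozen vaccine shipment or a box opened mid-route raises an alert while the package is still in transit, not after delivery.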
SenseAware is but one example of a broad trend of the “Internet of Things,” which refers to a growing use of low-cost computing, sensors, wireless data, and Internet connectivity to provide enhanced situational awareness and control. For example, Schneider National, a large American truckload carrier, has GPS/cellular data tracking units on every last one of its 44,000 intermodal containers and van trailers. Tracking and communicating with a moving truck fleet leads to higher utilization, increased driver productivity and lifestyle improvements, fuel cost optimization, better customer service, and more accurate billing. The same sensors also improve freight security by detecting and tracking stolen trailers.
Since the 2005 publication of The Resilient Enterprise, cell phones (and cell phone towers) multiplied to serve 85 percent of the world’s population, the iPhone arrived, and cellular data networks went through four generations. With 1.4 billion users, smartphones provide an increasingly important data source for detecting and assessing disruptive events. Smartphones typically include a GPS, compass, and a camera, which lets in-the-field supply chain workers or ordinary citizens document and transmit geotagged pictures and data about facilities and events. Real-time damage reports accelerate detection of the extent of disruptions as well as help to assess response needs.
Supply Chain Control Towers
Airport control towers—with their all-weather ability to choreograph the intertwining movements of aircraft on the ground and in the air—provide a natural model for managing supply chains. A supply chain control tower is a central hub of technology, people, and processes that captures and uses supply chain data to enable better short- and long-term decision making. “You can respond much more quickly when your people, technology and systems are in a single location,” said Paul McDonald of Menlo Worldwide Logistics.
In 2009, Unilever established an internal organization, UltraLogistik, based in Poland. UltraLogistik operates as a control tower, managing all Unilever transport movements in Europe. Centralizing all transportation procurement and operations (using Oracle’s transport management system) yielded cost savings, reduced carbon footprint, and increased visibility, which translates into quick detection of problems. Beyond a single company, one of the main missions of the Dutch Institute for Advanced Logistics (DINALOG) is the “Cross Chain Control Center (4C).” The 4C vision is to coordinate and synchronize the flow of physical goods, information, and finance of several worldwide supply chains relevant to the Netherlands.
Although a supply chain control tower primarily serves day-to-day operations, it sits on the front line for detecting disruptions, handling incidents, and coordinating responses. In that capacity, the control tower is similar to a full-time emergency operations center in that its staff can be the first to notice telltale signs of looming significant disruptions such as unexpected supplier component shortages, problems in the flow of items (e.g., port closures or customs worker strikes), and accidents. It can then respond by rerouting flows, notifying customers, informing company facilities, and so forth.
Good Geofences Make Good Supply Chain Defenses
Dow Chemical tracks the location of rail tank cars using GPS trackers with cellular data connections. If a tank car deviates from its expected route or approaches a heavily populated area, Dow's systems automatically detect it and warn the company, which can then alert authorities of any potential danger. Dow Chemical's Railcar Shipment Visibility program is an example of geofencing: the creation of a virtual boundary around either a high-value or high-risk mobile asset, or a critical geographic area, along with ways of detecting when the item enters or exits the virtual boundary.
Geofencing can detect negative events such as theft, misrouting, or terrorist hijacking, as well as positive ones such as the arrival of the shipment in port or at the customer’s loading dock. Some companies participating in a 2012 MIT industry roundtable on risk management practices were working on combining shipment tracking with dynamic geofences around disruptive events to detect, for example, the real-time entry of an ocean shipment into the path of a hurricane.
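At its core, a geofence check asks whether a tracked point lies inside a boundary. The sketch below uses the standard ray-casting test for a polygon fence; it treats coordinates as planar, which is adequate only for small regions, and it is an illustration rather than Dow's actual implementation.

```python
def inside_geofence(point, fence):
    """Ray-casting point-in-polygon test.

    `point` is a (lat, lon) pair; `fence` is a list of (lat, lon) polygon
    vertices. Counts how many fence edges a ray from the point crosses:
    an odd count means the point is inside.
    """
    x, y = point
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's latitude... er, longitude
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

A tracking system would run this test on every position report: a tank car entering a fence drawn around a populated area, or a shipment entering a dynamic fence drawn around a hurricane's forecast path, flips the result and fires the alert.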
“Every citizen is a sensor,” said Brian Humphrey of the Los Angeles Fire Department. Six billion people (out of the estimated seven billion people in the world in 2014) had access to a mobile phone. In fact, more people have access to mobile phones than to working toilets. In addition, 1.7 billion people use social media such as Twitter, Facebook, Instagram, or other country-specific services. The USGS (United States Geological Survey) now monitors Twitter to detect earthquakes. “In some cases, it gives us a heads up that it happened before it can be detected by a seismic wave,” said Paul Earle, a USGS seismologist.
Meteorological and geophysical sensors for tornados, floods, tsunamis, and earthquakes offer only a crude proxy for detecting disruption. Just because a quake exceeded some number on the Richter scale doesn’t indicate which buildings, logistics infrastructure, or utilities were damaged. Social media channels can provide an informal, real-time damage assessment because the local population will naturally talk about what they felt, what they saw, and the problems in their location.
The United States Army says organizations should encourage people on the scene to send situation information via social media. During Superstorm Sandy, people posted as many as 36,000 storm-related photos per hour. Geotagging of these photos, a common feature built into smart phone cameras, provides the exact time and GPS coordinates of the image for accurate data on the location of the damage (or lack of damage). Services like SeeClickFix encourage citizens to report nonemergency problems to city governments such as potholes, damaged traffic signals, debris on roadways, and other problems, leading to more timely maintenance of the transportation fabric of the city.
Twitcident is a broader monitoring system that analyzes Twitter’s social media data stream to detect and monitor disruptive events such as fires, extreme weather, explosions, terrorist attacks, and industrial emergencies. Twitcident uses semantic analysis of messages and real-time filtering to automatically extract relevant information about incidents. The initial system is intended to help first responders, but it could be adapted for commercial use.
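A toy stand-in for the kind of filtering just described: keep only geotagged messages that are both near a watched location and mention an incident term. The keyword list, coordinates, and matching rule below are illustrative assumptions; a real system such as Twitcident applies far richer semantic analysis than bag-of-words matching.

```python
INCIDENT_TERMS = {"fire", "explosion", "flood", "evacuation"}  # illustrative list

def relevant_tweets(tweets, watch_center, radius_deg=0.5):
    """Filter a stream of (text, lat, lon) messages down to likely incident
    reports near `watch_center` (a (lat, lon) pair). The crude box test and
    keyword match stand in for real geo-indexing and semantic analysis."""
    clat, clon = watch_center
    hits = []
    for text, lat, lon in tweets:
        near = abs(lat - clat) <= radius_deg and abs(lon - clon) <= radius_deg
        if near and set(text.lower().split()) & INCIDENT_TERMS:
            hits.append(text)
    return hits
```

Run continuously over a live stream, a filter like this separates the handful of messages about a plant fire near a monitored facility from the millions of irrelevant posts produced every hour.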
After the April 15, 2013, Boston Marathon bombing, the FBI posted photos of the likely bombers on the Internet, unleashing a tsunami of responses. Within minutes, thousands of armchair detectives were scouring the Internet to identify the suspects. David Green, a marathon runner and Facebook user, helped by providing a high-resolution picture of the suspects. Within about 24 hours, one suspect was dead and one was in custody.
Social media data, however, should be used with care. The Internet crowd of terrorist hunters was not equipped to handle allegations with professional care and healthy skepticism. For example, Twitter became rife with rumors that a missing Indian-American student was one of the bombers, flashing the accusation across the Internet, to the dismay of the student and his parents. The rumor subsided only after NBC News contradicted the false reports.
Dell created its Social Media Listening Command Center as a means to detect and respond to problems big and small. The computer maker uses this “listening and responding” program for customer service and support, community-building, and topical discussions. Every day, thousands of Dell customers use Twitter, Facebook, and Dell.com for routine product support. Yet the company also monitors message trends to detect problems such as product defects, negative public relations drives, or an adverse shift in customers’ attitudes toward Dell and its products. The company tracks 22,000 mentions of Dell each day. “When you are embedding social media as a tool across virtually every aspect of the company to be used by employees as one of the ways they stay in touch with customers every day, it simply becomes part of how we do business. Listening is core to our company and our values,” said Manish Mehta, Dell’s vice president for social media and community.
Response-on-Warning: Faster than a Speeding Missile
Detection leads to alerts, and alerts lead to response. When militants in the Gaza Strip fire a missile at Israel, the time from launch to impact is very short. Most Israeli citizens live and work within range of Palestinian rockets. That includes Intel’s $3 billion Kiryat Gat chip fabrication factory with 3,000 workers, which lies a scant 45 seconds—as the missile flies—from potential launch sites.
Fortunately, radar in Israel can detect the attack and estimate the missile’s trajectory within one second and relay the signal to a central monitoring facility that then activates sirens and SMS messaging in the likely target zone. When the sirens blare, Intel’s people know what to do. Weekly, monthly, and annual drills have trained them to respond instantly to the warning. They rush to secure locations such as hardened bomb shelters formed from the plant’s staircases, reinforced concrete shelters spread in all open places, and hardened basements. (The system is also used to launch interceptor missiles as part of Israel’s “Iron Dome.”)
Many warning systems include “reverse 911” systems, which can notify wide populations in a plant or an urban area about an impending problem. The systems now rely on mobile technology to reach people wherever they are. The USGS tweets for quakes, and its Earthquake Notification Service sends email and text messages about quakes to all of its registered users.
Winning the Race between Data and Destruction
In 2003, OKI’s semiconductor factory suffered $15 million in damages and 30 days of lost production from two earthquakes near Sendai, Japan. The company then installed a system that used Japan’s new Earthquake Early Warning system. Earthquake warning systems can’t predict when or where a quake might start, but they can predict when and where the shockwave will go once a quake does start. An earthquake’s shockwave radiates through rock at about 3,000 to 6,000 miles per hour, but radio signals from a seismograph and government warning system travel at the speed of light, or 186,000 miles per second. In places like Japan, Mexico, and California, seismologists have deployed real-time warning systems using networks of detectors. For facilities located more than a few miles from the epicenter, the detection signal can arrive seconds or a few tens of seconds before the shaking.
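The arithmetic behind that head start is simple: the warning time a facility gets is the shockwave's travel time from the epicenter minus the time needed to detect the quake and broadcast the alert. The wave speed and processing latency below are illustrative figures chosen within the ranges the text mentions.

```python
def warning_seconds(distance_miles, wave_mph=4000.0, latency_s=3.0):
    """Approximate head start an early-warning alert gives a facility
    `distance_miles` from the epicenter: shockwave travel time minus
    detection/broadcast latency (both figures are illustrative)."""
    travel_s = distance_miles / wave_mph * 3600  # hours -> seconds
    return max(0.0, travel_s - latency_s)
```

At these assumed figures, a facility 100 miles from the epicenter gets roughly a minute and a half of warning, while a facility a few miles away gets essentially none—which matches the text's point that the signal wins the race only at a distance.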
Although a few seconds of warning seems useless in the event of a devastating quake, it can avert some of the consequential disasters. Organizations can use an early warning signal to bring elevators to a halt, close pipe valves carrying hazardous materials, park the heads of hard disk drives to reduce the chance of lost data, shut off heat sources, put industrial processes in safe mode, halt trains, stop traffic from driving on bridges, and alert people to seek shelter. When an 8.0-magnitude quake struck off the coast of Manzanillo, Mexico, officials in Mexico City were able to shut down the Metro system 50 seconds before the tremors arrived. After OKI installed an early warning system, two subsequent earthquakes caused only $200,000 in damage and a total of only eight days of downtime.
The same applies to tsunami warning systems, which can provide minutes or even hours of warning based on fast analysis of seismic activity and deep water sensing of the passing tsunami wave. Moreover, this principle applies across the tiers of a supply chain, in which data about disrupted deep-tier suppliers could flow much faster than the weeks it might take for the absence of shipped parts to be felt at the end of the chain. “Lower-tier suppliers that serve a number of markets often spot shifts in the economy early on—and can warn customers about them,” wrote Thomas Choi and Tom Linton. If the data can travel faster than the disruption can, then companies with good “listening” networks can detect disruptions early and prepare to respond before the disruption hits them.