When disaster strikes, companies’ first order of business is to help first responders. These responders could be firefighters and medical personnel, or they could be plant employees responsible for performing emergency procedures to prevent further damage, such as the release of harmful material into the environment or the loss of property. Simultaneously, or shortly thereafter, companies start the effort to minimize the business impact of the disruption and to recover as quickly as possible. Doing so requires nonroutine activities: putting special teams to work, creating ad hoc supply chains, communicating the unusual situation to stakeholders, and collaborating with others (even competitors). In terms of the “quadrant of events” discussed in chapter 2, these response activities are all aimed at reducing the impact of an event. (At this point, the probability of the event is 1; it already happened.)
Every year, an average of six hurricanes form in the Atlantic Ocean, although 2005 saw a record 15 hurricanes.1 Almost half of these, on average, are classified as major storms (category 3 or higher), which are deadly and can cause substantial damage. Even though each storm follows its own track, long-term statistics, short-term forecasting, and substantial experience allow companies to know how to respond to these events, even if the details vary from storm to storm.
Hurricane Katrina began as a tropical depression near the Bahamas. As the storm strengthened, The Procter & Gamble Company (P&G) started tracking the potential threat. With several facilities in the coastal southern United States, and millions of customers in the region, P&G watched carefully. When Katrina turned north toward Louisiana, the storm became a serious threat to P&G. Half of P&G’s coffee production and 20 percent of all coffee drunk in American homes were roasted, ground, and canned in the Folgers plant in Gentilly, Louisiana, just east of New Orleans. P&G also operates several other facilities in the same geographic area: the smaller Millstone coffee plant adjacent to Gentilly, a coffee storage operation, and the Lacombe coffee distribution center.
On August 25, 2005, four days before Katrina’s landfall, P&G activated its emergency preparations, even though at that point the storm was not predicted to hit New Orleans. This activation included moving product out of the region, getting backup data tapes, and preparing for a possible shutdown. Inventory from New Orleans was shipped out to Cincinnati. On August 27, two days before landfall, the hurricane turned north and veered toward New Orleans. In response, P&G shut down its New Orleans sites at 10 pm on Saturday night (August 27) and told employees to evacuate the city.2
On August 29, at 5:10 am, Hurricane Katrina made landfall in Louisiana. At 8:14 am, the US National Weather Service issued a flash flood warning for several parishes in New Orleans, citing a levee breach in the Industrial Canal. By the afternoon, three more levees ruptured. Within hours, the city of New Orleans flooded as the levee system failed catastrophically. Rising water surged 6–12 miles inland from the waterfront,3 flooding many low-lying districts, including Gentilly.4 The death toll reached 1,836, and property damage was estimated at $81 billion.5
Even as Katrina lashed the coast, P&G convened its crisis management team at the company’s Cincinnati headquarters. The team had two priorities: to support P&G’s employees and to save the business. P&G needed to restore its supply chain before October because the months of October to December are peak times for consumer coffee buying.
When the storm moved out of the area, P&G’s team moved in. They set up a command center in Baton Rouge, 80 miles from New Orleans, which was as close as they could get to the affected area. The crisis team oversaw recovery efforts, with team members working on a two-weeks-on, two-weeks-off cycle. Baton Rouge also became P&G’s logistical staging area for construction materials, recovery supplies, and generators.
P&G’s first task was to assess the damage. But this was initially impossible: all roads into the area were impassable, and authorities were prohibiting entry to the disaster area. P&G didn’t even know when the roads would open or when the company would be allowed access to its property. Rather than wait for a government-provided assessment of the damage, P&G hired a helicopter just after the storm to take hundreds of aerial photographs of its facilities, the surrounding area, and New Orleans’s damaged roads, railroads, and port infrastructure.
The photos revealed a mixed picture. The first bit of good news was that P&G’s plant was located on high ground and was protected by a long railroad embankment, which saved it from the storm surge and the flooding from the breached levees. The plant itself had suffered only minor wind damage.
The bad news was the dire state of the infrastructure. All the surrounding roads were flooded and covered with debris from the high winds and flood waters; P&G’s team would have no access by road to the plant for 12 days. Railways suffered damage to tracks as well as to rolling stock. Almost every rail line was out of service for months. One-third of the port of New Orleans, through which P&G imported its coffee beans, was destroyed. Beyond the damage to logistical infrastructure, P&G faced the absence of utilities: power and natural gas supplies were out for two weeks, and phones were out for several weeks.
One of the biggest uncertainties and biggest challenges faced by P&G was tracing the fates of its employees. After the storm, P&G tried to locate its employees and ensure their safety. The task was difficult given the complete failure of the phone system in New Orleans. P&G used local broadcast systems and its own phone networks, asking people to call into its toll-free consumer relations hotline in Cincinnati. Fortunately, no P&G employees lost their lives in the storm. Yet it took P&G three nerve-wracking weeks to learn that everyone was alive, because so many employees had evacuated out of the state.
In addition to resurrecting the Gentilly coffee plant, P&G helped employees solve three major issues that workers faced in their own lives. First, the company promised continuity of pay for its employees, regardless of the reopening date of the plant, so workers would know that their livelihoods were protected. In addition, the company offered interest-free loans, with approval in less than 24 hours, to employees who needed emergency funds. Second, it soon became clear that the storm had traumatized on-the-ground leaders as well as employees, so P&G brought in leaders from unaffected areas to help, and provided counseling to employees. Third, most employees lost their homes in the flood. In fact, most houses in the New Orleans area were unsuitable for habitation because of the flooding. As a result, there was a lack of housing and hotel accommodations. The company needed shelter for its employees and for the construction workers at the plant who were repairing the facility. The company looked at three options: chartering an anchored cruise ship, partnering with a hotel, or building a trailer village.
P&G’s solution was to build a trailer village, which offered the most flexibility for expansion during construction phases and post-construction when housing would be needed for employees only. Named Gentilly Village, it had 125 trailers that slept more than 500 people—employees and their families as well as contractors. The village included laundry and recreation facilities. To accommodate workers whose families had evacuated to other states, P&G provided money for them to visit their families twice a month and created a seven-days-on, seven-days-off work schedule that let plant employees have time to rebuild their lives after the disaster. To deal with the lack of access to groceries or restaurants, the company brought in its own kitchen and cafeteria, serving three meals a day and providing snacks around the clock—all free to employees and contractors working at the site.
One of the immediate challenges was getting potable water to the plant. Initially, the company arranged for 20 trucks to run in a continuous supply loop bringing water in from Baton Rouge. Then, because the plant uses 18,000 gallons of potable water per hour when operating at full capacity, P&G decided to dig a well—drilling 700 feet deep—to get the needed water. The decision to drill the well proved prescient because city water was not restored to the plant until mid-December. P&G kept using the well even after the municipal water system was restored because it was a less-costly source of fresh water.
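The decision to drill a well rather than rely on the truck loop can be motivated with a back-of-envelope calculation. Only the 18,000-gallon-per-hour demand figure comes from the text; the tanker capacity and round-trip time below are illustrative assumptions:

```python
import math

# Plant demand at full capacity, from the text.
DEMAND_GAL_PER_HR = 18_000
# Illustrative assumptions (not from the source):
TANKER_CAPACITY_GAL = 6_000   # a large water tanker
ROUND_TRIP_HR = 4.0           # Baton Rouge loop, including loading and unloading

# Deliveries required each hour to keep pace with demand.
deliveries_per_hr = DEMAND_GAL_PER_HR / TANKER_CAPACITY_GAL   # 3.0

# Each truck is tied up for a full round trip, so the fleet must cover
# every delivery slot occurring during one round-trip cycle.
trucks_needed = math.ceil(deliveries_per_hr * ROUND_TRIP_HR)
print(trucks_needed)  # 12
```

Under these assumed numbers, 12 of the 20 trucks would be occupied at all times just to meet steady-state demand, leaving little slack for breakdowns or delays, which helps explain why a dedicated well was the more robust (and ultimately cheaper) option.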
During the crisis, the recovery coordination plan was organized around three daily meetings (at 9:00 am, noon, and 6:00 pm) to review various facets of the recovery. The morning meeting covered business continuity issues, the midday meeting handled resource issues, and the end-of-day meeting tackled engineering work to ensure that all tasks were proceeding as planned.
Part of P&G’s response effort was focused on finding alternate sources of supply to make up for lost production during the time-to-recovery of the Gentilly plant. The challenges were to gain access to additional coffee production capacity, and to do it quickly. P&G wanted to avoid the literal white-space of empty store shelves and the potential that competitors could block P&G’s replacement of lost capacity by filling retailers’ shelves with their own brands, securing retail shelf space for the long term.
In its effort to secure emergency second sources, P&G temporarily changed its procedures for negotiating large supply deals and let procurement teams make decisions on the spot using existing competitive analysis to ensure that P&G was getting acceptable contract terms. The two guidelines for the new contracts were that the supply be a quality product and that P&G could still make a profit. P&G used a “should-cost” methodology to estimate a reasonable cost for a second-source supply. Having a precalculated cost estimate accelerated negotiations with suppliers and avoided bureaucratic delays, while not damaging profit margins with excessive payments for that needed capacity. If those guidelines were met, employees were empowered to commit to the contract to ensure that P&G product got to retailers’ shelves. The company was successful in locking up needed capacity in all but one case and was able to lock out competitors from taking P&G’s retail space.
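The “should-cost” approach can be sketched as a simple model: estimate a supplier’s direct cost drivers, add a fair margin, and use the result as a precomputed ceiling that on-the-spot negotiators can check offers against. The function name, cost figures, and 10 percent margin below are hypothetical illustrations, not P&G’s actual numbers or method:

```python
def should_cost(materials: float, labor: float, overhead: float,
                margin_rate: float = 0.10) -> float:
    """Estimate a fair per-unit price: direct costs plus a reasonable supplier margin."""
    direct_cost = materials + labor + overhead
    return round(direct_cost * (1 + margin_rate), 2)

# Hypothetical per-case canning quote: if a supplier asks for more than
# this ceiling, the negotiating team knows the terms are off-market.
ceiling = should_cost(materials=4.50, labor=1.20, overhead=0.80)
print(ceiling)  # 7.15
```

The value of precomputing such an estimate is that the negotiation reduces to a single comparison, which is what let P&G’s procurement teams commit on the spot without waiting for headquarters approval.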
In addition to the damage caused by the storm, local authorities caused further slowdowns. The area was put under martial law with dusk-to-dawn curfews, hampering P&G’s efforts to travel to the facility and to move needed supplies. Fortunately, P&G was able to sort through the bureaucratic issues because many of its local managers had previously established relationships with local government officials. This helped speed the process of working with the four different agencies involved in infrastructure issues. As a result of these relationships, P&G was even able to get a police escort through the road checkpoints that would otherwise have added hours of delay while officials confirmed the legitimacy of the team’s permits.
P&G’s New Orleans plant restarted production on September 17, three weeks after the hurricane. Each operation at the plant that was brought back online went through quality assurance, and the FDA audited each operation within two days. By October, the plant was operating at full capacity. The recovery teams and top management celebrated each successful step along the way. This recognition helped keep the teams motivated to continue working seven days a week for six weeks.
In the end, P&G was the first company back in operation after the hurricane. The governor of the state called the company a role model, and President George W. Bush visited the facility on September 20.7 From a business standpoint, P&G shipped 96 percent of the previous year’s volume in 2005 despite the disruption, and its first quarter of 2006 brought record volumes, with business back stronger than ever.
As was the case with P&G during Hurricane Katrina, disruptions force companies to create ad hoc supply chains—temporary flows of products, recovery materials, and personnel that are unlike the company’s workaday supply chain. These ad hoc supply chains might reroute product around damaged nodes of the supply network, use different modes of transportation instead of disrupted ones, invoke emergency procurement procedures, or connect secondary suppliers into the network.
Just seven weeks before Christmas, on Tuesday, November 1, 2005, UK-based clothing retailer Primark Stores Limited suffered a devastating warehouse fire. Some £50 million in apparel—half of its stock—went up in flames as TNT Fashion Group’s 440,000-square-foot warehouse at Magna Park near Lutterworth and the adjacent offices burned.8 Fire is often a significant disruption, and it’s not a rare or beyond-the-pale hazard, which places it in the high-impact/high-likelihood category of risks (see figure 2.1). “This is as bad as it gets, but the disaster recovery plan is designed to cope with this. The loss of a key distribution center is a top-10 risk on our list,” said a Primark spokesperson.9
“Our first priority was to get the warehouse management system back up. The second priority was to provide the necessary equipment for the new warehouse location,” said Jim Flood, IT director at TNT Fashion Group, which ran the warehouse for Primark.10 “We had the alternative site operational by [Wednesday at] 8:30 am on the same business park,” the Primark spokesperson said.11 TNT Fashion Group invoked its recovery contracts with its IT suppliers to deliver equipment to the new location. By Wednesday afternoon, TNT was uploading Primark’s data onto the datacenter-based warehouse management system from daily backup tapes. By Thursday, TNT had installed the hardware to run the warehouse management system locally.12
The company took three steps that weren’t part of its usual supply chain strategy. First, the company rushed extra orders to suppliers to replace the lost inventory.13 Second, the company chartered a giant Russian six-engine Antonov-225 aircraft to bring in replacement stock by air from Shanghai, Hong Kong, and Dhaka.14 Third, the company rerouted shipments directly to stores to reduce distribution delays.15 Accelerating the supply chain helped ensure that Primark had good 2005 holiday-season sales.
When Hurricane Sandy roared through New York City, the storm surge pushed the Atlantic Ocean into lower Manhattan and flooded underground utility tunnels with corrosive seawater. Water six feet deep surrounded Verizon’s main switching office in lower Manhattan and flooded the ground floor and four subbasements full of equipment. It took Verizon a week to pump out almost one billion gallons of water. In the areas surrounding the city, high winds downed trees, power lines, and telecommunications towers.
Verizon faced a significant challenge in repairing flooded equipment boxes, downed lines, and damaged cell towers. The company brought in hundreds of trucks, hundreds of linemen, and tons of repair and replacement equipment that it had prestaged in the area. But that wasn’t all Verizon needed to supply its repair efforts. The disruption to New York City also curtailed fuel supplies through a confluence of three storm-related disruptions: filling stations had no power, the local gasoline refineries were down, and the port was closed. Verizon needed 50,000 gallons of fuel per day to run trucks and emergency backup generators around the area.
So in addition to bringing in all the needed telecommunications supplies for repairs, Verizon had to create a fuel supply chain. The company built 18 temporary fueling depots in the region using 1,000-gallon gasoline tanks and 500-gallon diesel tanks with pumps and safety features (such as spilled fuel retention berms and firefighting equipment). The company obtained special approvals from local officials to store and handle large amounts of fuel. Then it transported fuel up from Louisiana and Texas, with drivers carrying wads of cash to pay the $200 tolls on the bridges into New York City. Finally, Verizon managed the distribution of fuel to trucks and to 220 thirsty backup generators.
As with P&G during Katrina, Verizon also had to provide transportation, housing, food, water, and essentials to a recovery workforce of 900 technicians, engineers, and managers in the New York/New Jersey area. To house many of them, Verizon found a 250-room hotel that was without power. The company brought in a big generator and spliced it into the hotel’s electrical system to make that hotel a home away from home for its troops.
Even before the waters of the 2011 Thai flood retreated, companies moved in to extract equipment, raw materials, inventory, and office records, beginning recovery efforts by shifting production to other sites. For example, Japan’s Nidec Corp., a leading manufacturer of small motors, cut a hole in the roof of its Rojana factory, sent divers in to disconnect the equipment, and hoisted the equipment out onto boats.16 At other factories, teams of dozens of workers waded through knee-deep noisome water and hand-carried materials to small skiffs that could navigate in the shallow flood waters.
Workers operating in the flood zone never knew what they would encounter: sand, mud, sewage, spilled fuel, industrial waste, and the occasional crocodile. When workers at Hana Microelectronics encountered a cobra lurking in a dark, damp, flooded factory, they quickly evacuated and then fumigated the building. “The concern now is employees’ health because this water is pretty dirty. A lot of what we were bringing in was bathing water, as well as drinking water and food, because when they get out of the water, the first thing everybody wants to do is wash,” said Bruce Stromstad, general manager of Hana Microelectronics.17
After retrieving the equipment and materials, companies needed to clean things carefully, calibrate machinery, and requalify the systems. “Equipment that passes our initial assessment of its working condition is shipped to our Pinehurst campus,” said Fabrinet, a precision optical, electromechanical, and electronic manufacturing services supplier in Thailand, in its update messages, “where further tests validate whether the equipment is functioning and in good working order. Equipment passing these stages is calibrated and stored in a controlled environment. The remaining equipment continues to be cleaned and debugged, an exercise that can be laborious and time consuming.”18
Supply chains comprise three flows—the flow of goods, the flow of money, and the flow of information. Disruptions certainly hit the flow of goods, but they also impede the flow of information. In fact, the disconcerting unknowns inherent in disruptions stem entirely from the disruption of information flows about the extent of damages, mitigation efforts, and anticipated time-to-recovery. Customers in disrupted supply chains are looking for as much information as possible so they can plan and execute a proper response. But disrupted suppliers often don’t know the extent of the damage or know how quickly they can recover.
Serious disasters often create a literal communications blackout resulting from physical damage to power and telecommunications infrastructure. After the Japan quake, some suppliers were entirely unreachable for as long as five days. Similarly, Hurricane Sandy took out power to more than 7.9 million people19 and damaged one quarter of the cell towers in the storm-affected areas of 10 states.20 The storm also damaged critical back-haul links that connect cell towers to the rest of the world.
During the unrest in Egypt in 2011, the government shut down telecommunications systems to prevent antigovernment forces from coordinating their riots or attacks.21 Anticipating that these kinds of situations can happen, Intel equips each of its facilities with a satellite phone. Unfortunately, according to Jim Holko, Intel’s program manager of corporate emergency management, Intel’s Egyptian sales office managers could not get to the phone during the 2011 unrest because the phone was in an office near Tahrir Square, the epicenter of heavy clashes between police and demonstrators.22
Even if the infrastructure is not damaged physically, disasters create disrupted communications caused by congestion at two levels of the system. First, the high volume of attempted communications clogs the telecommunications infrastructure. This creates busy signals, dropped calls, and a degraded ability to reach people in the affected area. Here, technology can help with communications channels such as SMS (short message service, aka “texting”) and email, both of which use much less of the scarce bandwidth than voice communications. A single SMS text message uses less network capacity than a fraction of a second of voice. In disasters like the 2011 Japan quake or Hurricane Sandy, the voice telecommunications network typically collapses when many people try to reach loved ones, but text messages can still go through. Similar network congestion occurred after the March 11, 2004, terrorist bombing of Madrid’s Atocha train station and following the July 7, 2005, London bombings.
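The bandwidth claim is easy to check with rough numbers. The 140-byte figure is the standard GSM SMS payload limit; the 12.2 kbit/s voice rate is an assumed value for a common mobile speech codec (AMR):

```python
SMS_PAYLOAD_BYTES = 140        # maximum GSM SMS payload
VOICE_BITRATE_BPS = 12_200     # assumed codec rate (AMR, 12.2 kbit/s mode)

# How many seconds of voice traffic carry the same number of bits as one text?
voice_equiv_s = (SMS_PAYLOAD_BYTES * 8) / VOICE_BITRATE_BPS
print(round(voice_equiv_s, 3))  # 0.092
```

Under these assumptions, a maximum-length text costs the network less than a tenth of a second of voice traffic, which is why texts often get through when calls cannot.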
Second, the volume of demands for information overwhelms the people and companies in the affected area, with every customer calling, calling, calling to get updates. For example, during the aftermath of the 2012 Evonik Industries fire (see the section titled “Massively Horizontal Collaboration” below for more on the event), Delphi faced the challenge of communicating very complex information sets concerning the effects of the PA-12 resin shortage. The company needed to communicate to its customers the status of each part or assembly from each supplier and to each customer that might be affected by the shortage of PA-12. Delphi used spreadsheets, distributed via SharePoint, for two-way communications internally and with customers and suppliers. The only problem was that others had the same idea, and Delphi was frustrated by each OEM customer using its own clever format for crisis-related data. “We spent more time trying to provide them data in their format than in chasing parts,” said Rick Birch, global director, Operational Excellence at Delphi.23 Delphi hoped that the collaborative response to the Evonik crisis moderated by the Automotive Industry Action Group (AIAG) would help coordinate a shared approach and common formats.
During interviews for this book, Intel, GM, Delphi, Flextronics, and others all mentioned the problem of disruption-affected suppliers being inundated by customer calls and demands. Some companies try to reduce this burden. For example, contract manufacturer Jabil centralized supplier contacts in the aftermath of the Japan earthquake so that its 59 sites weren’t all calling the same suppliers and creating chaos. “We centralized that through our global commodity management team and put a point person for each supplier,” said Joe McBeth, vice president of global supply chain at Jabil.24
At 1:19 am on March 8, 2014, Malaysia Airlines flight 370 (MH370) stopped communicating with air traffic controllers while flying over the Gulf of Thailand. Eventually, the plane’s disappearance triggered a massive search for the jet airliner and its 239 passengers and crew members. But the search did not go smoothly. Some of the problems with the search were unavoidable because of issues such as mistaken witnesses who thought they saw a plane crash,25 satellite images that found garbage instead of wreckage,26 an oil slick that turned out to be bunker fuel from a ship,27 and the overall mystery of how a large aircraft could disappear and yet fly on for seven hours after its last seemingly routine communication. The deeper problems stemmed from how the disaster was managed and communicated to the world. “At best, Malaysian officials have thus far been poor communicators; at worst, they are incompetent,” said Peter Goelz, former managing director of the United States government’s National Transportation Safety Board.28
Delays in releasing information began with Malaysia Airlines, which did not announce that the plane was missing until six hours after contact was lost and one hour after the scheduled landing time.29 Then came the revelation that Malaysian military radar had detected an unidentified aircraft at the time of the disappearance of flight 370 but did not alert anyone. When reporters asked for more information about where the aircraft was spotted, Malaysian authorities said that the answers were “too sensitive.” Chinese authorities—representing 154 Chinese passengers—repeatedly urged Malaysia “to report what they have … in an accurate and timely fashion.”30
With so little known about the fate of the flight, each meager bit of information was scrutinized by the families, the media, and governments. On March 15, for example, Malaysia’s prime minister reported that the last words from the cockpit were “All right, good night.”31 Pundits and passengers’ families tried to divine the mental state of the cockpit crew and assess the potential for terrorism, hijacking, or suicide that might be hidden in those terse words. But the air traffic control transcript released two weeks later showed the final words as “Good night Malaysian three seven zero,” with no explanation for the discrepancy with the earlier report.32 Similar inconsistencies occurred in various reports by Malaysian authorities about the timing of events,33 whether and where the plane had flown over Malaysia,34 and whether the plane had communicated at all after someone in the cockpit deliberately turned off the transponder.35 “There have been misinformation and corrections from Malaysian authorities on the whereabouts of MH370,” said Goelz, who told CNN that it was the worst disaster management he had ever seen.36
A major cause of the problem was the lack of a coordinated approach to communications. At various times, statements came from the prime minister of Malaysia, the minister of transport, the inspector general of Malaysian police, the Malaysian Maritime Enforcement Agency, the Department of Civil Aviation, and the director-general of Civil Aviation, as well as various representatives of the airline. “It seems the Malaysians internally are not talking very well to each other,” said Taylor Fravel of the security studies program at the Massachusetts Institute of Technology.37 The delayed and inconsistent information provoked mistrust of Malaysian authorities and sparked theories that the Malaysian Air Force shot down the plane or that Malaysia was conspiring to cover up the true location of the plane and the fate of the missing people.38 “We will never forgive for covering the truth from us and the criminal who delayed the rescue mission,” said Jiang Hui, the families’ designated representative.39
Of course, the Malaysian government does not have a monopoly on unhelpful communications in the face of a disaster. The US government’s public communications during the 2014 Ebola epidemic and Hurricane Katrina, as well as the obfuscating and contradictory statements following the Benghazi attack in Libya, are examples of similar shortcomings. For example, during the Ebola crisis, the US Centers for Disease Control and Prevention changed its recommendations several times.40 To add to the chaotic communications, the White House made conflicting statements, and several states took matters into their own hands and issued their own quarantine and isolation rules, only to change them later, adding to the confusion and fear.41 Similarly, the statements by various Japanese government and TEPCO officials following the 2011 Tohoku earthquake and tsunami shook confidence and contributed to public fears.
These examples of governments’ failures to communicate clearly in crises provide a warning to corporations. Mixed messages coming from different parts of a disrupted organization can add to the confusion rather than calm the fears of customers and investors. One of the most important preparatory initiatives is a crisis communications protocol. GM’s mantra of “stay in your swim lane” is just as important in terms of communications as it is in terms of operational decisions. The ubiquity of social media, round-the-clock news channels, and everybody being always “in touch” amplifies the impact of rumors and misinformation, accentuating the importance of unified crisis communications.
After AirAsia lost an aircraft flying from Indonesia to Singapore on December 28, 2014, the airline’s CEO was lauded for his consistency, openness of communications, and willingness to take responsibility from the beginning. “I am the leader of this company; I take responsibility,” said AirAsia founder and CEO Tony Fernandes.42
On Valentine’s Day, February 14, 2007, an ice storm hit the New York City region and JetBlue’s hub at New York’s JFK airport. A JetBlue spokesman said, “We had planes on the runways, planes arriving, and planes at all our gates. We ended up with gridlock.”43 Hundreds of passengers were stranded in planes on the tarmac because the planes could not take off but had no open gates to which to return. The JFK disruption plunged the airline into chaos, forcing it to cancel over a thousand flights during a six-day period and ruining the travel plans of more than 131,000 passengers.44
David Neeleman, the company CEO at the time, cited multiple operational failures that compounded the crisis. Among the primary culprits: inadequate communication protocols (caused by, in his words, a “shoestring communications system”) to direct the company’s 11,000 pilots and flight attendants on when and where to go; an overwhelmed reservation system; and the lack of cross-trained employees who could work outside their primary area of expertise during an emergency.45 The CEO added, “We had so many people in the company who wanted to help who weren’t trained to help. We had an emergency control center full of people who didn’t know what to do. I had flight attendants sitting in hotel rooms for three days who couldn’t get a hold of us. I had pilots e-mailing me saying, ‘I’m available, what do I do?’ ”46 The airline lost its stellar reputation with customers, and three months later the CEO was replaced.
Of course, consistency of communications requires consistency of the policies and information that feed communications. For corporations, this means that business continuity plans and crisis management playbooks should include a plan for communicating effectively with all stakeholders during a crisis. Such a plan may include a special media center, identification and empowerment of a spokesperson, and identification of a variety of technical experts who can support the media center. In the case of major disruptions, the top executive often takes a leading role as the face of the company. For example, after the Japanese quake, GM’s CEO, Dan Akerson, spoke to reporters at various times about the company’s efforts and progress on mitigating the disruption.47,48,49,50
In handling the aftermath of Hurricane Katrina, P&G’s communications delivered messages tailored to three audiences: consumers, retailers, and the general public. First, P&G needed to handle the consumer reaction to its emergency second-sourcing of coffee canning. The company ran consumer television ads to reassure the public that its high-quality coffee would be on shelves but that it might come in unfamiliar containers because P&G needed to second-source coffee canning to other companies.
Second, P&G also knew that competitors were visiting retail chain customers to try to wrest valuable shelf space from P&G. Some competitors tried to spread fear, uncertainty, and doubt (“FUD,” as P&G referred to it) about P&G’s promises to restore coffee production in light of the serious damages. For P&G, this meant reassuring its retailers, showing them P&G’s recovery plans, allocating available capacity, and staying on schedule for the recovery.
Third, P&G took an unusual step, contrary to its typical policy, and announced all the philanthropic activities it was undertaking as part of the Katrina recovery effort. P&G is usually reticent about broadcasting its charitable efforts, but in the case of Katrina the company realized it needed to do so as part of its accelerated efforts to restart the New Orleans plant. The company did not want to be portrayed as cruel for “forcing” hurricane victims back on the job, so P&G got in front of the media with what management considered a balanced, forthright portrayal of its total set of efforts.
The decision by bourbon producer Maker’s Mark to handle an unsustainable surge in demand by diluting its bottled beverage (see chapter 3) sparked outrage from consumers.52,53 Jim Martel, a longtime Maker’s Mark purist and a brand ambassador since 2001, said, “My favorite bourbon is being watered down so they can ‘meet market demand.’ In other words, so Beam, Inc. [the parent company] can fatten their wallets a little more. I’ll help lower their demand by not buying any more.”54
The company was forced to address the criticism. “We’ve been tremendously humbled over the last week or so,” Maker’s Mark president Bill Samuels said about customers’ negative reactions.55 “Our focus was on the supply problem. That led to us focusing on a solution,” Samuels said. “We got it totally wrong.”56 The company reversed its decision to dilute the product. “You spoke. We listened. And we’re sincerely sorry we let you down,” the distiller wrote on its Facebook page on February 17, eight days after announcing the dilution.
The public apology and reversal of the policy helped. Nearly 28,000 people clicked “like” on the Facebook announcement of the reversal. A Forbes magazine article headline proclaimed: “Maker’s Mark’s Plain Dumb Move Proved to Be Pure Marketing Genius.”57 In fact, sales of Maker’s Mark surged 44 percent. “There’s no doubt that with the change of the proof and then the reversal of that decision, we did see sort of a buying forward from consumers,” said Matthew Shattock, chief executive of the parent company, Beam.58 Ironically, the surge in popularity exacerbated the original problem, which was a shortage of aged spirits for bottling. A bottle of Maker’s Mark takes more than five years to make, so the time-to-recovery is very long.
Similarly, JetBlue’s response to what it still calls its Valentine’s Day Massacre earned it many kudos and helped the airline recover its reputation relatively quickly. The CEO immediately apologized publicly; the airline provided refunds to any passengers stuck on the tarmac for more than three hours; and it issued a “passenger bill of rights,” providing compensation for various customer service failures, including $1,000 for any passenger bumped from an oversold flight. While the apologies helped the firm, they were not enough to save the CEO’s job.
The Maker’s Mark and JetBlue sagas played out on a modern medium. Social media such as Facebook and Twitter offer new communication channels for reaching customers, mixing the properties of one-way broadcast media and two-way interactive discussion channels. This is in addition to their role in sensing and detecting brewing problems, mentioned in chapter 2. Social media channels are likely to rise in importance in crisis communications for several reasons: the rising penetration of smartphones; the large portion of the population on social media; the bandwidth leanness of Internet communications relative to voice or video; and the finer-grained mechanisms that let both senders and receivers control which messages they see or to whom they send them.
In many large disruptions, companies tend to help each other. Clearly, it makes business sense to help a customer in trouble. During the interviews conducted as part of the research for this book, most companies credited their suppliers for working tirelessly to help them. It also makes sense for companies to help suppliers in trouble when the company depends on the supplier’s material or parts. Such collaboration between suppliers and customers is typically referred to as “vertical collaboration”—collaborations along the supply chain.
Large disruptions, however, also lead companies in the same echelon of a supply chain—and even competitors—to work together when their resource pooling and joint actions can accelerate recovery. This is referred to as “horizontal collaboration.”
When Tropical Storm Lee trundled up the East Coast in September 2011, companies in Louisiana, Mississippi, Alabama, Texas, Pennsylvania, and New York prepared for torrential rains and local flooding. Automotive carpet maker Autoneum in Bloomsburg, Pennsylvania, had 72 hours to prepare its factory and inventory for the sure-to-come flood.59 Unfortunately, the rains caused a local dam to burst, and seven to eight feet of muddy water flooded the carpet factory, wrecked the equipment, and ruined the carpeting being made by the plant.
Autoneum made carpet for six different car makers, including GM, and the manufacturers flew teams of specialists to the stricken supplier. Yet different manufacturers behaved differently. GM people joined in to help on Autoneum’s muddy factory floor, while representatives from several other manufacturers stayed in their conference rooms and demanded the product. GM brought in 200 electricians and used GM’s supply chain people and suppliers to procure needed parts.
For GM’s team at Autoneum, the most gut-wrenching part of their stay was getting a tour of the town and seeing all the damaged houses. Workers were putting in 12-hour shifts to get the factory running and then going to their flooded homes to deal with that. Thus, GM also helped to build morale—buying pizza for everyone and handing out GM jackets. Autoneum repaid GM’s help with better service during the recovery. GM never missed a beat, whereas other OEMs were forced to park thousands of carpet-less vehicles and add carpet later.
On March 31, 2012, a tank filled with highly flammable butadiene exploded in a chemical factory in Marl, Germany. Intense flames and thick black smoke billowed from Evonik Industries’ cyclododecatriene (CDT) plant at the 7,000-worker chemical complex in the heavily industrialized Ruhr river valley. Roughly 130 firefighters fought the blaze for 15 hours to prevent its spread to the rest of the facility and ultimately to extinguish it. The explosion and fire killed two workers and severely damaged the plant.60
Cyclododecatriene sounds like an obscure chemical, and the fact that it’s used to synthesize cyclododecane, dodecanoic acid, and laurolactam may mean nothing to most readers. But CDT is a key ingredient in making certain polyamides, which are high-strength plastics more commonly known as nylon. In particular, CDT goes into a high-tech type of nylon—PA-12 or nylon-12—that is especially prized for its chemical resistance, abrasion resistance, and fatigue resistance. That makes PA-12 a favorite of the auto industry, which uses this tough plastic for fuel lines, brake lines, and plastic housings. And if that wasn’t enough, using nylon makes cars quieter and more fuel-efficient. The average light vehicle in 2011 used over 46 pounds of nylon, up from just 7 pounds in 1990.61,62
Nor were carmakers the only industry using these materials. PA-12 also goes into solar panels, athletic shoes, ski boots, optical fibers, cable conduits, and flame-retardant insulation for copper wire. CDT is a key precursor for making many other chemicals, such as brominated flame retardants, fragrances, hot-melt adhesives, and corrosion inhibitors.63 The March 2012 explosion and fire in Marl destroyed almost half the world’s production capacity for CDT. Worse, at the time of the explosion, CDT supplies were already tight as a result of its use in the booming solar panel industry. For automotive companies, the potential value-at-risk resulting from the Evonik fire was arguably similar to the value-at-risk during the 2011 Japanese quake. Every vehicle they made depended on PA-12, and the fire threatened a significant and prolonged disruption of car production.
When a maker of fuel lines and brake lines—TI Automotive—raised the alarm about the dire implications of the Evonik fire, the entire automotive industry sprang into action. The industry convened an emergency summit on April 17 in Troy, Michigan. The summit was moderated by a neutral third party, the Automotive Industry Action Group (AIAG).64 The AIAG is a volunteer-run, nonprofit organization that provides shared expertise, knowledge, and standards on quality, corporate responsibility, and supply chain management to a thousand member firms in the automotive industry.65 Two hundred people attended the summit, representing eight automakers and 50 suppliers.66 All tiers of the affected sectors of the automotive supply chain came, including the big OEMs, their Tier 1 suppliers, component makers, polymer resin makers, and on down to chemical makers such as Evonik and BASF.67
The participants had three objectives that required the collective expertise of the entire industry. First, they wanted to understand and quantify the current state of global PA-12 inventories and production capacities throughout the automotive supply chain. Second, they wanted to brainstorm options to strategically extend current PA-12 capacities and/or identify alternative materials or designs to offset projected capacity shortfalls. Third, they wanted to identify and recruit the necessary industry resources required to technically vet, test, and approve the alternatives.
The group formed six committees to help quickly create action plans to lessen any impact of shortages on component and vehicle production.68 Each committee tackled an assigned task such as managing remaining inventories, boosting production at existing suppliers, identifying new firms to produce resins, and finding replacement materials.69,70 The group hosted multiple technical follow-up meetings during the subsequent weeks on these issues.71
This multifaceted collaboration was key to overcoming the challenge. Within a week of the meeting, the top OEMs had jointly drafted a plan to expedite their parts validation processes.72 Harmonized validation processes ensured that a supplier didn’t need a different process for each OEM customer. Suppliers from other industries lent their capacity to automotive applications. For example, Kansas-based Invista Inc., the maker of Stainmaster carpets, released capacity for production of CDT.73 In the end, cars continued to roll off the line even though the Evonik factory was offline until December 2012, nine months after the explosion and fire.74
During the evening of June 29, 2012, a destructive roiling line of thunderstorms—called a derecho—knocked out power to more than 4.2 million customers in Midwest and Mid-Atlantic states.75 One of the hard-hit utilities was Baltimore Gas and Electric Company (BGE), which saw 760,000 customers lose power. To augment its own workforce during the recovery, BGE called MAMA (Mid-Atlantic Mutual Assistance), a mutual aid network of nine utilities76 as well as the Southeastern Electric Exchange,77 another mutual aid network.
Utilities have prepared for large disruptions by creating semiformal mutual aid agreements that are made individually, through state agencies, or through the American Public Power Association (APPA). Mutual aid agreements were strengthened over the first decade of the 21st century when APPA worked with the Federal Emergency Management Agency and the National Rural Electric Cooperative Association (NRECA) to create the APPA-NRECA Mutual Aid Agreement. “Almost every co-op has signed it, about 880, and almost 1,000 public power utilities have signed it,” said Mike Hyland, senior vice president for engineering services at APPA.78 Through these networks, member utilities lend one another repair crews, which travel across the country to provide restoration capacity wherever it is needed.
The aid networks generally activate on a concentric ring basis—a stricken utility first asks for help from its nearest aid network neighbors and then calls progressively more distant rings of utilities. For larger storms in which neighboring utilities also need aid, the ring of calls can stretch across the United States and into Canada.79 In the case of the 2012 derecho, a total of nearly 25,000 utility workers from all parts of both countries worked on restoring power.80 These networks also helped during Hurricane Irene and Hurricane Sandy.81 Other public utilities, such as water and wastewater, have analogous mutual aid networks.82
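The concentric-ring activation described above can be sketched as a simple escalation loop. The utility names, distances, and crew counts below are entirely hypothetical, chosen only to illustrate how the circle of calls widens when nearby utilities are themselves storm-stricken:

```python
def request_aid(utilities, crews_needed):
    """Ask progressively more distant utilities for spare repair crews."""
    # First ring = nearest neighbors; later rings = more distant utilities.
    ring = sorted(utilities, key=lambda u: u["distance_miles"])
    pledged = []
    for utility in ring:
        if crews_needed <= 0:
            break
        # A neighbor hit by the same storm has no spare crews to lend,
        # which is what pushes the calls outward to distant rings.
        available = 0 if utility["also_affected"] else utility["spare_crews"]
        sent = min(available, crews_needed)
        if sent:
            pledged.append((utility["name"], sent))
            crews_needed -= sent
    return pledged, crews_needed  # shortfall > 0 means keep calling outward

# Hypothetical network around a stricken utility needing 80 crews.
network = [
    {"name": "Neighbor A", "distance_miles": 50, "spare_crews": 20, "also_affected": True},
    {"name": "Neighbor B", "distance_miles": 120, "spare_crews": 30, "also_affected": False},
    {"name": "Distant C", "distance_miles": 900, "spare_crews": 100, "also_affected": False},
]
pledged, shortfall = request_aid(network, 80)
# Neighbor A was hit too, so the request stretches to the distant ring:
# pledged == [("Neighbor B", 30), ("Distant C", 50)], shortfall == 0
```

In a continent-scale event such as the 2012 derecho, the loop simply runs over ever-larger rings until the shortfall reaches zero, which is why crews ended up traveling from across the United States and Canada.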
Utilities are not the only industry to cooperate to create flexibility. Railroads that share territories also have mutual aid agreements. For example, BNSF and Union Pacific are the two largest railroads serving the western United States—from the Pacific Coast to the Mississippi River extending to points further east—and they are fierce competitors in the transportation marketplace. However, when a significant service interruption occurs on either railroad, they will sometimes help each other maintain a steady flow of freight by negotiating temporary alternate routing options where they have parallel routes that can operationally support the modified movements. Such service interruptions can be the result of derailments, flooding, or other issues that affect track or train operations as well as scheduled interruptions (track maintenance and construction). In addition, as a result of regulatory conditions imposed as part of a merger or other related agreements, some railroads have obtained shared rights on limited segments of track and coordinate operations over those segments. In fact, a small permanent group of Union Pacific personnel actually sits in BNSF’s massive network operations center to provide operational coordination over the areas of the national networks where the two railroads operate jointly. So, while BNSF and Union Pacific compete for customers, they cooperate on capacity in certain situations.
During the recovery effort from Hurricane Sandy, cell phone carriers worked to fix their networks and restore service to customers as quickly as possible. To accelerate the resumption of customer service, AT&T and T-Mobile agreed to let each other’s customers share both networks, because both carriers use the same GSM technology. Tim Harden, president of Supply Chain and Fleet Operations at AT&T, said, “It’s the good of the community and the good of the nation at that point, as opposed to who the competitor might be.”83 Pooling the capacities of the two networks helped cover “holes” in each network’s coverage resulting from local damage. The strategy for eliminating those holes was “divide and conquer,” said Neville Ray, chief technology officer for T-Mobile: “two companies working to bring up one network, rather than two companies working to bring up two networks.”84
Friedrich Nietzsche, the German philosopher, quipped, “What does not kill me makes me stronger,” in his 1888 book Twilight of the Idols. For companies experiencing operational disruptions in particular, this is true only if they learn from their experiences, draw the relevant lessons, and improve their risk management and disaster recovery processes.
After recovering from Katrina, P&G reviewed its response to find potential improvements on two performance dimensions: the cost of recovery and the time to resume production. P&G then evaluated hardening the facility against future hurricanes in order to reduce both. One major lesson stemmed from the hot and humid climate of New Orleans. With no power and no air conditioning in the factory, mold grew very quickly, and the company spent millions of dollars to eradicate it. Had the plant had on-site emergency generators, the company could have turned the air conditioning on immediately and prevented the mold damage.85 The same problem plagued companies hurt by the Thai floods.
Similarly, P&G learned that better seals on food containers mattered more than a reinforced roof. P&G also decided it needed emergency accommodations on site so that a response team could fly in and have everything it needed to get started. The company is also reviewing its supply sources, expanding its network of suppliers outside New Orleans, and helping those suppliers prepare responses as robust as P&G’s own for the next disruption.
The 2008 Sichuan earthquake near Chengdu was one of the worst in China’s history. The 7.9-magnitude earthquake was the deadliest and strongest to hit China since the 1976 Tangshan earthquake. The 2008 quake killed nearly 70,000 people, injured 374,000, and destroyed the homes of 4.8 million.86 Intel’s factory and assembly and testing facility in Chengdu were largely undamaged, thanks to Intel’s insistence on seismic building design principles, and the company was able to restore operations quickly.
Although Intel’s preparation and response worked—Intel’s factories were at 95 percent production within seven days of the earthquake—the postevent review revealed an internal tension between caring for the local workers in Chengdu and managing Intel’s broader global business continuity. Jackie Sturm, Intel’s vice president and general manager of Global Sourcing and Procurement, explained that following the Chengdu quake, Intel decided to split its incident response activities into two independent streams. Emergency Management (EM) would care for people and the safety of local facilities without worrying about the broader business issues. While EM cared for the affected people, Business Continuity (BC) would focus on business issues such as the affected work-in-process inventory, alternative fabs, logistics, products, and customers. Under the new structure, EM starts immediately during the disaster (i.e., is a first responder) and BC follows quickly. EM ends as soon as Intel has ensured everyone’s safety; BC continues until production recovers and operations return to normal. EM operates locally, while BC may involve global activities such as shifting production to a different facility. Intel used this split response structure to handle the 2011 Japanese quake and tsunami, as described in chapter 1.
At GM, each major disruption of the last decade became a special project designated by a letter: “Project D” was the 2005 Delphi bankruptcy, “Project J” the 2011 Japan earthquake, “Project T” the late-2011 Thailand flood, “Project E” the 2012 Evonik fire, and “Project S” superstorm Sandy in 2012. Each created a learning opportunity. “Quite honestly, I think if we had not had Project D, we would have stumbled more,” said Rob Thom, GM’s global vehicle engineering operations manager, about Project J.87 “Once you get into a crisis, I’ve found you learn so fast [that] it makes you creative,” added Ron Mills, GM’s general manager of Components Holdings Program Management and Product Engineering.88 Thus, each crisis accumulates skills and better methods for response. After a crisis, many companies review and analyze the organization’s response to the event. This review may consist of two stages: a preliminary “hotwash” performed immediately after the event and a more careful analysis later.89 According to Thom, by the time Project E occurred, the company was much more aware of potential deep-tier issues, more realistic in its white-space analysis, better coordinated in sharing information internally, and faster in validating alternatives.
“There’s going to be another crisis,” said Carlos Ghosn, the CEO of Nissan, a few weeks after the November flooding in Thailand. “We don’t know what kind of crisis, where it is going to hit us, and when it is going to hit us, but every time there is a crisis we are going to learn from it.”90
Whereas many crises involve natural disasters or accidents that strike a particular location, some events have worldwide scope.