Certain types of disruptions arrive as long-term shifts, not transient shocks. These shifts lead to a permanent change—a new normal—for the company’s environment, its supply chain, markets, factors of production, society’s expectations, or regulatory frameworks. Some trends affect the viability of specific industries as once-successful products and social norms decline: witness the use of landline phones,1 smoking,2 men wearing hats,3 fans attending baseball games,4 and people playing golf.5
Other trends have broader effects on multiple industries as well as supply chain structures. These trends include the rise of megacities, the growing middle class in developing countries, the growing use of e-commerce and omnichannel retailing, the aging populations in the developed world, global climate change, the demand by civil society for corporate social responsibility, the rise of Africa as a place of business, and the availability of new energy sources in the US and elsewhere based on new extraction technologies.
Although a trend like an aging population or new energy sources might be obvious, companies face significant uncertainties regarding both the best response to the trend and the timing of that response. In many cases, there are “good” reasons not to respond immediately, such as uncertainty about the validity of the trend or short-term financial pressures that forestall any long-term investment required to adjust and respond.
Another factor that distinguishes long-term shifts from other fast-acting risks is the potential upside opportunities embedded in such trends. The long-term nature of these phenomena means that companies have opportunities to adapt and even create a competitive advantage relative to less-prescient or less-responsive competitors. Indeed, the challenges lie in deciding how and when to invest in response to these trends.
Although the number of medium- and long-term trends is large and varied—including shifting markets, societal expectations, migration patterns, regulations, and natural phenomena, among many others—this chapter is focused on one irrefutable trend, one trend under some debate, and one trend that is not usually thought of as a trend.
Much of the developed world is aging. In the United States, someone turned 65 years of age every 12 seconds during 2010.6 The fastest-growing demographic in the United States, as a percentage of the population, is the “over-85” age group. By 2025, the percentage of the population over age 60 is forecast to reach 25 percent in the United States and Asia, and 30 percent in Europe.7,8
The strength of this trend varies by country. Japan is especially hard hit as a result of its very low birth rate, long life expectancy, and anti-immigration culture and policies. Not only will Japan’s population skew much older, but its total population is expected to shrink by 30–40 percent in the coming decades. China is suffering the consequences of its one-child policy: by 2044, China will have more people over age 60 than the United States has in total population.9 As the Chinese working-age population shrinks markedly, its vaunted low-cost manufacturing position will come under increasing pressure.10 Other locations, such as southern Europe, have low birth rates, too, but they have high rates of immigration from within and outside the EU to compensate (although such immigration sometimes creates social and political tensions).
Aging demographics will have a multitude of economic effects that affect supply chains, including a dearth of workers for physical blue-collar jobs; different demand patterns (consuming fewer products and more services, especially healthcare); and significant declines in asset utilization as populations shrink. All these changes imply the rise of new markets, shifting locations for manufacturing centers, and changes to the supply networks connecting the two.
The demographic trends of an aging population will affect the composition of consumer households and the patterns of consumer demand that determine freight flows. High divorce rates among older couples11 (dubbed the 40-year itch12) as well as the discrepancy between male and female longevity will lead to more people living alone, especially women. In 2013, 42 percent of US women over 65 were living by themselves. In Denmark, the figure was 52 percent.13 Smaller households skew the retail merchandise mix toward smaller package sizes and smaller store formats, closer to where people live. In fact, Walmart, Target, and other retailers are already building small-format urban stores.
Aging in place may demand an increasing amount of home delivery and in-home services. Moreover, these delivery workers may do double duty by helping put away the delivery, install the product, train consumers in its use, and even provide supplementary home healthcare services. About 70 percent of Americans live in suburban and rural areas, and they plan to continue living there as they get older. These areas rely on personal automobiles for transportation, which becomes problematic as people age. The low density of these consumer locations and the elderly’s needs for added services present both risks and opportunities for retailers and logistics service providers.
As the baby-boom generation in the United States ages, as China’s work force diminishes, and as Japan and some European countries contract, one can expect a relative shortage of working-age adults. Deere, Caterpillar, and Toyota have already seen disruptions in their labor supply, as well as the loss of knowledge when experienced workers retire. In particular, companies lose knowledge of how to handle rare events when long-time workers retire.
Despite the lackluster economy of the early 2010s and high unemployment among younger workers, transportation companies in the United States were still facing shortages of truck drivers. This may be a sign of things to come, as these companies hire more elderly drivers who are on their second or third careers. On the positive side, many older adults will work longer past traditional retirement age to stay active, do meaningful work, and enjoy social interactions—money came in only fourth on the list of reasons why workers between age 66 and 70 keep working past retirement age.14 The implications for logistics companies include the need to accommodate a “graying” transportation and warehousing workforce and an influx of women into the field. Warehouse operators or motor carriers who can adjust over time to accommodate a 5′2ʺ (157 cm) 60-year-old female worker will be in a good position to meet their future labor needs.
The year 2013 marked the 37th consecutive year of global temperatures above the 20th-century average.15 These higher temperatures—and the likelihood that the climate will change further by growing hotter on average but also more volatile—pose some known and some unknown risks for companies and their supply chains.
Most mainstream scientific bodies agree with the UN’s Intergovernmental Panel on Climate Change (IPCC) and its five reports regarding the existence and causes of global warming. Although some climate change impacts seem undeniable, some respected scientists question the cause of climate change, arguing that it may be natural or unknown.16 Others question the projections on the basis of inaccuracies in the underlying models. Long-range forecasting models contain many uncertainties about the rate, magnitude, and geographic pattern of changes.17 For example, the latest IPCC report shows that despite the 12 percent increase in atmospheric CO2 since 1990, temperatures that were predicted to rise between 0.2 and 0.9 degrees Celsius rose by only 0.1 degrees, a figure not statistically different from zero.18 At the same time, scientists, activists, and policy makers still push for mitigation measures. For example, Connie Hedegaard, the European climate action commissioner, said, “Let’s say that science, some decades from now, said ‘we were wrong, it was not about climate,’ would it not in any case have been good to do many of the things you have to do in order to combat climate change?”19 Others, such as former US secretary of state George Shultz, argue that although climate change and its causes may be less than certain, the consequences are so dire that current decarbonization initiatives should be thought of as “buying insurance against a catastrophe.”20 Such uncertainties present companies with a dilemma: when, and how much, should they invest in mitigation measures?
Climate change brings four categories of potential supply chain disruptions. First is a long-term trend toward higher effective energy costs owing to regulations or other schemes to limit the world’s carbon footprint. Second are potential disruptions to operations and logistics resulting from rising sea levels (which may affect the coastal cities that house the majority of the world’s population and seaports) or declining river flows, which affect waterborne transportation and agriculture. The third category is increased prices of climate-sensitive commodities such as food, wood-derived packaging materials, and water. Such changes raise the risks of related social and political disruptions as well as the threat of drastic regulations. Fourth are reputational disruptions for companies perceived to have high environmental footprints that operate in areas where NGOs, the media, and public opinion are concerned about climate change. In that sense, it does not matter whether company leadership believes that climate change is real, or whether human activities caused it—as long as public opinion believes it, companies have to at least be seen as aligned with the concerns of their customers. Price and availability risks and reputational risks are described more fully in chapter 10 and chapter 11, respectively.
Greenhouse gases (GHGs) such as carbon dioxide (CO2) and methane (CH4) are believed to contribute to global climate change.21 These gases trap heat in the atmosphere, thereby increasing temperatures but also potentially accelerating the evaporation of water and the formation of storm systems. This link between emissions and climate puts a bulls-eye on the use of fossil fuels in transportation, industrial production, and energy generation. The total set of GHG emissions associated with a product, process, company, or country defines its carbon footprint.
In May 2013, atmospheric CO2 reached 400 ppm.22 This concentration is about 42 percent higher than the preindustrial level, and it continues to rise at an accelerating rate as a result of the growth of many developing-world economies. Capping CO2 levels at twice the preindustrial level, which they may reach in 2050—let alone reducing them—will require aggressive reductions in fossil fuel consumption. Climate change won’t hit suddenly, however. “The math shows that we have 40–60 years left,” MIT professor (and later US secretary of energy) Ernest Moniz warned in 2007, “but making a change requires a 50-year planning horizon, so we need to start making changes now. The reason for the time delay is scale—the oil business operates on a trillion-dollar scale … and will take a long time to replace.”23
One of the impactful risks created by climate change is supply disruption based on scarcity of water. Scientists expect climate change to affect patterns of rainfall and snowfall,24 with some areas growing much drier under the confluence of less moisture and more heat. Water shortages would have an impact on agriculture, cities, and a wide range of production processes that consume large amounts of water. Affected water-using industries include food and beverage makers as well as a surprising range of other industries such as semiconductor makers, textiles, and electric power plants.
In November 2011, barge traffic on Europe’s Rhine was disrupted by drought and low river levels that were 1.5 feet below normal.25 Shippers were restricted to lightly loaded barges to avoid grounding.26 “It’s too difficult for ships to move in such low waters. Businesses are being damaged, companies are having to load 4,000 tonne capacity ships with a mere 1,000 tons of goods,” said Ralf Schäfer of the Waterways and Shipping Office for Bingen, Germany.27 Barges handle 11 percent of ton-miles of freight in the United States28 and 5–6 percent in Europe,29 and they are an especially fuel-efficient means of moving large amounts of freight. But barges require sufficient water levels.
Water scarcity will be exacerbated if upstream water users monopolize the limited rainfall or snow melt, leaving downstream agricultural users and cities even drier. Such an action took place when Switzerland restricted water flow to the Rhine after a dry year. This restriction caused the water level to fall, limiting barge traffic on the river and forcing large bulk shippers, such as chemical manufacturing giant BASF, to divert shipments in and out of its mammoth industrial complex in Ludwigshafen to rail and trucking. The result was increased shipping costs, lower customer service, new hazards along the land routes, and higher GHG emissions. BASF depends on river barges for 40 percent of its incoming and outgoing goods,30 making the company dependent on river levels.
The rise of megacities and urbanization adds to the long-term risks of disruptions in agricultural commodities. The conversion of farmlands to urban developments reduces the amount of land area under cultivation. In fact, the Chinese government is so alarmed about not having enough arable land to feed its population that during 2013 and 2014 it was buying large tracts of land in Brazil, causing Brazilian land prices to rise considerably despite a severe drought, which would usually cause arable land prices to plummet.31 In addition, urbanization in developing countries is creating worker shortages in rural areas. For example, Indonesian officials are worried about the nation’s food production because the country’s youth have left rural areas for cities, and farmers are aging and retiring.32
Much of the world’s trade travels on the oceans between port cities that are, by definition, at sea level. Some predictions of climate change hint at a 13 to 94 centimeter (5 to 37 inch) rise in sea level by 2100.33 Such a rise could threaten port infrastructure as well as the civilian populations and industry that surround major ports. Storm surge disruptions may grow increasingly likely in major ports such as Shanghai, Rotterdam, and Osaka.34 Shanghai, the largest container port in the world since 2010,35 is the most vulnerable major city in the world to serious flooding.36
With the higher probability of storms comes a higher probability of disruptions to operations and logistics. Volatile weather played a major role in the 2011 Thai floods, described in chapter 7, that caused significant supply chain disruptions in the computer and automotive sectors. A drought in 2010 in Thailand drained reservoirs and caused $450 million in crop damage.37 Memories of the drought made Thai authorities hold back water when the rainy season began in 2011, rather than steadily releasing the water in anticipation of future heavy rains.38 When a series of tropical storms hit Thailand in late 2011, the reservoirs had no spare capacity to absorb the added water and were forced to dump the flood waters on the heavily industrialized central floodplains of the country.39
Not all the impacts of global warming are negative for supply chain operations, though. For example, if the thick ice layer covering the Canadian Arctic Archipelago melts, then maritime shipping between Asia and Europe will be 2,500 miles shorter via the so-called Northwest Passage, and Alaskan oil could be moved quickly to the eastern United States and Europe.40
Limiting the rate of climate change will require significant reductions in the emissions of GHGs or increasing the absorption or sequestration of greenhouse gases. Because transportation generates 28 percent of all GHG emissions in the United States,41 supply chains are likely to come under continuing pressure to change to fuel-efficient modes, replace fossil-fuel-powered conveyances, minimize the ton-miles of goods movements, and pay for carbon sequestration or climate change mitigation.
Fuel taxes, emissions restrictions, and cap-and-trade schemes all raise the effective long-term cost of fuel and therefore of transportation, thereby increasing the costs of operating global supply chains. High transportation costs can also push companies toward using slower, bulk modes of transportation (e.g., barge and rail instead of trucking). (See chapter 10, “Efficiency: Use Less, Pay Less.”)
Finally, climate change and related commodity cost increases may affect the political stability of countries and regions. One of the contributing causes to the multicountry upheaval of the 2011 Arab Spring was a 43 percent spike in food prices between 2009 and 2011. UN research suggests that food riots are much more likely when the food price index hits 21042—the index hit 228 in 2011.43 Of course, climate change was only one of the elements contributing to the rise in food prices, the other important one being misbegotten climate-change-related ethanol policies in the United States.44 Price hikes in key food staples have led to riots in many developing countries, such as the 2006 and 2012 riots over corn prices in Mexico45,46 and the 1998 riots over cooking oil in Indonesia.47
In a 1963 speech about Charles Darwin’s seminal work On the Origin of Species, Professor Leon Megginson of Louisiana University said: “It is not the most intellectual of the species that survives; it is not the strongest that survives; but the species that survives is the one that is able best to adapt and adjust to the changing environment in which it finds itself.”48 Every year, companies introduce new products and processes and retire old ones. When competitors introduce superior products, higher capabilities, and better processes, companies can lose market share, experience a drop in revenues, and face existential risks if the companies fail to adapt quickly.
Nokia was a leading example of a successful organization in my book The Resilient Enterprise in 2005.49 When a Philips chip factory in New Mexico suffered a fire in a cleanroom in 2000, Nokia was the first of Philips’s major customers to detect the severity of the problem, mobilize resources to fix it, secure alternative supplies, and recover. In contrast, Nokia’s direct competitor, Ericsson, also used Philips but was much slower in detecting the problem and much less effective in responding. The resilient Nokia thrived whereas the less-responsive Ericsson suffered a US$2.34 billion loss and was forced to merge its mobile handset business with Sony.
For many years, Nokia dominated the cell phone market with a wide range of innovative handsets. Then, in 2007, Apple introduced the iPhone. In the beginning, Apple was no threat to Nokia. In 2007, Apple sold 3.7 million iPhones50 while Nokia sold 435 million phones,51 including 60 million smartphones.52
Over time, Apple’s minimalist touchscreen design proved extremely popular and was widely copied in devices such as those running Google’s Android system, the Blackberry Z series, and Microsoft Windows 8 devices. Moreover, Apple created a marketplace to connect a supply chain of app developers to consumers through Apple’s easy-to-use media distribution channel, the iTunes store. Initially, the app store had only a few apps, and the app model was unproven.53 But over time, Apple’s popularity grew, although it was a relatively slow process. Even three years after the iPhone’s introduction, Nokia smartphones were still outselling Apple iPhones 2 to 1.54 By 2013, however, Nokia’s smartphone share had dropped to less than 3 percent.55 In that year, Nokia sold its handset business to Microsoft.56 Nokia had been blindsided by the iPhone in a classic case of disruptive innovation.
Nor was Nokia the only company to miss the disruptive threat of the iPhone and phones like it. Microsoft’s share of the mobile market dropped from 42 percent in 2007 to 3 percent in 2011.57 Blackberry’s worldwide market share likewise toppled from 44 percent in 2009 to less than 2 percent (including less than 1 percent in the United States) by the end of 2013, even after the introduction of its new models and new touchscreen operating system in 2012.
In theory, Nokia should have maintained its market dominance. It invested heavily in research and development and, in 1999, it even invented a phone very similar to what Apple would release seven years later. Nokia’s prototype had a touchscreen display set above a single button. In demos, Nokia’s chief designer at the time, Frank Nuovo, showed how the phone could be used for ordering products online, locating a nearby restaurant, or playing a game. “We had it completely nailed,” Nuovo said.58 Finally, Apple appeared to be late to the market. “There are already big companies that dominate the space,” Bloomberg’s Matthew Lynn wrote in his review of the iPhone at its introduction.59
Why didn’t Nokia beat Apple? Three types of issues often occur with disruptive innovations: the new product seems inferior; the new product does not seem to address any known unmet consumer needs; and the new product lowers the average profit margins for the incumbent.60 All these issues kept Nokia from launching a touchscreen phone.
First, Nokia, other incumbents, and technology pundits judged the iPhone to be markedly inferior to existing smartphone designs.61 The iPhone failed Nokia’s five-foot “drop test” for measuring durability. The iPhone used the slow 2G network, compared to Nokia’s fast 3G phones. Worse, the iPhone had high manufacturing costs and sold at a very high price point of $499, even with a two-year contract. In “Four Reasons Why the Apple iPhone will Fail,” influential technology blogger Hung Truong wrote, “Right now, you can get a T-Mobile MDA smartphone for $0 after rebate. The mass market is not willing to pay this much for a phone.”62 The people who could afford iPhones already have “phones from their employers,” wrote Rory Prior on the ThinkMac blog on January 12, 2007.63 “These integrate with their various enterprise systems (Exchange, MS Office, IM, etc.) and, while they might be tempted to buy an iPhone, the cold hard realities of non-replaceable batteries, no third party software, lack of blessing from the IT department, and the suckiness of onscreen keyboards will keep them tied to their existing Windows Mobile smartphones,” he predicted.
Second, disruptive innovations often don’t appear to address any customer-perceived shortcomings of existing products to offset their inferiority on other dimensions. That is because the invention does something entirely new or outside the usual needs of the incumbents’ customers. Prior to the iPhone, no smartphone user was complaining about the inability to swipe to move pages or delete emails. No one asked to be able to pinch to zoom images on the pre-iPhone smartphones in 2007. No one clamored for an app store. Most incumbents use market research to find out what customers want, missing the opportunity for true innovation. As Henry Ford is quoted as saying, “If I’d asked customers what they wanted, they would have said ‘a faster horse.’”
Third, an incumbent would have to disrupt its own business by investing money in making its own offerings obsolete64—something that is difficult for corporate boards and Wall Street to stomach. In 2006, Nokia’s CEO Olli-Pekka Kallasvuo merged the R&D units of smartphones and basic phones together. As a result, engineering teams vied against each other for resources and, as typically happens, profitable high-revenue current product lines won out against speculative high-cost new products. A new one-button touchscreen phone represented a risky new playing field and Nokia, the leader in the current market, had no reason to disrupt itself.
The same logic caused the Detroit automotive companies to yield market share to Japanese automakers. One of the key phenomena that make disruptive innovations so insidious is that what seems to be an inferior product at its introduction (e.g., a keyboard-less smartphone or a tiny subcompact car) becomes a superior one over time, as a result of market acceptance of the new product’s unique abilities (e.g., the minimalist iPhone design or the fuel economy of the subcompact car) as well as incremental improvements to the innovation. Thus, Detroit ignored the first subcompacts, but the Japanese then introduced compacts and later midsize cars to the market. At each step, the American companies focused on the high-end, profitable part of the market. With each market-segment relinquishment, the average profit per product sold by the American firms went up, seemingly supporting the decision. In the end, the Japanese introduced the Acura, Lexus, and Infiniti brands as well as full-size SUVs and trucks, bringing the Detroit automakers to their knees. Of course, the Koreans (Hyundai and Kia) are doing the same to the Japanese, and the Chinese (e.g., Geely) may be doing it to the Koreans. Similarly, US steel mills faced the introduction of the mini-mills, and IBM suffered at the dawn of the personal computer age.
Disruptive innovations are not always products: they can be process or efficiency innovations, too. Innovative processes can allow companies to sell the same product at a lower price, offer more variety, or provide a different user experience—all of which can disrupt competitors. Although traditional retailers initially believed that Amazon’s website offered an inferior shopping experience to their stores, Amazon’s huge selection, low prices, customer reviews of products, and good customer service drove multiple retail stores out of business. Amazon started with books, a category in which most titles sell at extremely low rates in any given town or city, meaning that books often sit on shelves for long periods and most titles never make it to local bookstores.
In contrast, nationwide, centralized shipping from a warehouse took advantage of risk pooling, allowing Amazon to have “the largest bookstore on earth” in terms of the number of titles in stock. Amazon also avoided the high cost of retail space and retail employees. Amazon’s customer reviews and “click to look inside” feature gave consumers the information they needed to purchase obscure titles. As Amazon moved from books to high-tech gadgets, toys, and other products, many traditional retail chains collapsed, unable to withstand the onslaught. Yet brick-and-mortar retail still retained one advantage—shoppers in a store could get products immediately, but shoppers on Amazon had to wait one or two days for delivery.
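The risk-pooling advantage of a single national warehouse can be sketched with the square-root law of safety stock: when independent local demands are pooled, the combined demand variability grows only with the square root of the number of locations. A minimal illustration (the demand figures and service level are hypothetical, not Amazon’s actual numbers):

```python
import math

# Hypothetical: one slow-selling book title stocked at 100 independent stores,
# each with mean demand 2 units/day and standard deviation 3 (highly variable).
n_stores = 100
sigma = 3.0
z = 1.65  # safety factor for roughly a 95% in-stock probability

# Decentralized: every store carries its own safety stock.
decentralized = n_stores * z * sigma

# Centralized: independent demands pool, so the combined standard deviation
# grows only with the square root of the number of stores.
centralized = z * sigma * math.sqrt(n_stores)

print(decentralized)  # 495.0 units of safety stock across all stores
print(centralized)    # 49.5 units in one central warehouse
```

Under these assumptions, one warehouse needs a tenth of the buffer inventory that a hundred stores would, which is why a centralized e-retailer can afford to stock obscure titles that no local bookstore could justify carrying.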
The rise of ebooks enabled instant gratification via downloads to PCs, tablets, smartphones and Amazon’s own Kindle devices—people did not even need to drive to the mall to get a new book. Then, in 2013, Amazon began offering same-day delivery as well as in-person payment and returns in some markets. As previous barriers to online purchasing fade away, more traditional retailers may be disrupted.
Amazon is also in the process of disrupting the publishing industry by offering authors a path to direct electronic and on-demand publishing, cutting traditional publishers out of the process. The new process dramatically reduces the time from manuscript submission to publication and lets authors take a much larger fraction of the sales proceeds. A similar revolution has affected the music and movie distribution businesses, with a shift toward independent productions and digital downloading over physical manufacturing, distribution, and retailing of CDs or DVDs.
Another notable process innovation was the postponement process developed by Dell, Inc. In the early 1990s, Dell perfected its direct sales model for PCs and servers. The company kept no inventory of finished goods but rather took orders over the Internet, letting customers customize the configuration they wanted and see the prices of alternative configurations. Dell then quickly built the specified machine and sent it to the customer in a few days. In bypassing retail channels, Dell was never stuck with outdated PCs. It was able to lower prices as component prices deflated, and it could offer many more product variants than any physical retailer. Dell’s innovative process gave it increasing market share and a negative cash conversion cycle (Dell got paid by its customers before it had to pay its suppliers for components).
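Dell’s negative cash conversion cycle can be expressed as days inventory outstanding plus days sales outstanding minus days payable outstanding. A sketch with illustrative figures (these are not Dell’s actual numbers):

```python
def cash_conversion_cycle(dio, dso, dpo):
    """Days between paying suppliers and collecting cash from customers.

    dio: days inventory outstanding (how long stock sits before sale)
    dso: days sales outstanding (how long customers take to pay)
    dpo: days payable outstanding (how long the firm takes to pay suppliers)
    """
    return dio + dso - dpo

# A traditional PC retailer: weeks of shelf inventory, suppliers paid promptly.
print(cash_conversion_cycle(dio=45, dso=10, dpo=30))  # 25 days of cash tied up

# A build-to-order direct seller: near-zero inventory, customers pay up front,
# suppliers paid on normal trade terms -- a negative cycle.
print(cash_conversion_cycle(dio=4, dso=2, dpo=40))    # -34 days
```

A negative cycle means customers’ cash arrives before suppliers’ invoices come due, so growth in sales actually generates working capital instead of consuming it.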
Yet innovation does not guarantee permanent competitive advantage. In the 2000s, the environment changed as the PC industry matured and shifted from a race for the latest desktop technology—which Dell was well suited to deliver—to an emphasis on inexpensive laptops and netbooks. Longer product lifecycles and reduced numbers of product configurations let Dell’s competitors build computers overseas at low cost and use low-cost ocean shipping.65 Computers became a basic commodity found at low-cost retailers such as Costco and Walmart.
Dell was late to react to the shift, and the company’s attempts to cope with the change paralleled Detroit’s delayed reaction to low-cost Japanese cars, except that Dell used outsourcing. Dell began outsourcing more and more of the low-value segments of its business—motherboard and computer assembly, supply chain management, and finally the design—to ASUSTek in Taiwan.66 At each stage, outsourcing to ASUSTek made financial sense because it improved Dell’s profitability (costs declined yet revenues stayed the same). At the end of the progression of seemingly rational decisions, ASUSTek started offering better computers to retail chains such as Best Buy at low costs that Dell had difficulty matching.67
The impacts of disruptive innovation can spread beyond the innovator’s competitors and even beyond the innovator’s industry. Amazon may deliver a lot of products, but it doesn’t deliver fresh hot pizza (yet). Even so, Amazon and other e-retailers are implicated in the 2011 (and then again 2014) bankruptcy of the 800-restaurant pizza chain Sbarro. Although Amazon and Sbarro don’t compete, Sbarro depends heavily on foot traffic at its food-court outlets in shopping malls.68 Between 2010 and 2013, foot traffic in US shopping malls dropped by half—a trend blamed in part on rapidly growing e-commerce.69 The same trend contributed to the 2014 bankruptcy of another food-court denizen, Hot Dog on a Stick.70
Other types of innovations can occur far upstream in the supply chain but trickle down to disrupt downstream companies. Many industries depend on electronics suppliers, and those suppliers serve a wide range of customers outside the consumer electronics industry. Verifone, for example, uses microelectronics—similar to those used in the cell phone industry—inside its point-of-sale terminals. GM makes cars with a wide variety of microprocessors for controlling vehicle systems as well as providing in-dash navigation and driver controls. BASF, the German chemical manufacturing giant, creates and operates complex, high-reliability industrial controller networks in its chemical factories.
Whereas the microelectronics suppliers innovate rapidly to serve the shifting and fickle needs of consumer electronics, not all electronics customers want to abandon last year’s chips for this year’s crop. Verifone’s pace of product turnover is linked to retailers’ multiyear capital expenditure plans for point-of-sale systems. GM’s multiyear process for designing and producing new car models plus its five-year warranty on all vehicles means that GM may need continuity of parts supply for more than a decade. BASF has a safety-driven policy of “three generations behind” in its IT implementations on production systems because the company wants the reliability and safety of tried-and-true chips, rather than incorporating the latest. All three companies share a common supply chain risk caused by upstream innovation. Their chip industry suppliers change products and design strategies much more frequently than Verifone, GM, or BASF would like.
“We’re watching that industry very, very carefully,” said Patrick McGivern, senior vice president of global supply chain of Verifone, “because the cell phone guys are driving a lot of it. And then the cell phone guys move on to a different strategy. So we’re watching to see, will this industry even be here five years from now? Because it’s critical to what we have.” The risk is that cell phone makers might move away from the current technology and abandon certain suppliers who are crucial to Verifone. Such suppliers may not survive the loss of their biggest-revenue customers, leaving Verifone without critical components.
Companies with long product lifespans or long-lived assets (e.g., cars, aircraft, factories) that use short-product-lifecycle goods (e.g., microelectronics) have two ways of coping with this obsolescence risk. First, some procure last-time buys once a problem is identified; that is, they purchase months’ or years’ worth of future product demand and spare parts. Second, in some cases, companies can design their products with a modular architecture in which some components follow a faster design cycle, matching the suppliers’ cycle time. A third, future alternative may involve 3-D printing, which would allow suppliers to keep the digital plans for a product and produce it in small quantities on demand after the end of mass production. Customers might negotiate “have-made” rights or escrow rights to the design to ensure ongoing supply even if the original supplier chooses to discontinue production.
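The sizing of a last-time buy can be sketched with a simple inventory-style calculation: cover the expected demand over the remaining years of support, plus a safety buffer for demand uncertainty. This is a minimal illustration with hypothetical numbers, not a formula from the text; real last-time-buy decisions also weigh holding costs, shelf life, and redesign options.

```python
# Illustrative last-time-buy sizing. All figures are hypothetical
# examples, not data from any of the companies discussed.
import math

def last_time_buy_qty(annual_demand, remaining_years,
                      service_level_factor, demand_std_per_year):
    """Estimate a last-time-buy quantity for an end-of-life component.

    annual_demand        -- expected units consumed per year (production + spares)
    remaining_years      -- years of support/warranty still owed to customers
    service_level_factor -- z-value for the desired service level (e.g., 1.65 ~ 95%)
    demand_std_per_year  -- standard deviation of annual demand
    """
    expected = annual_demand * remaining_years
    # If yearly demands are independent, variance accumulates linearly, so the
    # standard deviation over the horizon grows with the square root of years.
    safety = service_level_factor * demand_std_per_year * math.sqrt(remaining_years)
    return math.ceil(expected + safety)

# Example: a carmaker owing ten more years of parts support on a chip
qty = last_time_buy_qty(annual_demand=5000, remaining_years=10,
                        service_level_factor=1.65, demand_std_per_year=800)
print(qty)  # 50,000 expected units plus a safety buffer
```

The long horizon is what makes these buys expensive: the safety buffer alone can run to thousands of units, which is why modular redesign or have-made rights are often preferred.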
In October 2007, Brian Chesky, a recent college grad, abandoned his job hunt in Los Angeles to stay with a friend in San Francisco.71 Unemployed, Chesky couldn’t help his friend pay the rent—until the roommates had an idea. They noticed that a large industrial design conference had come to San Francisco and overwhelmed the local hotel market. Many hotels were sold out, and the few remaining rooms had exorbitant prices. The quick-thinking roommates laid out some inflatable mattresses, dashed down to the grocery store to buy some bacon and eggs, and turned their apartment into a bed and breakfast at $80 per person per night. They called their little foray into the hospitality industry an “AirBed and Breakfast” and made enough money to pay their rent.
Realizing that others might want to offer these same services, they launched AirBnB, an online hub for home sharing that by the end of 2013 reached 10 million guest stays.72 These forms of “sharing” are becoming more prevalent in other areas, too. Sharing options include car sharing services (Zipcar, RelayRides, Car2go), alternative taxi services (Lyft, Sidecar, Uber), bike sharing (Hubway, Zagster), household goods sharing (Snapgoods, ShareSomeSugar), tool sharing (The Southwest Portland Tool Library), and clothing sharing (Tradesy, SwapStyle).
Direct consumer-to-consumer, “sharing economy” businesses blossomed in the wake of the second-hand economy that grew tremendously with the Internet. Small second-hand stores and lowly “want ad” circulars were supplemented by Craigslist, eBay, Secondhand Mall, Secondhand.org.uk, and dozens of others. Lingering financial hardship among consumers following the 2008 financial crisis both created a supply of goods to rent or share and spurred demand from consumers looking for cheaper alternatives to full-price retail products and services. An Internet-spawned ethos of person-to-person “sharing” rather than buying, and an increasing willingness to connect with strangers, further enabled the trend.
These sharing companies threaten to disrupt existing rental firms (e.g., hotels and taxis) as well as product manufacturers (e.g., by reducing demand for new cars). With 600,000 properties in 200 countries in its listings,73 AirBnB is larger than all but the four largest hotel chains in the world.74 RelayRides competes with rental car companies at 229 airports in the United States.75 Rather than paying to park at the airport, car owners are paid for offering their vehicles for rent while they travel or otherwise don’t need them.
Some companies are embracing the trend. Driven by environmental sustainability, Patagonia partnered with eBay in a campaign to encourage consumers to buy used Patagonia garments instead of new ones. While seemingly cannibalizing its own sales, the campaign cemented Patagonia’s environmental bona fides, reminded customers of the sturdiness of Patagonia products, and lowered resistance to purchasing new garments by pointing out their resale value.
Unlike a sudden and localized disaster that captures instant headlines, disruptive innovations emerge slowly and gather force over time. Whereas a hurricane makes landfall at a particular instant, Nokia’s experience with the iPhone shows that disruptive innovation has no obvious onset date. Whereas a hurricane exhausts itself in a brief but furious few days of wind and rain, disruptive innovation creates a permanent shift in customer demand and market structure that sparks the need for long-term adaptation by companies. One way of motivating such long-term changes in any organization is by changing the key performance indicators (KPIs) used to assess and reward workers’ and managers’ activities. Such KPIs drive behavior, because “what gets measured gets managed.”
Most companies recognize the potential advantages of innovation, such as reducing the chances of being disrupted, increasing the chances of taking market share from competitors, and being among the first to take advantage of long-term trends. While the uninitiated might imagine innovation as a serendipitous light bulb turning on when inspiration strikes the inventor’s mind, innovation can actually be reduced to a methodical process of generating and evaluating potential concepts. To manage this process, leading companies use innovation-related KPIs to measure and manage the rate of new product and process development. Innovation metrics can track the inputs to the innovation process (e.g., spending on R&D, percentage of employees contributing ideas), the process of bringing innovations to market (e.g., new product development cycle times, ramp-to-volume cycle time) and the outputs of the innovation process (e.g., number of patents, number of new products, percent of revenues derived from new products).76
Jeff Murphy, an executive director at Johnson & Johnson, noted that the adoption of innovation metrics should follow the steps of the deployment of an innovation strategy.77 Early metrics track engagement, training, and participation of people at the beginning of innovation activities. Next, additional metrics measure the growing or accelerating pipeline of projects. Finally, a company with a more mature innovation strategy would measure end-goal attainment, such as revenues from new products. Yet the Nokia example shows that the sheer quantity of new products may not be sufficient if the company is too protective of profits from existing product lines. New market entrants, having no incumbent products to protect, can disrupt an otherwise innovative market leader.
“We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run,” said Roy Amara, a researcher, scientist, and past president of the Institute for the Future.78 Scenario planning is a way to avoid these kinds of long-term forecasting missteps associated with thinking about long-term trends, long-term risks, and strategic responses. Rather than attempt to forecast the likelihoods of changing technologies, political realignments, urbanization, or any other trend, scenario planning asks managers and executives to envision a divergent set of “what-if” futures and how those different realities might affect the company.
Scenario planning complements business continuity planning (see chapter 6). Although both start with a “what if” and are intended to help an organization prepare for risks, the two methods differ on goals and timescales. Whereas business continuity planning tries to create ready-to-execute plans to return the organization to predisruption levels of performance, scenario planning tries to create a plan for adapting to permanent changes in the definition of performance. Whereas business continuity planning handles transient events (e.g., a hurricane), scenario planning handles long-term trends (e.g., a world where almost everything is made by in-home 3-D printers). Finally, whereas business continuity planning aims to mitigate disruptive threats, scenario planning handles both threats and opportunities. Scenario planning is not unlike the war gaming done by military planners who try to envision various new threat scenarios and play out how the military might handle them. Similar approaches are also used by political scientists in developing long-term foreign policies.
An organization begins the scenario planning process by identifying the fundamental question it wants to address. For example, in its 1997 scenario planning exercise, UPS asked, “What is UPS’s global business in this ever-changing competitive environment?” In 2010, UPS conducted a new exercise focused on the question, “What is the future of UPS’s world market and major regional markets in 2017?” That same year, Cisco asked, “What forces will shape the Internet between now and 2025?”80
The next step is to create or select some scenarios. Because scenario planning is meant to spark new thinking about different futures, the effort typically involves multiple mutually exclusive and widely divergent scenarios. For example, UPS in 2004 considered four scenarios derived from quadrants defined in terms of the degree of openness in business models (proprietary vs. collaborative) and the business environment (harmonious vs. chaotic). Cisco’s 2010 effort looked at four scenarios culled from the eight combinations defined by divergent possibilities in the world’s level of Internet infrastructure density (limited vs. extensive), patterns of innovation (incremental vs. breakthroughs), and Internet user behavior (constrained vs. unbridled).81 The different scenarios are not just quantitative percentage-point variations up or down from an expected forecast. Instead, they are qualitatively different environments.
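The combinatorial structure behind these exercises is simple: each binary uncertainty doubles the number of candidate futures. A short sketch, using Cisco’s three dimensions as described above, enumerates all eight candidates; the culling down to four divergent, internally consistent scenarios is a qualitative judgment made by the planners and is not shown here.

```python
# Enumerating candidate scenarios from binary uncertainty dimensions.
# The dimension names follow Cisco's 2010 exercise as described in the
# text; the data structure itself is just an illustration.
from itertools import product

dimensions = {
    "infrastructure density": ["limited", "extensive"],
    "pattern of innovation": ["incremental", "breakthroughs"],
    "user behavior": ["constrained", "unbridled"],
}

# Cartesian product of the dimension values: 2 x 2 x 2 = 8 candidates
candidates = [dict(zip(dimensions, combo))
              for combo in product(*dimensions.values())]

print(len(candidates))  # 8
for c in candidates:
    print(c)
```

With three binary dimensions there are eight candidates; UPS’s two-dimension quadrant approach yields exactly four, which is why no culling step was needed there.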
A well-crafted scenario needs to seem possible (even if it may be unlikely), be internally consistent, and spark strategic discussions about how the company might change to survive and thrive in that new future. Each scenario should be rich in story-like details, such as by providing example news stories from that future, so that the participants feel immersed in that future world. For example, Sainsbury and Unilever, working with Forum for the Future, developed four “consumer futures” scenarios. In each scenario, they gave a seven-year timeline of key developments, postulated trends on ten metrics, portrayed dozens of hypothetical products, described key elements of living and shopping in that possible world, and gave a hypothetical day-in-the-life of a customer called “Suzie’s shampoo story.”82
Next, the organization considers the implications of each scenario using structured and unstructured discussions guided by the purpose of the scenario planning effort and the nature of the organization. For example, a National Cooperative Highway Research Program project at MIT’s Center for Transportation and Logistics examined the implications of four scenarios for future freight flows in the United States with a number of states’ departments of transportation.83,84 For each scenario, the discussions were structured around five specific types of impacts on the flow of freight: its volume, its value density, and its origins, routings, and destinations. The researchers also used a coarse-grained geographic model of the country’s freight infrastructure. During one part of the effort, the United States was segmented into five seaport zones, four regional highway corridors, four rail corridors, four aggregated airport zones, two land-port borders, intermodal facilities, inland waterways, and “other.” Some discussions considered the impact on various categories of freight infrastructure such as gateways (airports, sea ports, etc.), connectors (intermodal connections, short-line rail, secondary roads, etc.), and corridors (highways, Class I rail lines, etc.). In the case of the future of freight flows project, the goal was to answer the question, “Where should investments in freight transportation infrastructure be made today [in 2011] for the year 2040?” Recommendations took the form of investments in capacity in different modes as well as restructuring initiatives such as creating freight-only lanes.85
Because many scenario planning efforts look years or decades into the future, the discussions often center on long-term strategy and capital investments rather than tactical responses. Because of the uncertain nature of the distant future, scenario planning does not attempt to estimate precise quantitative implications. Instead, it looks at qualitative differences and similarities in the organization’s potential response or adaptation to the different scenarios. One of the most valuable benefits of scenario planning exercises is to widen managers’ perspectives and socialize the organization to future possibilities that may be very different from the present.
Detection plays a key role in turning scenario planning from a tabletop exercise into a risk management tool. For example, UPS’s 1997 scenario planning exercise made the company aware that it lacked a branded consumer-side outlet. When an opportunity to remedy this arose in 2001, UPS bought Mail Boxes Etc.’s network of over 4,000 retail shipping outlets for $191 million from MBE’s struggling parent company. UPS’s move forced FedEx to pay—some say overpay—$2.4 billion to purchase the smaller 1,200-outlet network owned by Kinko’s. The point was that scenario planning made UPS aware of possible shifts and opportunities that it could act on when the time was right.
Different scenarios call for different detectors. A scenario that postulates a dramatic rise in the costs and social unacceptability of long-distance transportation, for example, might affect a company’s long-range supply chain network planning and capital expenditures. Detectors for this scenario might monitor fuel prices, regulatory events, and social activist trends that seem to be reinforcing the locavore movement. If triggered, the company might, for example, delay investment in centralized manufacturing and distribution systems in favor of local production for local customers.
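In software terms, such a detector can be as simple as a dashboard rule that fires when enough of a scenario’s leading indicators cross predefined thresholds. The sketch below is purely illustrative: the indicator names, levels, and the two-of-three trigger rule are hypothetical assumptions, not anything prescribed in the text.

```python
# A minimal sketch of a scenario detector. All indicator names and
# threshold values are hypothetical examples.

def scenario_triggered(readings, thresholds, min_signals=2):
    """Flag a scenario when enough of its leading indicators cross thresholds.

    readings    -- current indicator values, e.g., {"fuel_price_usd_gal": 5.2}
    thresholds  -- trigger level per indicator
    min_signals -- how many indicators must fire before acting
    """
    fired = [name for name, level in thresholds.items()
             if readings.get(name, 0) >= level]
    return len(fired) >= min_signals, fired

# Hypothetical monitors for a "costly long-distance transport" scenario
readings = {"fuel_price_usd_gal": 5.2, "carbon_tax_usd_ton": 15,
            "locavore_index": 0.7}
thresholds = {"fuel_price_usd_gal": 5.0, "carbon_tax_usd_ton": 40,
              "locavore_index": 0.6}

hit, signals = scenario_triggered(readings, thresholds)
print(hit, signals)  # two of three indicators fired, so the scenario triggers
```

Requiring multiple independent signals before acting is a design choice that trades detection speed for fewer false alarms, which matters when the triggered response is a major capital decision.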
Scenario planning can help companies think through the implications of projected economic trends. For example, consider the potential for reshoring—moving manufacturing back from Asia to North America and Europe. The rationale for this seems quite plausible, as Tom Linton, Flextronics’s head of procurement and chief supply chain officer, mentioned at a presentation at MIT.86 As labor costs rise in China, companies may leave China in favor of Mexico to be closer to the US and Canadian markets. Given NAFTA, leveraging Mexican labor may give North American operations a significant advantage and reduce transportation costs as well. Indeed, PricewaterhouseCoopers estimates that lower American energy prices could result in one million more manufacturing jobs as firms build new factories in the United States.87 Finally, new trade agreements in Latin America may reignite the region’s growth and increase trade with the EU. At the 2013 Business Summit of the Community of Latin American and Caribbean States with the European Union, Latin American leaders committed to open trade and signed a joint declaration with the EU to embrace international trade.88 US companies might use reshoring as a competitive advantage to offer shorter lead times, better service, and “made-in-America” branding.
Yet as plausible as this logic sounds, it’s not guaranteed. Other events might forestall reshoring, such as a (futuristic) direct China-US rail link via the Bering Strait,89 unrest in Mexico,90 regulation of fracking,91 or antibusiness regulations in the United States.92 Even if China becomes too expensive, companies might move to other low-cost countries, such as Vietnam, Myanmar, or any of a number of countries in Africa. Furthermore, the Chinese market itself is expanding and offering more local opportunities. Thus, a bet on massive reshoring might be a risky move.
That’s where scenario planning can help. The purpose of scenario planning isn’t to forecast whether reshoring will happen, but to help a company think through the implications of very different futures, such as a reshoring renaissance in the United States vs. an even greater dwindling of US manufacturing. In thinking through these issues, companies can create detectors or sensors for the tipping point: if the world starts trending toward one scenario or the other, the company is ready to take advantage of whichever trend takes hold.
Of course, successfully detecting a qualitative change in the environment may not guarantee a successful response to that change. As the discussion of disruptive innovation implied, companies build portfolios of valuable assets (e.g., factories, products, brands, processes, core competencies), which they then utilize to generate profits and which tend to constrain the company’s choices. Companies are naturally loath to abandon these assets, even if a competitor’s innovation or an environmental change threatens to make them obsolete. Scenario planning can’t guarantee a correct decision, but it can help a company think through large-scale changes, possibly even before it commits to building assets that it might later need to abandon.