
5. Dealing with Risk and Uncertainty

Published on Apr 10, 2020

Quam multa fieri non posse prius quam sunt facta iudicantur?
(How many things are judged impossible before they actually occur?)

—Gaius Plinius Secundus (Pliny the Elder), Naturalis Historia, VII.i, 6

A number of potential catastrophes that could transform the world in a matter of months (extraordinarily virulent pandemics, a sequence of volcanic mega-eruptions) or even minutes (collision with a massive extraterrestrial object, accidental nuclear war), and a much longer array of worrisome trends (whose outcome can be a new world order or a historically unprecedented global environmental change) add up, even when approached with a robust belief in the problem-solving capacity of our sapient species, to an enormous challenge. Any other attitude leads to responses that differ in form but agree in their dismal substance.

Fundamentally, there is little difference between an agnostically despondent wait for a (most unlikely) miraculous delivery from these seemingly inalterable perils and a religiously charged chiliastic expectation of the Judgment Day, or between historical reprise of a sudden demise of yet another civilization (Roman and Mayan precedents are the favorite examples, albeit in many ways irrelevant) and the scientifically justified (through “terror, error, and environmental disaster”) inevitability of our final hour. All of these expectations are nothing but informal, qualitative ways of forecasting, and as I’ve said, I have a strong personal dislike of such efforts and plenty of historical evidence to demonstrate their ephemeral nature and their repeated failure to portray the complexity of future natural and human affairs.

A few of these attempts may capture important trends, but they cannot come anywhere close to capturing the setting within which these trends will unfold. Moore’s law of transistor packing on a microchip is an example of a rare accurate quantitative forecast, albeit in its revised form: the original period for doubling the number of transistors was 12 months, later extended to 18 months (Intel 2003). But when Moore formulated it in 1965, there were no microprocessors (devices made possible by this packing), and neither he nor anyone else could anticipate that four decades later the two leading customers for these remarkable devices would be the personal computer industry (making devices nonexistent and entirely unanticipated in the mid-1960s) and the car industry. (A typical U.S. car now has at least 50 embedded microprocessors, and their cost represents about 20% of the car’s retail value.)

Ephemeral lifespans and spectacular failures are the normal lot of virtually all broader forward-looking appraisals, including those that are supported by the best scientific consensus. After all, we do not live in a society where everything is powered by nuclear fission (consensus about the early twenty-first century that prevailed into the 1970s), nor in one where global affairs are run from Moscow or Tokyo. And even widely accepted interpretations of past trends, made in order to derive lessons for the future, can suffer the same fate. I offer three illustrations, two prospective and one retrospective, all related to environmental catastrophes.

Scientific consensus during the early 1970s was that the planet faced a serious spell of cooling. In 1971, Stephen Schneider, later to become well known for his warnings about the perils of inevitable and intensive global warming, argued that even if the concentrations of CO2 were to increase eightfold (an impossibility during the twenty-first century), the surface temperature would rise by less than 2°C, and a fourfold rise in aerosols (not to be ruled out during the twenty-first century) would lower the mean planetary temperature by 3.5°C. The consequence: “If sustained over a period of several years, such a temperature decrease over the whole globe is believed to be sufficient to trigger an ice age” (Rasool and Schneider 1971, 138).

By 1975, Newsweek reflected this consensus, writing about “The Cooling World” and “ominous signs . . . serious political implications . . . the drop in food output” based on “massively” accumulating scientific evidence (Gwynne 1975, 64). This planetary cooling trend was blamed, among other things, for “the most devastating outbreak of tornadoes ever recorded” and for declines in agricultural production expected during the rest of the twentieth century. Now, of course, global warming gets blamed for the increase in violent weather, and after a busy 2005 hurricane season (with a record number of named storms), capped by Katrina and the destruction of New Orleans, we were told to expect even more catastrophic events in 2006; but that year not a single major hurricane hit the United States.

The second example concerns what is said to be one of the most consequential outcomes of global warming: the fate of the Atlantic heat conveyor (fig. 5.1). As already noted, an abrupt disruption of this flow is quite unlikely, but a longer-term risk may exist. We are told (in a doomsday book by a leading British scientist) that the Gulf Stream, powered by thermohaline circulation, will be quenched by Greenland’s melting ice sheet, and that the stream’s truncation or reversal could plunge “Britain and neighboring countries . . . into near-Arctic winters” (Rees 2003, 111).

A couple of years after Rees wrote those words came the first apparent confirmation of this undesirable trend. A comparison of five surveys of ocean temperature and salinity data in the Atlantic conducted between 1957 and 2004 showed that in 1998 and 2004 the conveyor weakened significantly as more of the northward-flowing Gulf Stream was recirculated in the subtropical gyre before reaching higher latitudes, and less cold water was returning southward at depths of 3-5 km (Bryden, Longworth, and Cunningham 2005). As the media reported the coming dramatic cooling of Europe, they ignored the lead author’s caveat that the observed weakening was comparable to the estimated uncertainty of the observations. Moreover, the long-term behavior of such a highly dynamic system cannot be judged by a few episodic measurements. Indeed, new data acquired between the Bahamas and the Canary Islands demonstrated that the flow reduction reported by Bryden et al. (2005) was well within the range of a very large natural seasonal variability; the conclusion: ocean circulation is noisy, but it is not weakening (Cunningham et al. 2007).

And, most fundamentally, it is simply not true that the Gulf Stream is driven by thermohaline circulation, nor is it responsible for Britain’s warm winters. The Gulf Stream, much like the Kuroshio Stream in the Pacific and the Agulhas Stream in the Indian Ocean, is a wind-driven flow produced by the torque exerted on the ocean and sustained by the conservation of angular momentum. We have known for decades that its prime movers are solar radiation (via wind) and the Earth’s rotation (Stommel 1948). And Seager et al. (2002) demonstrated that the Gulf Stream is not responsible for Europe’s mild winters. Loss of this heat flow would have a marginal effect on Atlantic Europe because its climate does not require a dynamic ocean. Most of the continent’s winter heating is due to the seasonal release of heat previously absorbed by the ocean and advected by the prevailing winds, not to ocean heat-flux convergence (see also Seager 2006).

A fascinating retrospective example concerns the supposed choices societies make to engineer their own demise. Diamond (2004), in his book on collapse, uses the story of Rapa Nui (Easter Island) as one of the most prominent examples of anthropogenic environmental change that precipitated a societal collapse. The reckless deforestation of the island, he writes, caused a population crash and the disappearance of a mysterious moai-building civilization (fig. 5.2). This conveys a premonitory analogy to our reckless treatment of the environmental commons.

Fig. 5.1
Atlantic heat conveyor. Warm water (light path), detached from clockwise circulation in the subtropics, warms the North Atlantic, sinks in the northernmost region, and then continues as a deep southerly return flow (dark path). Based on Quadfasel (2005).

Fig. 5.2
Unfinished monumental moai at Rapa Nui (Easter Island). Photo courtesy of David Malone, Texas Tech University.

But Diamond’s narrative ignores everything but the choices attributed to thoughtless humans. In contrast, Hunt’s (2006) research identified introduced Polynesian rats as the real cause of the destruction of most of the Jubaea palm forests, and this deforestation did not trigger the collapse of a small society (most likely just 3,000 people); the post-contact (after 1722) infectious diseases and enslavement did that. The picture of man-induced environmental collapse thus rests on a simplistic explanation, but it will not be easily dislodged.

The point is made. There is no shortage of sudden and potentially catastrophic changes that could transform modern civilization. There is an even larger array of worrisome trends that could shape it. But none of these events and processes can be understood and appraised to a degree that would allow confident identification of future risks, even when each is seen in relative isolation. Our abilities to discern complex feedbacks (sometimes even just their net direction) and very long-term consequences are even weaker. Uncertainty rules, and there are no shortcuts to lead us from these dim perceptions to clearer understanding.

But we are not powerless either. Many risks can be quantified (though with significant margins of error), and many trends have relatively constrained outcomes. For example, the probability that the average fertility rate in affluent countries will double, or that the doubling of preindustrial greenhouse gas concentrations will produce a 15°C warming, is infinitesimally low. Consequently, by quantifying the odds of risky events, we can establish an approximate ranking of relative fears, and by looking at the most likely constraints of major trends, we can determine a range of possible outcomes. After that comes an even greater challenge: What can we do to lessen, if not eliminate, those risks? What steps can we take to change those worrisome trends or at least bend them in more benign directions?

Relative Fears

Appraisals of natural catastrophes that can have both a dramatic instantaneous effect and generations-long global consequences show low probabilities during the next half century, but, at the same time, such quantifications enter a realm that is alien even to those experts who routinely analyze risks. Leading hazards encountered in modern society have a fairly high frequency of fatalities, but they kill or injure people discretely and in small numbers, and many of the losses (mortality, injuries, or economic damage) are sustained through voluntary actions and exposures whose risks people almost uniformly underestimate. Annual mortality aggregates of such exposures may be relatively high, but they come to public attention only if a particular event of that kind is unusually large.

In confronting these risks we should not rely on imagination and fears. Instead, we should deploy, to the greatest extent possible, a revealing comparison of relative perils. While it is very difficult to find a uniform metric to compare risks of injuries or economic losses (both of these categories span a wide range of qualities), the finality of death makes it possible to compare the risk of catastrophic or accidental dying on a uniform basis. This evaluation is best done in terms of fatalities per person per hour of exposure, a risk assessment approach that was originally developed by Starr (1969; 1976). I use it to compare relative risks of all quantifiable events discussed in this book.

Starr concluded that the acceptability of risk from an activity is roughly proportional to the third power of its benefits, and he posited a fundamental difference in risk appraisal. When people are engaged in voluntary activities—when they feel that they are in control of their actions and when repeated experiences have shown that the outcomes are predictable—they readily tolerate individual risks (driving, overeating, smoking) that are up to 3 OM higher than those arising from involuntary exposure to natural or anthropogenic catastrophes providing a comparable level of benefits. Reexamination of Starr’s work using psychometric studies of risk perception confirmed that people will tolerate higher risks if activities are seen as highly beneficial, but it suggested that familiarity and fear (dread), rather than the voluntary nature of exposure, were the key mediators of acceptance (Fischhoff et al. 1978; Slovic 1987; 2000).

That all three of these factors are at play is illustrated by car accidents, perhaps the best example of that peculiar attitude with which humans treat voluntary, well-known, and well-accepted risks that have a high frequency but a low fatality rate per event. As noted, car accidents cause nearly 1.2 million deaths worldwide per year (WHO 2004b), but more than 90% of individual events involve the demise of just one or two people and therefore do not attract media attention. Accidents are widely reported only when the per-event mortality rate suddenly rises (albeit in absolute terms it still remains fairly small). Fog- or ice-induced pileups of dozens or scores of cars causing a dozen or more casualties are the most common events that are invariably reported.

In sharp contrast is the attitude toward terrorist attacks. Their instant fatalities are sometimes large, and their risks should not be minimized, but it is our utter ignorance regarding the time and mode of such attacks (the psychometric “unknowability” factor) and the self-inflicted, terrorizing perception of them (the psychometric “dread” factor) that wildly exaggerate their likely frequency and impact.

Most of the risks arising from long-term trends remain beyond revealing quantification. What is the probability of China’s spectacular economic expansion stalling or even going into reverse? What is the likelihood that Islamic terrorism will develop into a massive, determined quest to destroy the West? Probability estimates of these outcomes based on expert opinion provide at best some constraining guidelines but do not offer any reliable basis for relative comparisons of diverse events or their interrelations. What is the likelihood that a massive wave of global Islamic terrorism will accelerate the Western transition to non-fossil fuel energies? To what extent will the globalization trend be enhanced or impeded by a faster-than-expected sea level rise or by a precipitous demise of the United States? Setting such odds or multipliers is beyond any meaningful quantification.

Quantifying the Odds

The unavoidable yardstick for comparing the quantifiable odds of catastrophic events is general mortality, the crude death rate of a population measured annually per 1,000 people. Mortality figures have the advantage of being fairly accurate; only during war or famine is their accuracy questionable. During the first five years of the twenty-first century, the crude death rate in affluent countries ranged from 7.2 in Canada (thanks to its relatively young population) to 10.4 in Sweden, and the mean for all rich economies was 10.2. The year has 8,766 hours (corrected for leap years), and hence the average mortality of affluent nations (10/1,000) prorates to 0.000001, or 1 × 10⁻⁶, per person per hour of exposure (which in this case means simply being alive).

Put another way, in affluent countries one person out of a million dies every hour. Cardiovascular diseases account for about one-third of this total, so the overall risk of succumbing to them is about 3 × 10⁻⁷ per person per hour of exposure. I must reiterate that this is the risk of mortality averaged across an entire population. The age-specific risks of dying for populations of premature babies or people over 90 years of age will be higher; those for populations of grade-school girls or diet-conscious adult Methodists will be lower. Few people are aware of the actual magnitude of this unavoidable risk, yet most people behave as if they were taking it into account in their everyday behavior. They have no second thoughts about routinely engaging in activities or living in environments that expose them to risks of death 1 OM or more lower than the risk of general mortality. Figure 5.3 shows this for U.S. mortalities.
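The prorating behind this yardstick is simple enough to reproduce in a few lines. This is a minimal sketch of the conversion, using the chapter's figures (the function and variable names are mine, introduced only for illustration):

```python
HOURS_PER_YEAR = 8766  # 365.25 days x 24 h, correcting for leap years

def risk_per_person_hour(deaths_per_1000_per_year):
    """Convert a crude death rate (deaths/1,000 people/year) into
    fatalities per person per hour of exposure."""
    return (deaths_per_1000_per_year / 1000) / HOURS_PER_YEAR

# Affluent-country mean crude death rate of ~10/1,000 (chapter's figure)
general = risk_per_person_hour(10)   # ~1.1e-6, i.e. roughly 1 x 10^-6

# Cardiovascular diseases: about one-third of all deaths
cardio = general / 3                 # ~3.8e-7, on the order of 3 x 10^-7
```

The same conversion underlies every per-person-hour figure quoted later in the chapter; only the numerator (deaths) and the exposed population change.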

Scores of millions of people live in regions that are highly susceptible to such natural disasters as hurricanes or earthquakes, posing risks whose magnitude is only 10⁻¹⁰-10⁻¹¹ per person per hour of exposure. Even in the United States, with its poor rail transport (compared to Europe and Japan), people who travel every day by train enjoy the safest form of public transportation. Traveling by train has a fatality risk of about 10⁻⁸, adding a mere 1% to the overall risk of dying while en route. Similarly, the latest generation of jet planes is so reliable that only a rare pilot error (often during inclement weather) causes major accidents. Between 2002 (when there was not a single accident) and 2005 the risks of U.S. commercial aviation were only about 1 × 10⁻⁸ (identical to the risk of suicide) as some 600 million passengers boarded planes for trips averaging 2 hours (NTSB 2006). And even during the tragic year of 2001 the annual nationwide mean was about 3.3 × 10⁻⁷, still 1 OM below the risk of general mortality.

Fig. 5.3
U.S. fatalities per person per hour of exposure vs. average annual number of fatalities, 1991-2005. Both axes are logarithmic. Calculated from data published by Centers for Disease Control and Prevention, National Transportation Safety Board, National Weather Service, and U.S. Geological Survey.

Most people also tolerate activities that temporarily increase the overall risk of dying by 50% or that may even double it. Most notably, people drive, and many still smoke. Driving carries a risk of about 5 × 10⁻⁷ in the United States, adding on average 50% to the overall risk of dying. Smoking nearly doubles the risk of general mortality, to about 8 × 10⁻⁷ in the United States (see fig. 5.3). At the same time, most people shun actions whose relative risk is a multiple of the basic risk. Hang gliding carries a risk of more than 2 × 10⁻⁶, and only a very small group of determined risk seekers engages in such exceedingly risky activities as fixed-object jumping (be it from Yosemite’s El Capitan or from skyscrapers or bridges), whose U.S. fatalities I have calculated to be as high as 10⁻² per person per hour of exposure (with individual exposure usually lasting less than 30 s).

The same statistical approach can be used for quantifying the risks of rare but highly destructive natural disasters. But all these calculations require three key uncertain assumptions regarding the frequency of specified events, the number of people exposed to these risks (this requires a highly questionable averaging over very long time spans), and total fatalities. Assuming a frequency of 100,000 years to 2 million years for the Earth’s encounter with an asteroid 1 km in diameter and a steady global population of 10 billion people, the total of 1-1.5 billion worldwide deaths would translate to 2 × 10⁻¹⁰ to 5 × 10⁻¹² fatalities per person per hour of exposure. The risk of residents of Tokyo or New York dying as a result of a smaller (200-400 m diameter) asteroid’s striking their cities (assuming current populations and a return frequency of once in 100,000-300,000 years) would be about 3 × 10⁻¹².

A mega-eruption at Yellowstone, with a recurrence interval of 700,000 years and 5-50 million deaths among a future stationary population of 300 million people living in the United States downwind of the hotspot, would carry a risk of 3 × 10⁻¹² to 3 × 10⁻¹¹ per person per hour of exposure. A Toba-like event, with a frequency of 700,000 years and a worldwide (mostly Asian) death toll of at least 500 million people, would have a fatality risk rate of 1 × 10⁻¹¹. Various scenarios of megatsunamis generated by near-offshore asteroid impacts or volcanic mega-eruptions and destroying major coastal urban areas yield the same magnitudes (10⁻¹¹-10⁻¹²) of risk to individuals. These risks are similar to those arising from such common natural disasters as Japanese or Californian earthquakes, but this comparison also shows the limits of this statistical approach. The common indicator of deaths per person per hour of exposure does not distinguish between relatively frequent low-mortality events and extremely rare events that could bring death to hundreds of millions or even more than a billion people.
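The arithmetic behind these rare-catastrophe figures prorates an event's death toll over the exposed population and the event's recurrence interval. A sketch, using the chapter's assumed inputs (function name and parameter choices are mine):

```python
HOURS_PER_YEAR = 8766  # 365.25 days x 24 h

def catastrophe_risk(deaths, population, recurrence_years):
    """Prorated fatalities per person per hour of exposure for a rare
    event: deaths spread over the exposed population and the hours in
    one recurrence interval."""
    return deaths / (population * recurrence_years * HOURS_PER_YEAR)

# 1-km asteroid: 1-1.5 billion deaths, 10 billion people exposed,
# recurrence between 100,000 and 2 million years (chapter's assumptions)
low  = catastrophe_risk(1.0e9, 1e10, 2_000_000)  # ~6e-12
high = catastrophe_risk(1.5e9, 1e10, 100_000)    # ~2e-10

# Yellowstone mega-eruption: up to 50 million deaths among 300 million
# people downwind, every ~700,000 years
yellowstone = catastrophe_risk(5e7, 3e8, 700_000)  # ~3e-11
```

Note how the recurrence interval dominates: the same function applied to an annual hazard would give a risk many orders of magnitude higher for the same death toll, which is exactly the averaging limitation discussed above.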

Averaging makes perfect sense for such high-frequency risks as hurricanes and tornadoes (or airplane disasters), whose annual toll may differ by up to 1 OM but which recur every year. Averaging is also revealing when assessing the risks of phenomena with a recurrence rate of decades or centuries (major earthquakes, larger volcanic eruptions). But no society has any experience with preparing for a risk whose fatalities may prorate to a modest total of 1,000/year but which is equally likely to take place tomorrow or 100,000 years from now (or not to materialize at all during the next million years), and which, when it does occur, may kill hundreds of millions.

Different problems arise when this individual risk calculus is used to quantify the probabilities of violent deaths. In 2005 worldwide deaths in terrorist attacks totaled fewer than 15,000 (NCC 2006), which translates into 2.5 × 10⁻¹⁰ fatalities per person per hour of exposure, again on par with the risks of infrequent natural catastrophes. But that rate is not a meaningful average because this violence was concentrated. The rates in Finland or Japan were nil; for U.S. civilians (56 killed in Iraq), the rate was 2 × 10⁻¹¹; for Colombian citizens, it was 2 × 10⁻⁹. And because more than half of all deaths due to terrorism (~8,300) took place in Iraq, the Iraqi rate was about 3 × 10⁻⁸ and the Baghdad rate (>5,000) was 1 × 10⁻⁷.

The last number illustrates the problems of comparing different risks using individual exposure metrics. The Baghdad death risk was of the same order of magnitude as the risk faced by an average American while driving. But there are critical differences of exposure, understanding, and dread. The average time spent behind the wheel in the United States, voluntarily and with statistically well-known consequences, is less than 1 hour per day (and many people actually enjoy driving), whereas most Baghdadis could become involuntary victims of dreaded violence that can strike unexpectedly and in a variety of horrible ways (kidnapping of children, sectarian beheading, suicide bombing).

Similarly, the risk calculations reveal a very low probability of dying in a terrorist attack against U.S. citizens in the United States. The World Trade Center garage bombing in 1993 killed 6 people, the Oklahoma City bombing in 1995 caused 168 deaths, and the September 11, 2001, attacks killed 3,200 people in New York, Washington, and Pennsylvania. Consequently, during the 15 years between 1991 and 2005 the overall risk of death in a terrorist attack (using 275 million as the mean population for the period) was about 1 × 10⁻¹⁰ per person per hour of exposure, only slightly higher than the risk of dying in a blizzard in the snowy part of the United States. When the deaths of U.S. citizens killed in terrorist attacks abroad are also included—those in the 1996 bombing of Khobar Towers in Saudi Arabia (19 deaths), the bombing of embassies in Nairobi and Dar es Salaam (301 deaths, including 12 U.S. citizens), and the Yemeni attack on the USS Cole (17 deaths)—the increase is insignificant.
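The U.S. figure can be checked directly by prorating the three attacks' combined death toll over the period's population-hours. A sketch with the chapter's numbers (variable names are mine):

```python
HOURS_PER_YEAR = 8766  # 365.25 days x 24 h

# Deaths in terrorist attacks on U.S. soil, 1991-2005 (chapter's figures):
# 1993 WTC garage bombing, 1995 Oklahoma City, 9/11
deaths = 6 + 168 + 3200

population = 275e6   # mean U.S. population over the period
years = 15

# Risk per person per hour of exposure (exposure = simply being alive)
risk = deaths / (population * years * HOURS_PER_YEAR)  # ~9e-11, i.e. about 1e-10
```

The result, just under 1 × 10⁻¹⁰, is what makes the blizzard comparison in the text possible: both hazards end up in the same order of magnitude once averaged over the whole population and period.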

Finally, because the invasions of Afghanistan and Iraq would probably not have taken place absent 9/11, it is plausible to argue that all U.S. military casualties (including noncombat deaths) in those countries (3,000 by the end of 2006) should be added to that total. The overall risk from terrorist attacks and the military response to them then about doubles, to 2 × 10⁻¹⁰, still 1 OM below the risk of dying from homicide (7 × 10⁻⁹) and 3 OM below the risk of fatal car accidents averaged over the same period (see fig. 5.3). During the first five years of the twenty-first century, the U.S. highway death toll exceeded the 9/11 fatalities every single month; at times it did so in just three weeks.

Even one of the worst cases from leaked terrorist attack scenarios prepared by the Department of Homeland Security (Jakes 2005) does not imply an extreme risk. The spraying of anthrax from a truck driving through five cities over two weeks was estimated to kill 13,200 people. If these actions were to take place in metropolitan areas with populations of at least 2 million people each and be repeated every ten years, then even such an unlikely recurrence would prorate to only 1.5 × 10⁻⁸ fatalities per person per hour of exposure, a risk lower than the risk of dying from an accidental fall and less than one-thirtieth of the risk due to driving.
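Even this worst-case scenario reduces to the same prorating formula. A sketch with the scenario's stated inputs (the ten-year recurrence is the chapter's deliberately pessimistic assumption, not a prediction):

```python
HOURS_PER_YEAR = 8766  # 365.25 days x 24 h

deaths = 13_200          # estimated toll of the DHS anthrax scenario
exposed = 5 * 2_000_000  # five metropolitan areas of >= 2 million each
recurrence_years = 10    # assumed repetition every ten years

# Prorated risk per person per hour of exposure
risk = deaths / (exposed * recurrence_years * HOURS_PER_YEAR)  # ~1.5e-8
```

At roughly 1.5 × 10⁻⁸, the result sits well below the ~5 × 10⁻⁷ risk of driving quoted earlier, which is the basis of the "less than one-thirtieth" comparison.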

But, again, such perfectly valid comparisons are seen as odious, even utterly inappropriate. They pit voluntary activities undertaken by people who feel in control of actions they understand (presuming they can manage any risky situation) and who believe the outcomes are predictable (driving, walking downstairs) against involuntary exposures to unknown, unpredictable, and dreaded outbursts of violence. They also compare time-limited activities with a threat that is always there. These differences explain why the two sets of risks are viewed quite differently, and why this sort of violence is always seen as more dangerous than it really is.

Of course, this perception of a much dreaded event could become entirely justified, and the calculus of risk would profoundly change if fissionable materials were used in a terrorist attack. Fortunately, such an event would have to be preceded by many steps: a terrorist’s gaining control of a state that possesses nuclear bombs, or stealing a bomb from a silo or submarine, transporting it undetected to a target, and being able to discharge it. Even so, some weapons experts argue that this could be done rather easily; others doubt it (Bunn and Weir 2005; Zimmerman and Lewis 2006).

Such irreconcilable expert appraisals are not uncommon in the assessment of risks. Some of them were noted in previous chapters, most notably, U.S. deficits as a harbinger of economic demise or a sign of economic strength, and global warming as a catastrophic or tolerable, perhaps even beneficial, change. In the absence of requisite detailed information it may be counterproductive to try to reconcile many of these disparities. What is needed instead is to have rational frameworks within which to respond to these risks and, most important, to insist on taking a number of well-known effective steps that would minimize these risks or at least moderate their consequences.

Rational Attitudes

A collective long-term probability approach must become a part of our reaction to sudden catastrophic events such as terrorist attacks or historically unprecedented natural catastrophes. Many of these events represent much lower dangers and have less profound and long-lasting consequences from the viewpoint of national stability, economic damage, and standard of living than do many voluntary risk exposures (drinking, driving, smoking, overeating) and deliberate yet deleterious policy actions (enormous budget and trade deficits, wasteful subsidies, unchecked environmental destruction).

Indeed, Chapman and Harris (2002) argued that the disproportionate reaction to 9/11 was as damaging as the direct destruction of lives and property. In relative terms, the human toll and the economic damage were not unprecedented. As noted, discussing catastrophic premature deaths in statistical terms is a perilous enterprise, but this should not prevent us from realizing that the death of 3,200 people on 9/11 was equal to just four weeks of car fatalities averaged from the U.S. total of 38,615 road traffic deaths in 2001 (USDOT 2006) or less than two weeks’ worth of U.S. hospital deaths (44,000-98,000/year) caused by documented and preventable medical errors (Kohn, Corrigan, and Donaldson 2000).

Figure 2.24 made the same point in global terms, comparing the average annual fatalities from terrorist attacks with those from transportation accidents, major natural disasters, and errors during hospitalization for the years 1970-2005. As for the total (combat and noncombat) U.S. troop casualties in Iraq, Buzzell and Preston (2007) calculated that between March 2003 and September 2006 they averaged 4.02/1,000, roughly three times the rate for a comparable age and sex cohort of young men and women in the United States (about 1.32/1,000) but significantly less than the deaths from homicide for young black males in Philadelphia.

Nor is the economic impact of the 9/11 attack entirely unprecedented. Evaluation of its costs depends on the spatial and temporal boundaries imposed on an analysis. New York City’s Comptroller reported a year after the attack, on September 4, 2002, that the attack might cost the city up to $95 billion, including about $22 billion to replace the buildings and infrastructure and $17 billion in lost wages (Comptroller 2002). A broader national perspective, taking into account the longer-term decline in GDP, declines in stock values, losses incurred by the airline and tourist industries, higher insurance and shipping rates, and increases in security and military spending, leads to totals in excess of $500 billion (Looney 2002).

In his November 2004 message to Americans, Usama bin-Ladin cited this total as he explained that the attacks were also chosen in order to continue the “policy of bleeding America to the point of bankruptcy,” a goal aided by “the White House that demands the opening of war fronts” (bin-Ladin 2004, 3). Facetiously, he added that it appears to “some analysts and diplomats that the White House and we are playing as one team” towards “the economic ruination of the country” (“even if the intentions differ”), and he cited the Royal Institute of International Affairs estimate that the attacks cost Al Qaeda just $500,000, “meaning that every dollar . . . defeated a million dollars.”

This is an interesting but inaccurate interpretation. Most of the losses proved to be only temporary. Elevated monthly U.S. stock market volatility was as transitory in the wake of 9/11 as it was with other market disruptions (Bloom 2006). GDP growth, commercial flying, and tourism recovered to pre-9/11 levels with remarkable rapidity. And five years after the attack the U.S. budget deficit was a lower share of the country’s GDP than in France or Germany. But even an overall loss of half a trillion dollars would not have been without precedent in the realm of nonviolent events. That much money was lost on a single day, October 19, 1987, when the New York stock market fell by a record 22.9%, a drop nearly twice as large as the infamous 1929 crash.

Risk studies offer revealing observations relevant to the perception and appraisal of catastrophes and trends (Morgan and Henrion 1990; Slovic 2000; Morgan et al. 2002; Sunstein 2003; Gigerenzer 2002; Renn 2006). First, in the immediate aftermath of catastrophic acts the stricken populations have strongly exaggerated perceptions of a repeat event. This has been true not only about constant post-9/11 expectations of another terrorist attack in the United States but also about almost psychotic fears of another massive hurricane in the aftermath of Katrina. Second, unfamiliar risks and those that appear to be impossible or very hard to control elicit a disproportionately high public fear. This can lead to patently excessive responses compared to reactions to old recurrent hazards or those that can produce further, indirect damage. The more dreadful and the more unexpected a risk, the more the public clamors for protection and a solution, and the more the public and governments are prepared to spend.

The post-9/11 haste to set up the Department of Homeland Security, an unwieldy and ineffective bureaucratic conglomerate of dubious utility, is a perfect example of such counterproductive public reaction, and reduced air travel and increased frequency of car trips illustrate irrational individual responses to the tragedy. Gigerenzer (2002) demonstrated that during the first 12 months after the 9/11 attack, nearly 1,600 U.S. drivers (six times the number of passengers killed on the four fatal hijacked flights) lost their lives on the road by trying to avoid the heightened risk of flying. Sunstein (2003, 121) makes another important point: “When strong emotions are involved, people tend to focus on the badness of the outcome rather than on the probability that the outcome will occur.” This “probability neglect” helps to explain why societies have excessive reactions to low-probability risks of spectacular and horrific events. Terrorists operate with an implicit understanding of these realities.

This leads to some uncomfortable conclusions. Systematic post-9/11 appraisals have left us with a much better appreciation of the variety of threats we might be facing, but have done nothing to improve our capacity for even the roughest ranking of the most likely methods of globally significant future attacks and hence for a more rational deployment of preventive resources. Nor do we get the basic directional feeling regarding the future threats: Will they be more spectacular versions of successful attacks (e.g., striking New York’s subway stations)? Or will they be copies of failed attempts, much as the prevented 2006 trans-Atlantic airliner attacks were copies of Ramzi Yusuf’s plot prepared for 1995? Or will they be dreaded premieres of new ways of spreading death and mass fear (a dirty bomb, successful large-scale release of a pathogen), or even some as-yet-unidentified attacks?

All we know with certainty is that historical lessons are clear: ending terror everywhere is impossible. Terrorism is deeply rooted in modern culture, and even a virtual eradication of one of its forms or leading groups settles little in the long run because new forms and new groups may emerge unexpectedly, as did plane hijackings in the 1960s and al-Qaeda in the 1990s. Analogies between militant Islam and the Mafia are instructive. As Cottino (1999) explained, the Mafia’s persistence is rooted in Sicily’s culture of violence, which represents the historical memory of a particular world, a mindset resistant to change.

Analogously, militant Islam is rooted in deeply persistent historical memories. Westerners do not appreciate the extent to which the Crusades, the most explicit expression of ancient European aggression against the Muslim world, are alive in many disaffected Muslim minds (Hillenbrand 2000; Andrea 2003). Add to this the feelings of more recent humiliations brought about by the post-World War I European expansion and the carving up of the Ottoman Empire, the Western (particularly U.S.) support of Israel, and a deepening perception of unjust economic exploitation. As bin-Lādin (2002) said in his letter to Americans, the existing Middle Eastern governments “steal our ‘umma’s wealth and sell it to you at a paltry price.” What can the West do? Keep apologizing for events that took place 800-900 years ago, work for reinstating the caliphate, stop consuming Middle East oil, consent to the destruction of Israel? In the absence of such actions accumulated Muslim grievances will continue to spread (as Muslim populations grow) and deepen (as defeat of the West remains elusive).

And the Mafia analogy also suggests an important conclusion. Anti-terrorism strategy should be framed not as a war but as a repressive action against a cellular, secretive, networked, violent organization. A clear collateral of this reality is that there can be no meaningful end to this effort, and no victory (Schneider and Schneider 2002). Streusand and Tunnell (2006) made a thoughtful plea that in this effort we should use language that can actually help to fight Islamic terrorists. Allah should simply be translated as God and not transliterated because that treatment exaggerates the difference among the three monotheistic faiths. Jihād (whose primary meaning was explained in chapter 3) should not be used as a blanket label for brutal indiscriminate terrorist killings. Hirabah, sinful warfare waged contrary to Islamic law, is the proper term, and so is mufsid (evil, corrupt person), not mujāhid. Such terms, readily understood by Muslims, would remove any moral ambiguity from our dealings with al-dār al-islām.

But perhaps nothing is more important for the exercise of rational attitudes than always trying to consider events within longer historical perspectives and trying to avoid the chronic affliction of modern opinion makers who tend to favor extreme positions. The product of these ephemerally framed opinions is a range of attitudes and conclusions that resemble the cogitations of an unstable manic-depressive mind. Unrealistic optimism and vastly exaggerated expectations contrast with portrayals of irretrievable doom and indefensibly defeatist prospects. Examples of such extreme contrasts are easy to find in any area where future risks and trends have to be assessed rationally in order to come up with rational responses.

After 9/11, Fallaci (2002), who spent decades documenting political ambitions and wars and who deeply understood violence and personal danger, was unequivocal in her passionate appeal to the West that “the worst is still to come” and that Europe is committing collective suicide. But Fallows (2006), President Carter’s former speechwriter, claiming that he voiced a broad consensus of Washington D.C. experts, concluded that the United States is succeeding in its struggle against terrorism, that “we have achieved a great victory,” and that “the time has come to declare the war on terror over.”

The fortunes of the dot-com economy exemplify how the same commentators can switch between two attitudes at the first indication of a shift. The new economy of the 1990s was to usher in an era of endless economic growth—heady valuations of companies that had nothing to sell, a bestselling book entitled Dow 40,000. But once the bubble burst, a new episode of U.S. economic difficulties (contrasted with the rise of China and the buoyant euro) swiftly became a harbinger of a collapsing dollar and an unstoppable demise. In reality, the new economy was not all that new, and it has not measured up to the truly great inventions of the past (Gordon 2000; Smil 2004). And an imminent U.S. economic collapse was yet again postponed.

While many commentators on the U.S. economy of the early 2000s indicate a system in considerable distress, there are also signs of relative resilience and even of continued unrivaled primacy. In 2005, U.S. government debt was equal to about 64% of GDP, a lower share than that of EU-12 (about 71%), slightly lower than the French or German share, and an almost 10% lower share than in 1996 (73%). A surprise: the U.S. government has actually been less reckless in living beyond its means than were the governments of the EU countries (fig. 5.4). And the United States still dominates the activities that have created the global electronics civilization. In 2005 the country had 116 of the world’s top 250 companies (ranked by revenue) in this sector (electronics, telecommunications, and information equipment, software and allied services), Europe had 42, and China and India had none (OECD 2006).


Fig. 5.4
Gross government debt as a percentage of GDP in seven countries, 1990-2005. From OECD (2006).

Moreover, by 2004, 30% of the world’s scientific publications were still authored by U.S. researchers, compared to 6.5% in China (Zhou and Leydesdorff 2006). Adjusted for population size, this means that the U.S. output was 20 times higher than in China. Judged by these measures, China will not be taking over anytime soon. And by 2006 it also became quite clear that Boeing, whose fortunes fluctuated for years, was a success after all. Years of writing off Boeing and crowning Airbus as a new king of aviation are gone (for the time being). Boeing, with its new 787 and a new version of the venerable 747, is gathering record orders, while Airbus, with its superjumbo A380 schedule slipping, is sinking deeper into managerial ineptitude and technical and financial troubles.

To give a notable social example of reality that contravenes a preconceived conclusion, Huntington’s (2004) argument about the dangers of Hispanic immigration—because of the common language, proximity of mother countries, regional congregation, residential segregation, and less interest in assimilation—is not supported by research done in Los Angeles and San Diego. Rumbaut et al. (2006) found that even in the country’s largest Spanish-speaking region that is contiguous with (and historically a part of) Mexico, the preference for speaking Spanish at home declines only slightly slower than for other immigrant languages, getting close to a natural death by the third generation (fig. 5.5). So the prospect is not that good for the Mexican reconquista of the U.S. Southwest.


Fig. 5.5
Proportion of immigrants in Los Angeles and San Diego who speak their mother tongue at home, by generation. Based on Rumbaut et al. (2006).

Rational attitudes should inform our deliberate decisions. What we get instead are increasingly splintered approaches to major challenges facing modern societies, perpetuated and accentuated by powerful special-interest lobbies and pressure groups, be they corporate relief seekers or Green warriors. The resulting adversarial and confrontational attitudes to modern policy making make it harder to formulate consensual steps. The key advantage of this prevailing approach is that it should help to minimize the probability of rash, overreaching decisions, but too many unfortunate decisions that have been made belie this hope. Consequently, I always find myself arguing that we should act as risk minimizers, as no-regrets decision makers who justify our actions by benefits that would accrue even if the original risk assessments were partial, or even complete, failures.

Acting as Risk Minimizers

There is nothing we can do to avert such low-probability natural catastrophes as volcanic mega-eruptions or mega-tsunamis generated by asteroid impacts or by the most powerful earthquakes. In this sense, our civilization is no different from the cuneiform or hieroglyphic realms of the Middle East 5,000 years ago. We are getting better at anticipating some volcanic activity—clear warnings given ahead of Mount Pinatubo’s 1991 eruption enabled the evacuation of more than 50,000 people and the limitation of fatalities to fewer than 400 (Newhall et al. 1997)—but discerning the likely magnitude of an eruption remains beyond our ability.

Earthquake prediction has been no less elusive. Minimal advance warnings have been sufficient to prevent any catastrophic derailment of Japan’s rapid trains. The earthquake detection system devised for shinkansen picks up the very first seismic waves reaching the Earth’s surface, uses this information to determine risk levels, and can halt trains or at least slow them down before the main shock arrives (Noguchi and Fujii 2000). USGS now makes routine 24-hour forecasts of the probability of aftershocks in California, and it continues its multidisciplinary research in the town of Parkfield, situated on the San Andreas fault, aimed at eventual better prediction of earthquakes.

But we can do much to be better prepared for a number of anticipated catastrophes, and we can take many steps to moderate negative impacts of some of the most worrisome trends. The precautionary principle should be invoked precisely when facing those high risks whose understanding is characterized by uncertainty or outright ignorance. Preventive, preparatory, or mitigating actions are called for in order to avoid extreme consequences of unmanaged outcomes, whether a viral pandemic, global warming, or the use of weapons of mass destruction by terrorists. In such situations imperfect scientific understanding or substantial uncertainties regarding the most likely mode and intensity of the next catastrophic event should not be used as an excuse for inaction or for postponing known effective measures.

We should act incrementally as prudent risk minimizers and pursue any effective no-regrets options. We do not have to wait for the formulation and acceptance of grand strategies, for the emergence of global consensual understanding, or for the universal adoption of more rational approaches. In order to illustrate the wide-ranging opportunities for effective or potentially promising no-regrets measures, I give examples for such disparate challenges as preventing an unlikely encounter with a sizable asteroid, preparing for a new influenza pandemic, eliminating alien invasive species from islands, and preserving the richest realms of biodiversity.

Surprisingly, we may soon be able to defend ourselves against an extraterrestrial object that is on a collision trajectory with the Earth. Once we have surveyed all NEOs larger than 1 km (the target date is the end of 2008) we will be able to calculate the risk of a major terrestrial impact with unprecedented completeness and accuracy. Moreover, at the current rate of discovery some 90% of all NEOs large enough to pose a global risk should be detected by the year 2020 (Rabinowitz et al. 2000). A low-probability surprise of encountering a new asteroid large enough to do globally significant damage will remain, and it is for this eventuality that serious proposals have been made to destroy the object or deflect it from its potentially destructive path (Milani 2003). Only a smaller body would be a candidate for the first option. Breaking up a large object would create a rain of still fairly large and randomly tumbling pieces and actually exacerbate the problem.

Deflection would also demand much less energy. Only 3 mJ/kg would be needed to change a body’s velocity by about 10 cm/s, enough to convert a predicted hit into a miss a year later (Chapman, Durda, and Gold 2001). If we could dock with an object and push it, we could, with adequate warning, perform such tasks. For example, a few minutes’ burn of the first stage of a Delta 2 rocket could deflect a 100-m-diameter object with six months’ warning. Given the magnitude of damage that could be avoided by such a maneuver, it seems prudent for national space agencies to have well-funded long-term programs to develop and explore the most suitable techniques that might avert a potential global catastrophe (Schweickart and Chapman 2005). Compared to monies spent on risky, expensive, and repetitive manned missions, this relatively modest investment would be a most valuable form of planetary insurance.
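
A back-of-envelope check of these deflection figures is possible (a sketch only; the simple kinetic-energy and along-track-drift formulas below ignore orbital mechanics, which typically amplifies the eventual miss distance):

```python
def deflection_energy_per_kg(delta_v):
    """Minimum kinetic energy per kilogram needed to change an
    object's speed by delta_v (in m/s): E/m = v^2 / 2."""
    return 0.5 * delta_v ** 2  # J/kg

def along_track_drift(delta_v, years):
    """Distance (m) the object drifts along its path after the given
    number of years, ignoring orbital amplification of the offset."""
    seconds_per_year = 3.156e7
    return delta_v * years * seconds_per_year

dv = 0.10  # 10 cm/s, the velocity change cited by Chapman, Durda, and Gold (2001)
print(deflection_energy_per_kg(dv))     # 0.005 J/kg = 5 mJ/kg, same order as the cited 3 mJ/kg
print(along_track_drift(dv, 1) / 1000)  # ~3,156 km after one year, roughly half Earth's radius
```

Even this crude estimate shows why a year of warning matters: the drift alone is comparable to the Earth’s radius, and the change in orbital period adds to it.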

As for a new viral pandemic, we are incomparably better prepared than in 1918. At that time there were no mechanical respirators, no readily available supplemental oxygen, and no antibiotics to treat secondary infections. And we are better off than we were even in 1968-1969, during the last flu pandemic. Our scientific understanding (virological, genetic, epidemiological) is vastly superior, new antiviral drugs afford some preventive capability, and there is a much better system for the near-instantaneous sharing of relevant information and for coordinating an effective response. Not surprisingly, major uncertainties remain.

None are more important than the pathogenicity and the morbidity profile of the next pandemic: a recurrence of the W-shaped pattern (see fig. 2.17) typical of the 1918-1919 event would have a severe impact even if overall pathogenicity remains fairly mild (MacKellar 2007). Complex mathematical models can help us to estimate the needed stockpiles of vaccines and their modification and targets as well as to optimize quarantine measures (D. J. Smith 2006). We cannot be certain about the effect of massively distributed antivirals, but stochastic simulations are encouraging. A large-scale epidemic simulation by Ferguson et al. (2006) found that international travel restrictions would only delay the spread of a pandemic (by 2-3 weeks) unless 99% effective, but if antivirals are administered to half the affected population within a day of symptom onset, they could, together with school closures, cut the clinical attack rates by 40%-50%, and a more widespread use of drugs could reduce the incidence by more than 75%.

Similarly, a simulation for Southeast Asia showed that as long as the basic reproductive number (the average number of secondary infections caused by a typical infected individual) remained below 1.6, a response with antiviral agents would contain the disease, and prevaccination could be effective even if the mean reproductive number were as high as 2.1 (Longini et al. 2005). Another simulation for the region confirmed that elimination of a nascent epidemic would be practical if the basic reproduction number remained below 1.8 and there were a stockpile of at least 3 million courses of antiviral drugs (Ferguson et al. 2005).
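
The threshold logic behind these containment estimates can be sketched with a minimal deterministic generation model (the intervention efficacy below is an illustrative assumption, not a result from the cited stochastic simulations):

```python
def outbreak_generations(r0, intervention_efficacy, initial_cases=10, generations=10):
    """Expected cases per serial-interval generation when interventions
    scale the basic reproductive number R0 down by a fixed efficacy.
    The outbreak grows if the effective R exceeds 1 and decays otherwise."""
    r_eff = r0 * (1 - intervention_efficacy)
    cases = [float(initial_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * r_eff)
    return r_eff, cases

# With R0 = 1.6 (the containment threshold cited by Longini et al. 2005),
# a hypothetical intervention blocking 40% of transmissions yields
# an effective R of 0.96 < 1, so the outbreak shrinks generation by generation.
r_eff, cases = outbreak_generations(1.6, 0.40)
print(r_eff)                 # 0.96
print(cases[-1] < cases[0])  # True: the epidemic dies out
```

The same arithmetic explains why the models are so sensitive to the assumed reproductive number: at R0 = 2.1 the identical intervention leaves the effective R above 1 and the case count grows without limit.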

The limited supply of the most efficacious antiviral agent, Roche’s Oseltamivir (Tamiflu), caused a great deal of temporary concern (even panic) in 2005, but new licensing agreements and new synthetic methods increased the drug’s annual production rate by 2 OM between 2003 and 2007 (Enserink 2006). But the treatment (10 capsules) remains expensive, and it is only presumed to be as useful against an eventual pandemic strain as it is against seasonal viruses. There is also a high probability that any large-scale use for prophylaxis and treatment will lead to the evolution of drug-resistant strains (Regoes and Bonhoeffer 2006). The only effective prevention is vaccination, but because the form of the next pandemic cannot be predicted, the only foolproof strategy is the preparation of high-yield seed viruses of all 16 HA subtypes so they would be ready for potential mass production of vaccines. The ultimate, still elusive, goal—a vaccine that could protect against all human influenza strains—is being pursued by academic and big pharma researchers (Kaiser 2006).

Combating invasive species by simply removing them from infested islands is another no-regrets strategy that has already yielded some gratifying results. Eradication of alien terrestrial and aquatic species on continents is virtually impossible because the invaders readily migrate to adjacent habitats. Until the 1980s it was thought impossible to do the job even on relatively small islands. But by 2005, 234 islands had been cleared of invasive rats; 120 islands, of goats; 100 islands, of pigs; and 50 islands, of invasive cats and rabbits (Krajick 2005). A notable example of a recent success is ridding the Galápagos Islands of feral goats (already gone from Isabela, Santiago, and Pinta); now the aim includes cats and rats (Kaiser 2001; Guo 2006). Results are seen quickly as native vegetation returns and previously decimated songbirds or lizards reclaim their habitats.

And it turns out that preserving the richest repositories of biodiversity does not have to be unrealistically expensive. James, Gaston, and Balmford (1999) put the annual cost of safeguarding the world’s biodiversity at about $17 billion added to the inadequate $6 billion spent currently. Needed are adequate budgets for already protected areas and the purchase of additional land in order to extend coverage to a minimum standard of 10% of area in every major biodiversity region. Total cost would be equivalent to less than 0.1% of the combined GDP of the United States and the European Union, less than 5% of the monies now spent annually on generally environmentally harmful agricultural subsidies, and less than Westerners spend annually on yachts or perfume.
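
A rough arithmetic check of that GDP comparison bears it out (the GDP figures below are my approximate mid-2000s assumptions, in trillions of dollars, not values from the source):

```python
# Sanity check of the James, Gaston, and Balmford (1999) comparison.
us_gdp_trillion = 13.0   # assumed approximate mid-2000s U.S. GDP
eu_gdp_trillion = 13.5   # assumed approximate mid-2000s EU GDP

total_cost_billion = 17 + 6  # $billion/yr: proposed addition plus current spending
combined_gdp_billion = (us_gdp_trillion + eu_gdp_trillion) * 1000

share = total_cost_billion / combined_gdp_billion
print(round(share * 100, 3))  # well under the 0.1% of combined GDP cited in the text
```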

By far the best example of a rich opportunity to deploy a no-regrets strategy is minimizing the future magnitude of global warming. I hasten to emphasize that this strategy should not feature the currently fashionable carbon sequestration (IPCC 2005). To keep on generating ever larger amounts of CO2 and to reduce its climate impacts by storing billions of tonnes of the compressed gas underground is decidedly a distant second-best approach. As long as we depend heavily on the combustion of fossil fuels, the reduction of atmospheric CO2 levels would best be accomplished by striving for the lowest practicable energy flows through our societies, a strategy that would result in significant cuts of greenhouse gas emissions regardless of the imminence or the intensity of anticipated temperature change.

This risk-minimizing strategy would be insurance against prevailing uncertainties and dramatic surprises, but this benefit would not be the only, not even the most important, reason for its adoption. Most important, lower emissions of CO2 would require reduced consumption of fossil fuels, a goal we should have been pursuing much more aggressively all along because of its multiple benefits. These gains include a large number of environmentally desirable changes and important health and socioeconomic benefits. Examples include less land destruction by surface coal mining, lower emissions of acid-forming gases, reduced chances for major oil spills, cleaner air in urban areas, improved visibility, and slower acidification of the ocean. By reducing the overall exposure to particulates, SO2, NOx, hydrocarbons, ozone, heavy metals, and ionizing radiation from coal burning, moderated fossil fuel combustion would lower the morbidity of exposed populations and improve their life expectancies. These changes would also improve the collective quality of life by lowering health care costs.

These conclusions are supported by large-scale epidemiological studies of excess mortality as well as by morbidity comparisons. For example, a multicity analysis of mortality found that a daily increment of 20 μg/m3 of inhalable particulate matter (produced largely by power plants and vehicles) increases mortality by about 1% (Samet et al. 2000). And the benefits of reduced photochemical smog are illustrated by comparing Atlanta’s acute asthma attacks and pediatric emergency admissions during the Olympic Games of 1996 (when measures that reduced traffic by some 30% were in effect) with the same periods during the previous and the following year (Friedman et al. 2001): asthma attacks fell by 40% and pediatric emergency admissions declined by 19%. Ask any asthmatic child or its parents if these are not good enough reasons to reduce the emissions. Moreover, lowering the energy intensity of economic output would increase a nation’s competitiveness in foreign markets, a development that would benefit the balance of payments and create new employment opportunities.

But the historical lesson of the long-term impact of more efficient energy conversion is clear: it promotes rather than reduces aggregate energy consumption. Consequently, in all affluent countries where per capita energy use is already 1 OM higher than in populous modernizing nations (U.S., 350 GJ; China, 40 GJ; France, 170 GJ; India, 20 GJ), all future efforts to reduce specific carbon emissions (per vehicle, per kilometer driven, per kilowatt hour of electricity generated, per kilogram of steel smelted) must be combined with efforts to reduce overall per capita consumption of carbon-intensive commodities and services. Otherwise more efficient conversion will merely keep expanding the affluent world’s already excessive use of energy.

I am tired of hearing that this cannot be done in a free market setting when it easily could be. The standard mantra is that one cannot regulate individual choice, that people yearning to drive a 4-tonne military assault machine to take them to a shopping center should be free to do so. This argument is risibly immature and utterly inconsistent because the purchase and use of such a vehicle is already subject to a multitude of restrictions and limits that are designed to increase safety, protect environmental quality, and promote social equity: seatbelts, airbags, unleaded low-sulfur gasoline, mandated minimum fuel efficiency, scores of traffic rules, and taxes paid on the vehicle’s purchase and with every tank fill-up.

If we accept as normal and civilized stopping at red lights to safeguard the lives of people crossing, and paying more for better and more heavily taxed gasoline to eliminate lead pollution and increase the revenue for social programs, should we not accept as normal and civilized putting an absolute limit on the size of vehicles in order to preserve the integrity of the only biosphere we have? Such steps would be merely rational extensions of restrictions and limits that have already become inevitable.

Missed opportunities for higher U.S. automotive efficiency illustrate the benefits of the approach. Between 1973 and 1987 all cars sold in the country had to comply with CAFE standards, which doubled the fleet’s performance (halved the fuel consumption) to 27.5 mpg (8.6 L/100 km) (EIA 2005). The post-1985 slump in crude oil prices first stopped and then actually reversed this progress as vans, SUVs, and light trucks were exempted from the 27.5 mpg CAFE minimum. As these vehicles became more popular (by 2005 they accounted for nearly half of the passenger fleet), the nation’s average fuel rate fell to only 22 mpg (EIA 2005). But if the 1973-1987 CAFE rate of improvement had been maintained (no great challenge from the technical point of view) and applied to all passenger vehicles, the average fleet performance in 2005 would have been close to 50 mpg. This means that automotive hydrocarbon, NOx, and CO emissions as well as U.S. crude oil imports could have been cut by about two-thirds.
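
These mpg figures can be cross-checked with the standard conversion between U.S. fuel-economy and European fuel-consumption units (a minimal sketch):

```python
def mpg_to_l_per_100km(mpg):
    """Convert U.S. miles per gallon to litres per 100 km.
    1 mile = 1.609 km and 1 US gallon = 3.785 L give the constant 235.215."""
    return 235.215 / mpg

print(round(mpg_to_l_per_100km(27.5), 1))  # 8.6, matching the CAFE figure in the text
print(round(mpg_to_l_per_100km(22.0), 1))  # 10.7 for the actual 2005 fleet average
print(round(mpg_to_l_per_100km(50.0), 1))  # 4.7 for the counterfactual 2005 fleet
```

The conversion makes the counterfactual concrete: a 50 mpg fleet would burn well under half as much fuel per kilometer as the actual 2005 fleet did.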

The Next 50 Years

I wish I could close with a crisp recapitulation that would neatly rank-order the probabilities of all plausible catastrophes during the next 50 years even as it tamed all the disparate trends surveyed in this book with a clever taxonomy offering measures of their likelihood, intensity, likely duration, and eventual impact. The former task is at least partly possible. Because I favor quantitative appraisals, I have offered as many as I could in assessing the probabilities of globally significant natural catastrophes, including a new viral pandemic. Here is a brief summation, necessarily punctuated by many qualifying statements.

Fairly reliable judgments are possible regarding the major natural catastrophes. In order to leave a deep mark on global history they would have to be on scales not experienced during the historic era, and they would have to claim, almost instantly or within a few months, many millions of lives. Events of such a magnitude took place within the past million years, but none of them have probabilities higher than 0.1% during the next 50 years (fig. 5.6). But I must stress that the past record, while highly indicative, is basically one of singularities, events that are too few and mostly too far apart to allow for any meaningful statistical evaluation beyond the simplest calculations of highly uncertain return frequencies and approximate recurrence probabilities.
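
The recurrence probabilities behind such statements follow from the standard assumption of randomly (Poisson) arriving events, under which a mean return period T implies a probability of 1 − exp(−t/T) of at least one event in the next t years (a sketch with an illustrative return period):

```python
import math

def prob_within(years, return_period):
    """Probability of at least one event within `years`, assuming events
    arrive randomly (Poisson) with the given mean return period (years)."""
    return 1 - math.exp(-years / return_period)

# A hazard with a mean return period of 50,000 years has roughly a
# 0.1% chance of occurring during the next 50 years -- the ceiling
# cited in the text for the gravest natural catastrophes.
print(prob_within(50, 50_000))
```

For rare events (t much smaller than T) this reduces to the simple ratio t/T, which is why the text can speak of return frequencies and recurrence probabilities almost interchangeably.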

Still, we now know enough about NEOs to rank the danger from impacting asteroids as being the least likely discontinuity with the potential to change near-term history. Moreover, the overall risk may be revised substantially downward during the coming years and decades as our classification effort is completed and future trajectories are computed. The only reason we should not dismiss this worry entirely is that while we now know a great deal about the probability of a major impact, we cannot easily translate this knowledge into the number of immediate and delayed fatalities. There are simply too many factors to consider, and hence there is always a highly improbable possibility of a relatively minor impact’s causing disproportionately consequential damage.

The Indian Ocean tsunami of December 2004 was a tragic reminder that the potential for large-scale natural catastrophes claiming 10⁵-10⁶ lives is always with us. But catastrophes able to kill 10⁷ people—most likely a Toba-like mega-eruption that would affect directly (with lava flows and ejecta) not only a densely populated country but a large part of a hemisphere (with volcanic ash and possibly tsunamis) and that would cause multiyear global atmospheric cooling—appear to have a probability less than 0.01% during the next 50 years. By contrast, there is a high probability (1 OM higher than that of a new supereruption) of an influenza pandemic that would rival or surpass the greatest such event on the record. And a simple probabilistic assessment shows that the risk of a transformational mega-war is of the same order of magnitude (see fig. 5.6). But because the event of 9/11 has been the only terrorist attack that indisputably changed the course of global history, it is impossible to offer any meaningful probabilistic assessment of the frequency, intensity, and consequences of future attacks.

If we are to act as rational risk minimizers, the current preoccupation with terrorism should not blind us to what are historically two much more likely threats: another mega-war and another pandemic (possibly two) during the next 50 years. Early interventions to defuse emerging causes of potentially massive armed confrontations, and better preparedness for a major pandemic, would be the most rewarding risk-reducing steps. We can forget (relatively speaking) about near-Earth asteroids, supervolcanoes, and monster tsunamis, but we must not underestimate the chances of another mega-war, and we must remember that unpredictably mutating viruses will be always with us. That is why we should be constantly upgrading preparedness to deal with a new pandemic.


Fig. 5.6
Probabilities of fatal discontinuities during the first half of the twenty-first century, from an extremely low chance of catastrophic asteroid impacts to near-certainty of another virulent influenza epidemic. All curves are approximate but show correct orders of magnitude. Calculated from data presented in chapter 2.

In the past we have taken many steps to lessen the risk of a thermonuclear war (those that formed a part of the U.S.-Soviet détente, and the post-1991 mutual nuclear arms reductions), and we must continue with all possible efforts of this kind. We can be better prepared for another major terrorist attack. A combination of rational steps, such as better evaluation of available intelligence, more flexible armed response (keeping in mind the Mafia analogy), and the gradual social and political transformation of Muslim societies can clearly reduce the likelihood of terrorist attacks and moderate their impacts. But we will always confront major uncertainties. We cannot meaningfully quantify either the intensity or the frequency with which organized groups can sustain terrorist attacks whose global impacts put them in the category of 9/11 events (or worse).

At the same time, the historical realities are not entirely discouraging. While there is no chance of a terror-free future, the long-term record is not one of debilitating fear and despair or ubiquitous loss of life and destruction. Although we must remain agnostic about the eventual impact of Rapoport’s fourth wave of terror, good arguments can be made for seeing it as a shocking (painful, costly) but manageable risk among other risks. We can note that the general tendency is to exaggerate the likelihood of new infrequent spectacular threats, and that the group participation, preparation, and organization required for successful terrorist attacks could also work to prevent such attacks. At the same time, it is understandable why a responsible political leadership would tend to see these terrorist threats as an intolerable challenge to the perpetuation of modern open societies, and why it may overreact or choose (under pressure or from desperation) less than rational responses.

The necessity to live with profound uncertainties is a quintessential condition of our species. Bradbury got it right in Fahrenheit 451 (1953): “Ask no guarantees, ask for no security, there never was such an animal.” We, of course, keep asking. But we have no way of knowing if we are exaggerating or underestimating what is to come, be it from an inexplicable, accidental slide toward a mega-war, from the depth of the militant Islamic hatred, or from random mutations of viral genes. If we are grossly underestimating these risks, there is little we can do to make any fundamental difference. There is simply no way to prepare for a terrorist attack with hijacked nuclear-tipped missiles that could produce tens of millions of instant fatalities, or for a highly virulent pandemic that would produce more than 100 million deaths.

By contrast, there is a great deal of certainty regarding the duration and intensity of such fundamental trends as rapid aging of affluent populations, the need for a transition from fossil fuels, the rising economic importance of the world’s most populous modernizing economy, the continuing impoverishment of the biosphere’s diversity, or further atmospheric temperature increases. These trends continue to be probed by techniques that hope to forecast or model future events. But the greatest reward of the new quantitative models is the heuristic benefit their construction brings to the modelers, not their ability to capture anything approaching future complex realities. As for the standard applied forecasting models, they are nothing but a series of ephemeral failures: thousands of forecasts are constantly issued by numerous chief economists and think tanks, only to be superseded by equally pointless exercises days or weeks later.

Exploratory scenarios are usually more sensible because they do not pretend to quantify the unquantifiable, but their limited utility springs from their inherently limited scope. A major project will issue a handful of scenarios that may make for interesting reading but that do not add up to any helpful policymaking foundation. NIC’s (2004) mapping of the global future to 2020 is a very good example. It offers an excellent conclusion, foreseeing “a more pervasive sense of insecurity” based as much on perceptions as on physical threat, but it is extremely unlikely that any of its detailed scenarios will come to pass. We will not have either a Davos World (unlimited globalization) or a new caliphate with a world run by sharīa enforcers sitting in Baghdad or Kabul.

If trends cannot be easily quantified or captured by exploratory scenarios, they cannot be meaningfully ordered or ranked either. It would be largely a matter of guesswork, not an exercise based on the frequency of past events. Even rare discontinuities are more amenable to quantification than are the intensities and durations of gradually unfolding trends. Any meaningful taxonomy (or even just a simple ranking) is undercut by two incessant processes: the changing intensities of even the most embedded trends, and the shifting significance and concerns, which result not only from complex interactions among closely allied processes but also from often stunning impacts of previously underestimated or ignored trends.

An unexpected temporary upturn of U.S. economic fortunes during the 1990s is an example of the first category. It could not alter the fundamental trend of the country’s declining weight in the global economy, but (coinciding with an equally unexpected retreat of Japan’s economy and the socioeconomic unraveling of the post-Soviet states) it briefly interrupted and temporarily reversed that slide. Examples in the second category abound because the key drivers of major trends keep shifting. Radical Islam was not on anybody’s list of factors threatening the United States during the 1980s. Indeed, during that decade, some protagonists of radical Islam were coopted by Washington’s strategists, via the Saudi-Afghan connection, to fight the Soviet Empire, and even after the first World Trade Center attack in 1993, U.S. policymakers showed remarkable reluctance to tackle al-Qaeda.

But after 9/11 the threat of terrorism has asymmetrically infected all these policymakers’ major actions, be it the setting of interest rates or the response to Hurricane Katrina. As a result, a bearded, bespectacled, French-speaking Egyptian physician turned global terrorist is arguably as great a driver of the U.S. secular decline as is the excessive, deficit-raising consumption of the Wal-Mart-bound masses or the education system whose average graduates rank in problem-solving skills as poorly as they do in literacy and mathematics. How can we systematically compare or quantify these disparate, ever-shifting drivers?

Trends may thus seem obvious, but because they have a multitude of drivers whose importance shifts constantly, the resulting mix is beyond anyone’s grasp. So, when contemplating the great U.S. retreat, it is useful to recall that historians have identified scores of causes for the decline of the Roman Empire (Rollins 1983; Tainter 1989), and only a naive mind would rank German tribes ahead of debased currency, or imperial overstretch ahead of ostentatious consumption, in that typology of the causes of Rome’s fall. This is also the reason that even when large corporations and large nations are fully aware of unfolding trends, they are rarely able to shape them to their long-term advantage.

Thus any verdicts must be circumspect, guided by an awareness of profound uncertainties. Perhaps the most assured conclusion is to say that the survival of modern civilization would be most severely tested by a nuclear war, be it through a failure of national controls, a madly deliberate launch by an irrational leadership, or a takeover of nuclear weapons by a terrorist group. The unknown risk of this catastrophe may be very low, but no anthropogenic act comes close to causing that many instant casualties. Threats from weapons of mass disruption (including any truly dangerous deployment of pathogens) would rank much lower. Risks of globally significant natural geocatastrophes should command sustained scientific attention but, relatively speaking, should not be a matter of great public concern. The threat of a new viral pandemic with its potentially massive mortality dictates the need for scientific progress and for greater public preparedness. By contrast, I would reserve the prospects of Earth’s life being crippled during the next 50 years by entirely new pathogens or devouring nanobots for sci-fi accounts.

The relative diminishment of U.S. economic power has been under way since the end of World War II, and it is an entirely expected trend, not a cause for concern. What is more important is the rapidity of this decline and its eventual extent, and the concurrently unfolding fortunes of other principal contenders for the place on top. Russia and Japan will remain major economic powers for decades to come, but it is highly unlikely that either will claim the dominant spot because both of them are rapidly aging and significantly depopulating. Neither Russia’s vast resources nor Japan’s productive proficiency will be enough to make them contenders for global primacy.

China is now a foremost contender for this position, and many foreigners (and Chinese) see it as an inevitable winner. These are imprudent projections in the face of a multitude of long-term complications and serious checks that delimit China’s ascent: an extraordinarily aberrant gender ratio, serious environmental ills, the increasing inequality of economic rewards, and its weak soft-power appeal. India’s quest, largely a race to catch up with China, also faces many natural, social, and economic limits.

This leaves us to consider the role of the resurgent Islam, but the very heterogeneity of that faith and the major social, cultural, and economic divides among its adherents preclude the emergence of a unified Muslim power. Violence emanating from politicized Islam is a major (and frightening) challenge to the West, to a large extent because its common practice often takes the form of a sacrificial fusion of murder and suicide, a delusion that does not leave any room for compromise, negotiation, or rational arguments. Even so, in the long run this violence will be only one (and perhaps not even the most important) ingredient of a complex challenge posed by a combination of the continuing relatively fast growth of Muslim populations, the modernization deficit of their economies, and the radicalized nature of their internal affairs.

The fortunes of all countries could be greatly affected by one of the most worrisome global trends, the growing inequality of incomes and opportunities. This inequality is widening even as modernization has yet to bridge the enormous gap between the affluent Western world and rapidly advancing parts of Asia on one hand, and four score of insufficiently improving or backward-moving economies, above all those in Africa. This bridging will be made more difficult because of the world’s fundamental energy problem, the stark poverty of choice for delivering large quanta of energy harnessed with high power densities to highly urbanized societies living increasingly in mega-cities.

We do not currently have any resource or conversion technique that can supply the nearly 10 Gt of fossil carbon that we now extract from the Earth’s crust every year. Neither wind nor biomass energy will do as fundamental fossil fuel substitutes. Their low power densities (and wind’s stochasticity) are utterly mismatched with the needs of today’s industries and settlements, and are unfit for ever-higher base load needs of pervasively electrified societies. Nuclear energy is the best high-power-density choice, but its unpopularity and the persistent absence of permanent waste storage facilities militate against its grand resurrection in the West. Our continuing reliance on fossil fuels (and there is no shortage of coal and unconventional hydrocarbons) may be leading us to an unprecedented increase in tropospheric temperatures.
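The power-density mismatch can be made concrete with some rough arithmetic. The figures below are approximate values of the kind commonly cited in the power-density literature (assumptions for this sketch, not numbers taken from this chapter): a dense city consumes energy at tens of watts per square meter of its area, while biomass and wind harvest only a watt or two per square meter of the land they occupy.

```python
# Approximate power densities in W/m^2 (illustrative assumptions):
demand_density_city = 30.0  # a dense city's consumption, ~20-100 W/m^2

supply_density = {
    "biomass": 0.5,   # energy crops
    "wind": 1.5,      # spacing-limited wind farms
    "thermal plant": 1000.0,  # fossil or nuclear station site
}

# How many multiples of the city's own area each source would need:
for source, density in supply_density.items():
    ratio = demand_density_city / density
    print(f"{source}: ~{ratio:g}x the city's area")
```

Under these assumed values, biomass would require roughly 60 times and wind roughly 20 times the city’s own area, whereas a thermal plant needs only a small fraction of it: the spatial mismatch the paragraph above describes.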

Compared to this challenge, all other unwelcome trends may come to be seen as relatively unimportant. But it also may be that by 2050 we will find that global warming is a minor nuisance compared to something that we are as yet unable to identify even as a remote threat. Stepping back again, in 1955 few were concerned about the CFC-driven destruction of stratospheric ozone, which by 1985 panicked the planet. But even if global warming turns out to be a manageable challenge, its uneven regional and national consequences may weaken, change, or terminate a number of seemingly embedded economic and strategic trends.

This combination of catastrophic risks and unfolding worrisome trends appears insurmountable to many who think of civilization’s fortunes. There are, of course, irrepressible high-tech enthusiasts who keep on envisioning ubiquitous computing, maglev trains swishing past cities, or minifusion reactors distributed in basements (if not the complete takeover of humanity by sapient machines). They would do well to ponder the ubiquity of catastrophes throughout history, to think about viruses, bacteria, irrationality, hatred, the drive for power and dominance, and the different consequences of the global who’s-on-top game or what an unprecedented global environmental change could do to our species.

Doing that might lead one to conclude that despite many localized problems, the second half of the twentieth century was an exceptionally stable and an unusually benign period in global terms, and that the probabilities of more painful events will greatly increase during the next 50 years. During this time of uncertainty, trials, and doubts, when feelings of loss, despondency, and despair affect not only intellectuals but diffuse through a society, it is natural to turn to historical precedents and to parade the examples of collapsed civilizations and lost empires. The unsubtle message is, we are inevitably next in line.

The beginning of a new millennium has offered many of these prophecies (Lovelock 2006; Diamond 2004; Kunstler 2005; Rees 2003). One is Ferguson’s (2005; 2006) extended comparison of the West, particularly the United States, with the Roman Empire, a piece in the venerable tradition of Spengler (1919). Ferguson’s account lists a number of items that I introduced in my description of the U.S. retreat, but I do not share his conclusion of an early transformation of Washington’s Capitol into a picturesque ruin. I see things differently, and always less assuredly. The success of our species makes it clear that human beings, unlike all other organisms, have evolved not to adapt to specific conditions and tasks but to cope with change (Potts 2001).

This ability makes us uniquely fit to cope with assorted crises and to transform many events from potentially crippling milestones to resolved challenges. This strength is inevitably also our weakness because it often leads us to overestimate our capacities. In the West, our wealth, the extent of our scientific knowledge, and the major areas of our lives where we have successfully asserted control over the environment mislead us into believing that we are more in charge of history than we can ever be. But this does not mean that our future is catastrophically preordained. That is why I have been deliberately agnostic about the civilization’s fortunes in this survey.

My intent was to identify, illuminate, and probe what I believe to be the key risks that global civilization faces in the coming decades, and to explain and assess some key trends that may contribute to its fate. My strong preference was to do so without engaging in counterproductive rankings, classifications, verdicts, and predictions. And my determination was to keep making clear the complexities, contradictions, and uncertainties of our understanding. The depth and extent of our ignorance make it imperative to chant a mantra that sounds discouraging but that is true and honest. There is so much we do not know, and pretending otherwise is not going to make our choices clearer or easier.

None of us knows which threats and concerns will soon be forgotten and which will become tragic realities. That is why we repeatedly spend enormous resources in the pursuit of uncertain (even dubious) causes and are repeatedly unprepared for real threats or unexpected events. The best example of this reality is the trillions spent on thousands of nuclear warheads, with not a single one of them being of any use as a deterrent or defense against sacrificial terrorism. But the admirable human capacity to adapt and change offers a great deal of encouragement and justifies a great deal of skepticism about an imminent end to modern civilization.

Fig. 5.7
Basilica of Santa Sabina, Aventine Hill, Rome. A luminous parable of an end as a new beginning: a dozen years after Alaric’s sack of Rome (410 C.E.), Petrus of Illyria initiated construction of this basilica (422 C.E.). Top, Santa Sabina’s main nave with a round apse; bottom, the basilica’s exterior. Photos by V. Smil.

As for endings, being an admirer of Kafka and Borges, I believe in the power of parables, and here is an apposite one. My favorite place in Rome is the Aventine Hill, high above the Tiber’s left bank. At its top, next to a rather unkempt orange garden, stands the basilica of Santa Sabina (fig. 5.7). Its unadorned brick exterior looks mundane, but once you approach its magnificent carved cypress doors, the perception changes. And then you enter one of the most perfect spaces ever put under a roof. The plan is simple, a wide nave and two aisles separated by rows of large fluted columns; the walls are bare, but the building is filled with light, bright but not dazzling, penetrating yet ethereal.

Construction of the basilica, built at the request of Petrus of Illyria, began in 422, just a dozen years after the Goths under Alaric sacked Rome, destroying much of its imperial and Christian splendor. Not far from the Aventine, on the Celian Hill, is Santo Stefano Rotondo, another splendid building with a central rotunda surrounded by two concentric circles of plain columns. It was completed in 483, seven years after the date that is generally considered the final end of the Western Roman Empire. These two magnificent structures remind us, gracefully and forcefully, of the continuity of history, of the fact that such terms as demise or collapse or end are often merely categories of our making, and that catastrophes and endings are also opportunities and beginnings.
