
2. Fatal Discontinuities

Published on Apr 10, 2020

Mors ultima linea rerum est.
(Death is everything’s final limit.)

—Quintus Horatius Flaccus (Horace)

Bostrom (2002) classified existential risks—those that could annihilate intelligent life or permanently or drastically curtail its potential, in contrast to such “endurable” risks as moderate global warming or economic recessions—into bangs (extinctions due to sudden disasters), crunches (events that thwart future developments), shrieks (events resulting in very limited advances), and whimpers (changes that lead to the eventual demise of humanity). I divide them, less dramatically, into (1) known catastrophic risks, whose probabilities can be assessed owing to their recurrence; (2) plausible catastrophic risks, which have never taken place and whose probabilities of occurrence are thus much more difficult to quantify satisfactorily; and (3) entirely speculative risks, which may or may not materialize.

Known catastrophic risks encompass discontinuities whose probabilities of recurrence can be meaningfully appraised because of reasonably well-understood natural realities and historical precedents. Their probabilities of near- or long-term recurrence can be quantified with a degree of accuracy that is useful for assessing relative risks and allocating resources for preventive actions or eventual mitigation. This category includes natural catastrophes such as the Earth’s encounters with extraterrestrial bodies, volcanic mega-eruptions, and virulent pandemics as well as transformational wars and terrorist attacks.

Although plausible catastrophic risks have never yet occurred, their potentially enormous impacts require that they not be excluded from any comprehensive assessment of future fatal discontinuities. Some of these catastrophes have been widely anticipated for decades. The fear of accidental nuclear war has been with us since November 1951, when the Soviets deployed their first deliverable fission bomb (RDS-1 Tatyana), although a more appropriate dating might be 1955, when both superpowers acquired their first nuclear-tipped long-range missiles, Matador and R-5M Pobeda (Johnston 2005). Unlike the strategic bombers (the first jet-powered plane, B-47, flew in 1947), the launched ballistic missiles could not be recalled, and there was no way to intercept them during the decades of the Cold War. Despite enormous expenditures initiated during the first term of the Reagan presidency, there is still no reliable antimissile defense in place.

Other events in this category have been matters of occasional speculation (e.g., a pandemic caused by a previously unknown pathogen), but overall the likelihood of occurrence and extent of impact elude any meaningful quantification.

Entirely speculative risks include both the fanciful—for instance, Joy’s (2000) vision of new omnivorous “bacteria” capable of reducing the biosphere to dust in a matter of days—and the completely unknown. Clearly, no one can give examples of the latter, but the likelihood of such unknowable surprises increases as the time span under consideration lengthens. Still, it is worthwhile to comment on key speculative unquantifiable risks and assign them to two basic categories of more and less worrisome events. This division can be based on the best relative ranking of (guess)timated probabilities, the most likely overall impact of such developments, or both.

Many critics would argue that discontinuities whose very occurrence remains speculative belong in the realm of science fiction. The rationale for addressing these matters here is captured in Tom Wolfe’s (1968) description of U.S. business leaders’ reaction to the quasi-prophetic statements of Marshall McLuhan: What if he is right?

Several of these speculative concerns were popularized by Joy’s (2000) paper about the dangers for humanity of three powerful twenty-first-century techniques: robotics, genetic engineering, and nanotechnology.

The robotics part of Joy’s publication was largely a derivative effort based on the work of two artificial intelligence enthusiasts, Hans Moravec (1999) and Ray Kurzweil (1999), who maintain that robotic intelligence will soon rival human capability (fig. 2.1). Kurzweil (2005) placed the arrival of “singularity”—when computer power will reach 10²³ floating-point operations per second, vastly surpassing the power and intelligence of the human brain—quite precisely in 2045.

We have been promised superintelligent, omnipotent robots for several generations (Čapek 1921; Hatfield 1928). There are no such machines today; even the “intelligent” software installed in IBM’s Deep Blue II in order to play chess against the world champion Garry Kasparov in 1997 did not show the coming triumph of machines but merely that “world-class chess-playing can be done in ways completely alien to the way in which human grandmasters do it” (Casti 2004, 680). And while computers have been used for many years to write software and to assemble other computers and machines, such deployments do not indicate any imminent self-reproductive capability. All those processes require human actions to initiate them, raw materials to build the hardware, and above all, energy to run them. I find it hard to visualize how those machines would (particularly in less than a generation) launch, integrate, and sustain an entirely independent exploration, extraction, conversion, and delivery of the requisite energies.

Fig. 2.1
Faster-than-exponential evolution of computing power since 1900 (graphed here as millions of instructions per second per thousand 1997 dollars) led Hans Moravec to conclude that humanlike robots should be possible before the middle of the twenty-first century. Adapted from Moravec (1999).

Joy’s (2000) most sensational claim concerned the aforementioned omnivorous “bacteria” that could swiftly reduce the entire biosphere to dust. This claim might have been modified had Joy acknowledged some fundamental ecological realities and considered the necessary resource and interspecific competition checks on such a runaway scenario. Microorganisms have been around for some 3.5 billion years, and evolutionary biologists have difficulty envisaging a new one that could do away almost instantaneously with all other organisms that have survived, adapted, and prospered against such cosmic odds.

If the biosphere were prone to rapid takeover by a single microorganism, it could not have become differentiated into millions of species, thousands of them interdependent within complex food webs of rich ecosystems and all of them connected through global biogeochemical cycles. Symbiosis rather than interspecific competition has been the most fundamental driver of life’s evolution and survival (Sapp 1994; Margulis 1998; Smil 2002).

There are even more speculative, ostensibly science-based suggestions regarding civilization’s demise, including the idea that we are living in a simulation of a past human society run by a superintelligent entity that can choose to shut it down at any time (Bostrom 2002). Clearly, the mind running this exercise has been a very patient one because the simulation has been going on for nearly 4 billion years (unless one dismisses the evidence of the Earth’s evolution and our emergence as one of its results).

In any case, there is little we can do about the frightening (or liberating: no human worries anymore) aspects of such scenarios. If the emergence of superior machines or all-devouring gooey nanospecies is only a matter of time, then all we can do is wait passively to be eliminated. If such developments are possible, we have no rational way to assess the risk. Is there a 75% or a 0.75% chance of self-replicating robots’ taking over the Earth by 2025 or nanobots’ being in charge by 2050? And if such “threats” are nothing more than pretentious, upscale science fiction, then they have a massive amount of lower-grade company in print, film, and television and are good for little else than producing an intellectual frisson.

In this chapter, I look in some detail only at those natural catastrophes that take place rapidly, in a matter of minutes to months. Global climate change, a natural event that has commonly been posited as the most worrisome environmental crisis, can take place rapidly only when measured on an evolutionary time scale. Consequently, its assessment belongs to chapter 4, which deals with unfolding environmental trends.

And I consider only those catastrophes that do not have a vanishingly low probability of occurring during the next 50 years, that is, those that recur at intervals no longer than 10⁵-10⁶ years and that could change the course of global history and perhaps even eliminate modern civilization. This is why I do not give closer attention to such very rare events as the Earth’s exposure to supernova explosions or periods of enormous lava flows such as those that created the Deccan and Siberian Traps.

Supernovae are rare, taking place only about once every 100 years in a spiral galaxy like the Milky Way (Wheeler 2000). The solar system is within 10 parsecs (3 × 10¹⁷ m) of a supernova only once every 2 billion years (2 Ga), and the explosion (typically yielding 10 billion times more energy than the Sun) would flood the top of the atmosphere with X-ray and very short UV flux about 10,000 times higher than the incoming solar radiation. The Earth would receive in just a few hours a dose of ionizing radiation of 500 roentgens that would be fatal to most unprotected vertebrates. Their 50% effective lethal dose is mostly 200-700 roentgens, but many would survive given the differences in exposure and specific resistance. Invertebrates and microbes would remain largely unaffected. Terry and Tucker (1968) calculated that the Earth has received at least this dose ten times since the Precambrian, or roughly once every 50 million years (50 Ma), an interval that yields a negligibly low probability of occurrence during the next 50 years.
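Converted to the 50-year horizon used throughout this chapter, that 50 Ma recurrence interval gives a probability on the order of 10⁻⁶. A minimal sketch, treating the events as a memoryless (Poisson) process—a simplifying assumption of mine, not of Terry and Tucker:

```python
import math

# Recurrence interval for the ~500-roentgen supernova dose (Terry and Tucker 1968)
interval_years = 50e6
horizon_years = 50          # planning horizon used throughout the chapter

rate = 1 / interval_years   # mean events per year
p_next_50y = 1 - math.exp(-rate * horizon_years)
print(f"P(dose within 50 years) ~ {p_next_50y:.2e}")   # ~1e-6, negligibly low
```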

Similarly, the periods of massive and prolonged effusions of basaltic lavas accumulating in thick layers are uncommon even when measured on a geological time scale. The oldest identified episode of this kind (508-505 Ma ago) produced more than 190,000 km3 of Australia’s Kalkarindji basalts and was the most likely cause of the first major animal extinction (Glass and Phillips 2006). The past 250 Ma have seen only eight giant plumes of magma penetrating the Earth’s crust and forming massive basalt deposits. India’s Deccan Traps, containing more than 500,000 km3 of basalt, were formed over a period of 5 Ma beginning 65 Ma ago, and these effusions, rather than an impact of an extraterrestrial body, may have killed the dinosaurs or at least greatly contributed to their demise (fig. 2.2). And the Siberian Traps, covering some 2.5 million km2 with perhaps as much as 3 million km3 of lavas, were formed about 250 Ma ago (Renne and Basu 1991).

Natural Catastrophes

Natural catastrophes range from relatively common events such as cyclones, floods, and landslides to less frequent violent releases of energies associated with geotectonic processes (earthquakes and volcanic eruptions, both capable of generating tsunamis) to uncommon encounters of the Earth with large extraterrestrial bodies. Older data on the frequency and death tolls of natural catastrophes are incomplete, but recent statistics capture all major events and have fairly accurate fatality counts. Annual global compilations by the Swiss Reinsurance Company (Swiss Re 2006a) show that floods and storms are by far the most frequent events; during the first years of the twenty-first century they accounted for 70%-75% of all natural catastrophes. These are followed by earthquakes, tsunamis, and the effects of extreme temperatures, including droughts, fires, heat waves, blizzards, and frost. However, in terms of worldwide victims, earthquakes were the worst natural catastrophes between 1970 and 2005, when they killed nearly 900,000 people, compared to about 550,000 deaths from floods and cyclones (fig. 2.3).

Fig. 2.2
Exposed layers of Deccan flood basalt, more than 1 km thick, at Mahabaleshwar, Maharashtra, India. Photo courtesy of Hetu Sheth, Indian Institute of Technology, Mumbai.

These compilations also show the expected highly skewed frequency distribution of fatalities as a single event dominates the annual death toll. Most of the time this event is a major earthquake (including an earthquake-generated tsunami), and this dominance has been particularly pronounced during the recent past. In 2003, Iran’s Bam earthquake was responsible for 80% of that year’s fatalities caused by all natural disasters; in 2004 the Sumatra-Andaman earthquake and tsunami accounted for 95% of the total; and in 2005 the Kashmir earthquake accounted for nearly 85% of the total (Swiss Re 2004; 2005; 2006a). Relatively frequent events with localized impacts often cause tens or hundreds, and less commonly thousands, of fatalities, but the most damaging catastrophes claim hundreds of thousands, even millions, of lives. The most disastrous cyclone of the twentieth century, Bangladesh’s Bhola on November 13, 1970, killed at least 300,000 people; the most deadly earthquake, in northern China’s Shaanxi on January 23, 1556, claimed 830,000 lives; and the Huanghe flood of 1931 claimed at least 850,000.

Fig. 2.3
Death tolls from major natural disasters (at least 4,000 deaths per event), 1970-2005. Plotted from data in Swiss Re (2006b).

But the most deadly natural disaster of the first years of the twenty-first century, the Indian Ocean earthquake and tsunami on December 26, 2004 (Lay et al. 2005; Titov et al. 2005), illustrated that even these massive natural catastrophes do not alter the course of world history. They generate worldwide headlines, elicit humanitarian aid, and have long-term effects on the affected nations, but they are not among epoch-making events on the global scale. Indeed, one of the half dozen similarly devastating natural catastrophes that took place during the latter half of the twentieth century remained an entirely internal affair because xenophobic China did not ask for international aid following the Tangshan earthquake of July 28, 1976, which killed (officially) 242,219 people in that coal-mining city and surroundings but whose toll was estimated as high as 655,000 (Huixian et al. 2002; Y. Chen et al. 1988).

In contrast to frequent natural disasters that kill as many as 10⁵-10⁶ people and that have severe local and regional economic consequences, there are only three kinds of sudden, unpredictable, but recurrent natural events whose global, hemispheric, or large-scale regional impacts could have a profound influence on the course of world history. They are the Earth’s collision with nearby extraterrestrial objects that are large enough to cause death and economic damage comparable to explosions of strategic nuclear weapons; massive volcanic eruptions (with or without major tsunamis); and (possibly) voluminous tsunami-generating collapses of parts of volcanoes sliding into the ocean.

The probability of any of these events’ taking place during the first half of the twenty-first century is very low (well below 1%), but this comforting conclusion must be counterbalanced by the fact that if any one of them were to take place, it would be an event without counterpart in recorded history. The near-instant death toll would involve 10⁶-10⁹ people, 1-4 orders of magnitude (OM) greater than for frequent localized natural catastrophes. Moreover, if these events were to affect the densely populated core areas of the world’s largest economies, their global impact would be considerable even if the spatial extent of destruction amounted to only a tiny fraction of the Earth’s surface.

Encounters with Extraterrestrial Objects

The Earth constantly passes through a widely dispersed (but in aggregate quite massive) amount of universal debris (McSween 1999). Common sizes of these meteoroids range from microscopic particles to bodies with diameters <10 m. As a result, the planet is constantly showered with microscopic dust, and even the bits about 1 mm in diameter, large enough to leave behind a light path as they self-destruct in the atmosphere (meteors), arrive about every 30 s. This constant infall (about 5 t per day) poses virtually no risk to the evolution of life or to the functioning of modern civilization because these objects disintegrate during their passage through the atmosphere, and only dust or small fragments reach the ground. But the planet’s orbit is also repeatedly crossed by much larger objects, above all by stony asteroids with diameters >10 m and as large as tens of kilometers across (fig. 2.4), and by comets.

Fig. 2.4
Closeups of large asteroids. The Earth’s collision with asteroids of this size would almost certainly destroy civilization. Left, composite image of Ida (~52 km long); right, Gaspra (illuminated portion ~18 km long). Galileo spacecraft images (1993 and 1991). From NASA (2006).

The risk of encounters with extraterrestrial bodies was first recognized during the 1940s. It began to receive greater attention during the 1980s, but until the early 1990s no systematic effort was made to comprehensively identify such objects, assess the frequencies of their encounters with the Earth, and devise possible defensive measures. Known Earth-crossing asteroids numbered 236 at the beginning of 1992 (compared to 20 in 1900), the year in which NASA proposed the Spaceguard Survey (Morrison 1992), whose goal is to identify 90% of all near-Earth asteroids (NEAs) by the year 2008. NASA-funded and -coordinated monitoring began in 1995, and ten years later the U.S. House of Representatives approved the Near-Earth Object Survey Act, which directs NASA to expand its detection and tracking program. These actions have been accompanied by publications assessing the threat (Chapman and Morrison 1994; Gehrels 1994; J. S. Lewis 1995; 2000; Atkinson, Tickell, and Williams 2000).

The progress in discovering new near-Earth objects (NEOs) has been rapid (NASA 2007). By the end of 1995 the total number of known objects was 386; by the end of 2000, 1,254; and by June 2007, more than 4,100, of which nearly 880 were bodies with diameters ≥1 km (fig. 2.5). As the findings accumulate, there has been an expected decline in annual discoveries of NEAs with diameters >1 km, and the search has been asymptotically approaching the total number of such NEAs. Consequently, we are now much better able to assess the size-dependent impact frequencies and to quantify the probabilities of encounters whose consequences range from local damage through regional devastation to a global catastrophe.

Fig. 2.5
Cumulative discoveries of near-Earth asteroids, 1980-2007. From NASA (2007).

There are perhaps as many as 10⁹ asteroids orbiting the sun in a broad and constantly replenished belt between Mars and Jupiter as well as a similar number of comets moving in more distant orbits within the Öpik-Oort cloud beyond Pluto. Gravitational attraction of nearby planets constantly displaces a small portion of these bodies (remnant debris from the time of the solar system’s formation 4.6 Ga ago) into elliptical orbits that move them toward the inner solar system and into the vicinity of the Earth. Several million near-Earth objects cross the Earth’s orbit, and at least 1,000 of them have diameters ≥1 km. Because of their high impact velocities, even small NEOs have kinetic energy equivalent to that of a small nuclear bomb; larger bodies can bring regional devastation, and the largest can cause a global catastrophe.

Fig. 2.6
Oblique aerial view of Meteor Crater in Arizona. USGS photo by David J. Roddy.

Craters provide the most obvious evidence of major past impacts (fig. 2.6) (Grieve 1987; Pilkington and Grieve 1992). More than 150 of these structures have been identified so far, but it must be kept in mind that most impacts have been lost in the ocean, and the evidence of most of the older terrestrial impacts has been erased by tectonic and geomorphic processes. The largest known crater, the now buried Chicxulub structure in Yucatan with diameter 300 km (Sharpton et al. 1993), was created 65 Ma ago by an asteroid whose impact has been credited with the great extinction at the Cretaceous-Tertiary (K-T) boundary (Alvarez et al. 1980). The most recent impact of an NEO with diameter >1 km took place less than 1 million years ago in Kazakhstan (NRCanada 2007). Asteroids and short-period comets make up about 90% of NEOs; the remaining risk is posed by intermediate and long-period comets that cross the planet’s orbit only once in several decades. The frequency of NEO impacts declines exponentially with the increasing size of the impacting objects, and their kinetic energy determines the extent of damage (fig. 2.7).

Fig. 2.7
Size, impact frequency, and impact energy of near-Earth asteroids. All four axes are logarithmic; the band indicates the range of uncertainty regarding the numbers and impact intervals of objects with diameter <1 km. Based on NASA (2003), Bland and Artemieva (2003), and Chapman (2004).

Roughly once a year the Earth encounters an extraterrestrial body whose size is 5 m across and whose air burst releases nearly 21 TJ, equivalent to 5 kt TNT (explosive power of 1 t TNT is equal to 4.18 GJ). This makes it about one-third as powerful as the Hiroshima bomb; there is no definite number for the explosive yield of that bomb, but the most authoritative source (Malik 1985) puts it at 15 (±3) kt TNT. Only if this body’s center of disintegration were right above the U.S. Capitol during the President’s State of the Union speech would the effect be felt globally. But the probability of such an encounter is vanishingly small, at least 8 OM smaller than that of a similar object’s disintegrating at any time above any densely populated area.
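The chapter’s own conversion factor (1 t TNT = 4.18 GJ) makes these figures easy to check. A short sketch confirming the ~21 TJ and one-third-of-Hiroshima statements:

```python
# Consistency check of the quoted energy figures, using the chapter's own
# conversion factor of 4.18 GJ per tonne of TNT.
GJ_PER_T_TNT = 4.18e9            # joules per tonne of TNT

airburst_kt = 5                  # air burst of the annual ~5 m object, in kt TNT
airburst_joules = airburst_kt * 1e3 * GJ_PER_T_TNT
print(f"5 kt TNT = {airburst_joules / 1e12:.1f} TJ")    # 20.9 TJ, the 'nearly 21 TJ'

hiroshima_kt = 15                # Malik (1985): 15 (+/-3) kt TNT
print(f"fraction of Hiroshima: {airburst_kt / hiroshima_kt:.2f}")  # 0.33, about one-third
```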

Stony objects with diameters 10 m are intercepted by the Earth’s atmosphere every decade, and their entry (at speeds ~20 km/s) discharges energy equivalent to about 100 kt TNT, roughly seven times the energy released by the Hiroshima bomb. As these bolides disintegrate during atmospheric deceleration, a fireball and a shock wave are the only phenomena felt on the ground within a radius of 10² km around their entry point. Brown et al. (2002) used data from a satellite designed to detect nuclear explosions in order to identify light records of bolide detonations (objects 1-10 m in size) in the atmosphere. From these observations they concluded that on average the Earth is struck by an object with diameter 50 m (equivalent to 10 Mt TNT) every 1,000 years.

The probability of such an impact is thus about 5% (uncertainty band of about 3%-12%) during the next 50 years, and its effects would be similar to those caused by the famous Tunguska meteor of June 30, 1908. Atmospheric disintegration of that stony object released energy equivalent to 12-20 Mt TNT, produced a shock wave that flattened trees over an area of about 2,150 km² but killed nobody in that unpopulated region of central Siberia (Dolgov 1984). If a similar object were to disintegrate over a densely populated urban area, it could cause great damage. Its explosion about 15 km above the ground would release energy equivalent to at least 800 Hiroshima bombs and result in 10⁵ casualties and $10¹¹ of material damage. But the chances of such an event are roughly 2 OM smaller than the probability of hitting an unpopulated or thinly inhabited region because densely populated areas cover only about 1% of the planet’s surface.
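The ~5% figure follows directly from the Brown et al. (2002) recurrence interval. A minimal sketch, treating impacts as a memoryless (Poisson) process—my simplifying assumption:

```python
import math

# Brown et al. (2002): a ~50 m, Tunguska-class object strikes about every 1,000 years
recurrence_years = 1000
horizon_years = 50

p_50y = 1 - math.exp(-horizon_years / recurrence_years)
print(f"P(Tunguska-class impact in 50 years) ~ {p_50y:.1%}")   # 4.9%, the ~5% in the text
```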

As was clearly demonstrated by the contrast of casualties in Hiroshima and Nagasaki, the actual destruction would depend on the physical configuration of the affected area. Hiroshima, with a bowl-like setting that acted as a natural concentrator of the blast, had about 40% more fatalities and more destruction from a 15-kt blast than did Nagasaki from a 21-kt explosion (CCM 1981). Another complicating factor is that a Tunguska-like blast may not be a point-source event (similar to a nuclear bomb) but rather a plume-forming event (similar to a line of explosive charges) and hence could be caused by much less powerful objects (NASA 2003).

Asteroids with diameters ≥100 m reach the atmosphere once every 2,000-3,000 years, and their energy (equivalent to >60 Mt TNT) is as large as the yield of the largest tested thermonuclear devices. Hills and Goda (1993) calculated that stony objects with diameters up to ~150 m will release most of their energy in the atmosphere and will not hit the surface and create impact craters (however, heavier metallic objects of that diameter might penetrate). Stony objects with diameter >150 m hit the Earth once every 5,000 years, and their terrestrial impacts create only local effects, small craters with adjacent areas covered by ejecta. Taking 220 m as the largest diameter of a stony body that produces only an air blast, Bland and Artemieva (2003) estimated that bodies with a larger diameter would hit the Earth once in 170,000 years. There is broad consensus that the threshold size for an impact producing a global effect is a body with diameter at least 1 km and possibly closer to 2 km.

Toon et al. (1997) concluded that only bodies with kinetic energies equivalent to at least 100 Gt TNT (diameters >1.8 km) would cause global damage beyond the historic experience, and objects with diameters between 850 m and 1.4 km (energy equivalents of 10-100 Gt TNT) would cause globally significant atmospheric water vapor injection and ozone loss but would not inject enough submicrometer particulates into the stratosphere to have major, longer-term climatic effects. A 1-km body (density 2.5-3.3 g/cm³, velocity 20-22 km/s) colliding with the Earth would release energy equivalent to about 62-105 Gt TNT, almost 1 OM more than the energy that would have been expended by an all-out thermonuclear war between the two superpowers in 1980 (Sakharov 1983). A 3-km asteroid would liberate energy equivalent to about 2 Tt TNT, possibly enough to terminate modern civilization regardless of where the asteroid hit (fig. 2.8).
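The 62-105 Gt TNT range can be reproduced from the quoted density and velocity bounds. The sketch below assumes a spherical impactor and uses the chapter’s 4.18 GJ/t conversion; the exact endpoints depend on rounding:

```python
import math

GJ_PER_T_TNT = 4.18e9        # the chapter's TNT conversion, J per tonne

def impact_energy_gt_tnt(diameter_m, density_kg_m3, velocity_m_s):
    """Kinetic energy of a spherical impactor, expressed in Gt TNT equivalent."""
    mass = density_kg_m3 * math.pi / 6 * diameter_m**3   # sphere volume x density
    energy_j = 0.5 * mass * velocity_m_s**2
    return energy_j / (1e9 * GJ_PER_T_TNT)               # 1 Gt TNT = 1e9 t TNT

# Density 2.5-3.3 g/cm3 and velocity 20-22 km/s, as quoted in the text
low = impact_energy_gt_tnt(1000, 2500, 20e3)
high = impact_energy_gt_tnt(1000, 3300, 22e3)
print(f"1 km body: {low:.0f}-{high:.0f} Gt TNT")   # ~63-100 Gt, close to the quoted 62-105 Gt
```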

The consequences of a collision with a 1-km body would depend greatly on the impact site. Odds are roughly 7 : 3 that the object would hit the ocean and damage the land indirectly by generating tsunamis, but a terrestrial impact would create a crater with diameter 10-15 times the object’s size and pose an unprecedented threat to the survival of civilization. Such a collision would vaporize and fragment both the projectile and the impacted area, and enormous masses of dust would reach the stratosphere. While the larger dust fractions would rapidly settle, submicrometer-sized particles would remain in the atmosphere for weeks to months.

Simulations using the global circulation model show that ocean heat storage would prevent a global freeze even if the impact were equivalent to the K-T event (with kinetic energy perhaps as high as 1 Pt TNT) but that surface land temperatures would drop by more than 10°C and still be some 6°C lower a year later (Covey et al. 1994; Toon et al. 1997). In addition, hot ejecta would produce significant amounts of nitrogen oxides, whose presence in the stratosphere would degrade (and in extreme cases, largely destroy) the ozone shield that protects the Earth against UV radiation. A 1-km object would have much less effect because it would not generate enough dust to cause temporary planetwide darkness and shut down photosynthesis.

Fig. 2.8
Expected fatalities from impacts of near-Earth objects. From Morrison (1992).

At least 10 Gt of submicrometer-sized dust would be required to make the minimum amount of light unavailable for photosynthesis (Toon et al. 1997), but using the analogy of a ground-level nuclear explosion—which produces about 25 t of submicrometer-sized dust per kt of yield (Turco et al. 1983)—means that a 1-km body would produce only about 1.5 Gt of fine dust, 4 OM less than a K-T-sized object (25 Tt). Moreover, Pope (2002) questioned the assumptions regarding the fine dust fraction in the ejecta produced by the K-T impact. Pope’s calculations, coupled with observations of the deposited coarse fraction, indicated that a minor share was laid down as submicrometer-sized dust and that little debris diffused to high southern latitudes. These conclusions invalidate the original attribution of K-T extinction to the shutdown of photosynthesis by submicrometer-sized dust. Pope calculated that the impact released only 0.1% (and perhaps much less) of the total amount as fine dust (but his conclusions were questioned as unrealistic).
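Scaling the Turco et al. (1983) figure of ~25 t of fine dust per kt of yield reproduces both numbers. The sketch below assumes ~60 Gt TNT for the 1-km body and 1 Pt TNT for the K-T impact, consistent with the figures quoted above:

```python
# Fine-dust yields scaled from the Turco et al. (1983) nuclear analogy:
# ~25 t of submicrometer-sized dust per kt of TNT-equivalent yield.
DUST_T_PER_KT = 25

one_km_kt = 60e6     # ~60 Gt TNT for a 1 km body, expressed in kt
kt_event_kt = 1e12   # ~1 Pt TNT for the K-T impact, in kt (1 Pt = 1e12 kt)

one_km_dust_gt = one_km_kt * DUST_T_PER_KT / 1e9     # gigatonnes
kt_dust_tt = kt_event_kt * DUST_T_PER_KT / 1e12      # teratonnes
print(f"1 km body: ~{one_km_dust_gt:.1f} Gt fine dust")   # ~1.5 Gt, well below the 10 Gt threshold
print(f"K-T impact: ~{kt_dust_tt:.0f} Tt fine dust")      # 25 Tt, roughly 4 OM more
```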

In any case, it is impossible to quantify satisfactorily the actual effect because fine dust would not be the only climate-modifying factor. Soot from massive fires ignited by hot ejecta and sulfate aerosols liberated from impacted rocks could each have as much cooling effect on the atmosphere as the fine dust. However, lingering aerosols would also increase the intercept of the outgoing terrestrial radiation and contribute to tropospheric warming. A rapid reversal of ground temperatures could follow once the debris settled, and water vapor and CO2 injected into the stratosphere (from impacted carbonate rocks) would greatly enhance the natural greenhouse gas effect. With positive feedbacks (higher temperatures enhancing evaporation as well as plant respiration and the release of CO2 from the ocean and soils), this bout of global warming could persist for decades.

The only defensible conclusion is that the impact of a 1-km object would most likely not have consequences resembling the aftermath of a thermonuclear war: a drop in ground temperature severe enough to produce a nuclear winter and a temporary cessation of all photosynthesis (Turco et al. 1991). The overall effect on photosynthesis, biodiversity, agricultural production, and human survival would depend critically on the mass of ejecta and their atmospheric persistence. Specifics are impossible to enumerate, but extensive forest and grassland destruction by fires, a temporary but substantial reduction of precipitation due to the disrupted global water cycle, sharp declines in food production, and extensive interference in industrial, commercial, and transport activities are all easy to imagine. The impact would not bring an abrupt end to modern civilization, but it could be an enormously costly setback.

Earlier estimates put the number of NEOs with diameter >1 km at about 2,000, but Rabinowitz et al. (2000) used improved detection techniques to conclude that there were only about 1,000 such objects, and Stuart (2001) put the total number of kilometer-sized NEAs at just over 1,200 (he also found them less likely to collide with the Earth than previously assumed). If 1,100 were the actual total, then 80% of them had been discovered by June 2007. Certainly the most notable outcome of this effort is the good news that the likelihood of near-term impacts has been decreasing. On a 10-point Torino scale, measuring the severity of the collision threat (Binzel 2000), 0 indicates no hazard (white zone) with effectively no likelihood of collision, and 1 (normal, green zone) indicates an object whose close path near the Earth poses no unusual danger and which will very likely be reassigned to level 0 after additional observations. Levels 3 and 4 indicate close encounters with 1% or greater chance of collision capable of localized or regional destruction; and significant threats of close encounters causing a global catastrophe begin only with level 6.

As of 2007, only two objects, 2004 VD17 and 2004 MN4, were rated 2 and all other NEAs scored 0 on the Torino scale during the twenty-first century. The first of these objects is about 580 m across; the other is 320 m across, and it became the subject of short-lived concern when initial calculations indicated its collision with the Earth on April 13, 2029. That is not going to happen, but there is still a distant possibility of an encounter with MN4 between 2036 and 2056, and VD17 may come close by 2102 (Yeomans, Chesley, and Chodas 2004). By far the highest known probability of an NEO’s colliding with the Earth is nearly a millennium away, on March 16, 2880. Analysis by Giorgini et al. (2002) suggests a very close approach by asteroid 29075, an asymmetrical spheroid with mean diameter 1.1 km that was discovered in 1950 (as 1950 DA), lost from view after 17 days, and rediscovered in 2000 (fig. 2.9). The impact probability was put as high as 0.33%, but because of the unknown direction of the asteroid’s spin pole, the range of the actual risk may be closer to 0.

While it is very likely that we have already discovered all existing NEAs with diameter >2 km, we can never be quite sure that we know of every large NEA that is already on an Earth-crossing orbit and we will not be able to identify promptly every new addition to this dynamic collection of extraterrestrial objects. Consequently, assessing the risks of collision will always require assumptions regarding the impact frequency of various-sized objects. The general size frequency distribution of NEOs is now fairly well known (see fig. 2.7), but there are different assumptions about the most likely frequency of impacts; the estimates differ by up to 1 OM. For example, Ward and Asphaug (2000) assume that an object with diameter 400 m hits the Earth once every 10,000 years, and with diameter 1 km, once every 100,000 years. By contrast, Brown et al. (2002) would expect a body with diameter 400 m to hit once every 100,000 years, and with diameter 1 km, once every 2 million years; Chapman (2004) would expect an object with diameter 400 m to hit once every 1 million years; and Jewitt (2000), an object with diameter 400 m, once every 400,000 years.

Fig. 2.9
A collision that is not going to happen: the orbits of four planets and asteroid 29075 (1950 DA). Based on NASA (2007).

Another important consideration enters at this point: even impacts of bodies with diameter <1 km could have global consequences if they destroyed a core area of a major nation. For example, an object 500 m across would devastate an area of about 70,000 km2; Tokyo and its surrounding prefectures cover less than half that area (~30,000 km2) and are inhabited by about 30 million people. Alternatively, calculations by Ward and Asphaug (2000) show that if an asteroid with diameter 400 m were to hit a 1-km-deep ocean site at 20 km/s, the maximum amplitude of a tsunami generated by this impact would be more than 200 m at a distance of 100 km and 20 m at a distance of 1,000 km. A near-shore impact off eastern Honshū or in the North Sea between London and Amsterdam would instantly obliterate core regions of the world’s two leading economies, Japan and the EU, and unlike with a tsunami generated by a distant earthquake, there would not be sufficient time for mass population evacuation.

Naturally, the probability of such a site-specific impact (PS) is a small fraction of that for an unspecified location on the Earth (PE): PS = PE(AS/AE). Assuming that an object with diameter 400 m arrives once every 100,000 years (PE = 10⁻⁵/year), the probability of its destroying the Tokyo area (AS = 3 × 10¹⁰ m²) would be no more than 6 × 10⁻¹⁰ (the Earth’s surface being AE = 5.1 × 10¹⁴ m²), an annual probability of about 1 in 1.66 billion. Ward and Asphaug (2000) calculated specific probabilities of a 5-m-high tsunami wave’s hitting Tokyo and New York at, respectively, 4.2% and 2.1% during the next 1,000 years, or roughly 0.2% and 0.1% during the next 50 years. In contrast, Bland and Artemieva (2003) estimated the frequencies of bolides that would most likely cause hazardous tsunami at only about one-fiftieth of the rate reported by Ward and Asphaug (2000). Chesley and Ward (2006) calculated the overall long-term casualties that would be caused by impacts of objects with diameters 200-400 m at fewer than 200 deaths per year (or fewer than 10,000 total during the next half century).
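The area-scaling argument above can be verified with a few lines of arithmetic. This is a sketch using the illustrative values from the text; the variable names are mine:

```python
# Site-specific impact probability: P_S = P_E * (A_S / A_E), where P_E is
# the annual probability of a 400-m impact anywhere on the Earth. All
# numbers are the chapter's illustrative values, not new estimates.

P_E = 1e-5       # once every 100,000 years
A_S = 3.0e10     # greater Tokyo, ~30,000 km^2, in m^2
A_E = 5.1e14     # the Earth's surface area, in m^2

P_S = P_E * (A_S / A_E)
print(f"Annual probability of a direct hit on the Tokyo area: {P_S:.1e}")
print(f"Roughly 1 chance in {1 / P_S:.3g} per year")
```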

The highest risk of collision-related fatalities comes from the land impact of smaller, and hence more common, bodies, with a more than 1% chance that such an impact will kill about 100,000 people during the twenty-first century, whereas somewhat larger objects (diameter 150-600 m) will pose the greatest tsunami hazards (Chapman 2004). By contrast, probabilities of encounters with NEOs with diameter ≥1 km are orders of magnitude smaller. If the average recurrence interval for a 1-km asteroid were 400,000 years, then the probability of impact during the next 50 years would be 0.0125%; bracketing this by uncertainties of 100,000 years to 2 million years gives a range of 0.0025%-0.05%. Whether an asteroid’s impact would have severe global consequences depends not only on its diameter but also on its density and speed (an asteroid traveling at 30 km/s would have 2.25 times the kinetic energy of an equally massive counterpart moving at 20 km/s) and on the impacted area.
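The conversion from an assumed recurrence interval to a 50-year impact probability can be sketched as follows; the intervals are the bracketing estimates quoted above, and the Poisson form is my assumption, though for events this rare it is indistinguishable from the simple ratio 50/T:

```python
import math

def prob_within(t_years, recurrence_interval_years):
    """Probability of at least one impact in t_years, assuming a constant
    annual hazard rate of 1/recurrence_interval_years (Poisson process)."""
    return 1.0 - math.exp(-t_years / recurrence_interval_years)

# Bracketing estimates for a 1-km asteroid: 100,000 to 2 million years
for T in (100_000, 400_000, 2_000_000):
    print(f"T = {T:>9,} years -> P(next 50 years) = {prob_within(50, T):.4%}")
```

With T = 400,000 years this reproduces the 0.0125% figure in the text, and the two bracketing intervals give 0.05% and 0.0025%.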

If a large asteroid were to enter the ocean, such an impact would generate tsunamis that would hit even distant shores with high-amplitude waves: the impact’s location would determine the extent of global fatalities and economic damage. For example, 2.15 Ma ago the Eltanin asteroid, whose diameter may have been as much as 4 km, entered a deep (about 5-km) spot in the Pacific Ocean off southern Chile without forming a seabed crater and without resulting in a mass extinction (Mader 1998). The resulting tsunami (total energy of 200 EJ) would have completely destroyed the South Pacific islands, but the wave height along the coasts of North America and East Asia would have been less than 15 m.

But even if we were to discover all NEAs and determine that none of them is on a collision course with the Earth, we would still face an inherently much more difficult challenge of identifying cometary impactors. These bodies, made of rocky material and volatile ice, account for no more than 10% of all NEOs, but because they have higher encounter velocities (as much as 60 km/s compared to 15-25 km/s for asteroids) their kinetic energy is much higher, and they have been responsible for some 25% of all craters with diameters >20 km (Brandt and Chapman 2004). Fortunately, these more powerful impacts are rarer than encounters with similar-sized asteroids. The closest approaches by historic comets missed the Earth by 3.7 lunar distances (1 LD = 384,000 km) in 1491, 5.9 LD in 1770, and 8.9 LD in 1366; all other misses were >10 LD (NASA 2006). Consequently, the probability of the Earth’s catastrophic encounter with a comet is likely less than 0.001% during the next 50 years, a chance approaching the level of 1 in 1 million.
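The quadratic dependence of kinetic energy on impact velocity explains both the 2.25-fold figure cited earlier and the outsized destructiveness of comets. A minimal illustration:

```python
def ke_ratio(v_fast_kms, v_slow_kms):
    """Kinetic-energy ratio of two equally massive impactors (E = m*v^2/2),
    which reduces to the square of the velocity ratio."""
    return (v_fast_kms / v_slow_kms) ** 2

print(ke_ratio(30, 20))   # asteroid at 30 vs. 20 km/s -> 2.25
print(ke_ratio(60, 20))   # comet at 60 km/s vs. asteroid at 20 km/s -> 9.0
```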

Volcanic Mega-eruptions and Collapses

About a half billion people live within a 100-km radius of a volcano that has been active during the historical era, but the number of fatalities and the extent of material damage caused by volcanic eruptions have been highly variable (fig. 2.10). Fortunately, even with eruptions as large as the largest historic events, the potential for immediate fatalities is relatively limited. Hot lava usually spreads only over several square kilometers, ballistic projectiles fall on areas ≥10 km2, and tephra deposits affect areas of 10²-10⁶ km². But tsunamis generated by large eruptions can cross an ocean, and volcanic dust from a major eruption can be transported worldwide. Loss of life and property depends on the prevailing form of energy release: slow-flowing, glowing Hawaiian lavas give plenty of time to evacuate houses, whereas pyroclastic flows, such as those that swept down from Vesuvius in 79 C.E. and buried Pompeii and Herculaneum, or the flows from Mount Pelée in 1902, which killed all but 2 of 28,000 residents of St. Pierre on Martinique, produce instant mass burials (Sigurdsson et al. 1985; Heilprin 1903).

Fig. 2.10
Volcanic eruptions and fatalities, 1800-2000. Plotted from data at <http://www.volcanolive.com/>.

Because of larger populations, the frequency of eruptions involving fatalities rose from fewer than 40 per century before 1700 to more than 200 during the twentieth century (Simkin 1993; Simkin, Siebert, and Blong 2001). Nearly 30% of the ~275,000 fatalities between 1500 and 2000 were due to pyroclastic flows and 20% to tsunamis. The four greatest disasters in terms of fatalities were Tambora (92,000), Krakatau (36,000), Mount Pelée (28,000), and Colombia’s Nevado del Ruiz in 1985 (23,000). As for the frequency of eruptions, it rose from fewer than 20 per year before 1800 to more than 60 per year by the late twentieth century, largely because of improved reporting. Ammann and Naveau (2003) analyzed sulfate spikes in polar ice and discovered a strong 76-year cycle of tropical explosive volcanism during the last six centuries.

The most common way to measure the magnitude of eruptions is the volcanic explosivity index (VEI), devised by Newhall and Self (1982). This logarithmic scale combines the volume of ejecta and the height of the ash column. VEI values less than 4 include eruptions that take place somewhere on the Earth daily or weekly and that produce less than 1 km3 of tephra (airborne fragments ranging from fairly large blocks to very fine dust) with maximum plume heights below 25 km. Mount St. Helens (1980) had VEI 5 (paroxysmal eruption, the same magnitude as Vesuvius in 79 C.E.) and produced just 1 km3 of ejecta (Lipman and Mullineaux 1981).
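Because the scale is logarithmic, tephra volume alone gives a serviceable first guess at the VEI: each step up corresponds roughly to a tenfold increase in ejecta, with VEI 4 at about 0.1 km3 and VEI 8 at 1,000 km3. A simplified sketch (the actual index also weighs plume height and other criteria, so this is only an approximation):

```python
import math

def approx_vei(tephra_km3):
    """Approximate VEI from tephra volume alone: one index step per tenfold
    increase in ejecta, anchored at VEI 4 ~ 0.1 km^3. Clamped to the 2-8
    range where the volume-only approximation is meaningful."""
    vei = int(math.floor(math.log10(tephra_km3))) + 5
    return min(8, max(2, vei))

print(approx_vei(1))      # Mount St. Helens 1980, ~1 km^3 of ejecta -> 5
print(approx_vei(2800))   # Toba, ~2,800 km^3 -> 8
```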

Krakatau (1883) had VEI 6 (colossal eruption), and Tambora (1815) had VEI 7 (supercolossal eruption). The Bronze Age Minoan eruption in the Aegean Sea, about 3,650 years ago, was the largest release of volcanic energy (100 EJ) during the historic period, and it created the great Santorini caldera (surrounded by the islands of Thera, Therasia, and Aspronisi). But its total volume of ejecta, about 70 km3, was less than half of Tambora’s volume (Friedrich 2000), a comparison that exemplifies the lack of clear correlation between spectacular ash plumes and the total energy released by a volcanic eruption.

Historic eruptions are dwarfed by VEI 8 events variously labeled gigantic, mega-, or supereruptions (Sparks et al. 2005; Mason, Pyle, and Oppenheimer 2004). The two most recent ones created the Taupo caldera in New Zealand (26,500 years ago, VEI 8.1, 530 km3) and the giant Toba caldera in northern Sumatra (fig. 2.11) (74,000 years ago, VEI 8.8, 30 km × 100 km), an oval now filled by a lake (Rose and Chesner 1990). The Toba event produced about 2800 km3 of ejecta, and La Garita (27.8 Ma ago, VEI 9.1), the largest supereruption identified so far, produced the Fish Canyon Tuff in Colorado by ejecting about 4500 km3.

Fig. 2.11
Toba caldera, and comparison of Toba’s volcanic ash volume with the largest nineteenth- and twentieth-century eruptions. Plotted from data in Mason, Pyle, and Oppenheimer (2004) and USGS (2005).

Toba’s eruption sent trillions of tonnes of volcanic ash thousands of kilometers downwind and dispersed in both westerly and easterly directions, a pattern suggesting that the eruption happened during the summer monsoon (Bühring and Sarnthein 2000). Ash fall covered most of Southeast Asia, reached as far west as the northeastern Arabian Sea, and deposited several centimeters over the South China Sea and parts of southern China (Pattan et al. 2001; Ambrose 2003). Its greatest terrestrial impact was on the Indian subcontinent, where cores show layers of 40-80 cm in central India and very thick (2-5 m) deposits, possibly reworked by redeposition, close to the eastern coast (Acharya and Basu 1992). Rose and Chesner (1990) estimated that 1% of the Earth’s surface was covered with more than 10 cm of Toba’s ash.

Toba’s impact must have been quite severe. It is perhaps the best explanation for a genetically well-documented late Pleistocene population bottleneck, when small and scattered groups of humans were reduced to a global total of fewer than 10,000 individuals and our species came very close to ending its evolution (Rampino and Self 1992; Ambrose 1998). This explanation relies on studies of mitochondrial DNA that indicate severe population shrinkage between 80,000 and 70,000 years before the present (Harpending et al. 1993); as with any reconstructions of this kind, it has been criticized and defended (Ambrose 2003).

Quantifying the probabilities of future supereruptions is a highly uncertain undertaking. Their frequency cannot be extrapolated from the power law relation, which is based on much better records of the size and frequency of smaller events. Such an extrapolation would suggest a recurrence interval on the order of 1,000 years, whereas the most recent event (Taupo) was 26,500 years ago. Our enumeration of supereruptions is certainly incomplete, but the best available account lists 42 events with VEI 8 or above during the past 36 million years, with two distinct pulses. The first one peaked about 29 million years ago, and the other one began about 13.5 million years ago; analysis indicates that an eruption with VEI 8 or above could be expected to take place at least once every 715,000 years and that there is a 1% probability of such an eruption during the next 460-7,200 years (Mason, Pyle, and Oppenheimer 2004). This would translate to a 0.007%-0.1% probability during the next 50 years.
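The final step, rescaling the 1% probability quoted for a 460-7,200-year window to the 50-year horizon used throughout this chapter, can be sketched by assuming a constant annual hazard rate (my simplifying assumption, for illustration):

```python
import math

def rescale_probability(p, horizon_years, new_horizon_years):
    """Convert P(event within horizon) to a different horizon, assuming a
    constant annual hazard rate (exponential waiting time)."""
    annual_rate = -math.log(1.0 - p) / horizon_years
    return 1.0 - math.exp(-annual_rate * new_horizon_years)

# Mason, Pyle, and Oppenheimer (2004): 1% chance within 460-7,200 years
for horizon in (460, 7_200):
    p50 = rescale_probability(0.01, horizon, 50)
    print(f"1% within {horizon:>5,} years -> {p50:.3%} within 50 years")
```

The two horizons give roughly 0.1% and 0.007%, the bounds quoted in the text.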

Impacts of supereruptions must be deduced from the effects of smaller events described in historic records and studied by modern volcanology. These extrapolations are also subject to many uncertainties, as is the use of modern global climate models to simulate the effect of high loads of stratospheric ash. Besides the highly variable gas composition (some eruptions produce relatively little or almost no SO2, the precursor of sulfates) and different magma/ash ratios, the severity of regional impact and the overall extent of climatic effects would also be determined by the location of the event. Supereruptions close to densely populated areas would cause many more instant casualties and more physical destruction. Of the 14 supereruptions younger than 10 million years, 6 took place in the western United States close to or in California and upwind from prime agricultural regions (fig. 2.12).

Virtually instant fatalities would be caused by pyroclastic flows, heavy ash fall, and inhalation of highly acidic gases and aerosols, all effects well documented from Vesuvius and a number of modern eruptions. Severe damage to plants and acute respiratory effects would be limited to areas of relatively high concentrations of acidic and halogen gases. By far the most important global impact of supereruptions would be their short- to medium-term climatic consequences: dust injected all the way into the stratosphere produces hemispheric or even global temperature changes during the subsequent months or years (Angell and Korshover 1985; Robock 1999; Robock and Oppenheimer 2003). Sulfate aerosols, formed from the released SO2, have a dual effect on the atmosphere: they reflect the incoming solar radiation, thereby cooling the troposphere, but they also absorb both the short-wave solar radiation and the outgoing long-wave terrestrial radiation and warm the stratosphere above the tropics.

Fig. 2.12
Largest flood basalt provinces created during the past 250 million years, and locations of volcanic supereruptions during the last 5 million years. Based on Coffin and Eldholm (1994), Sparks et al. (2005), and Mason, Pyle, and Oppenheimer (2004).

Moreover, the stratospheric sulfates take part, together with emitted HCl, in complex reactions that destroy ozone. The outcomes are not easily predictable. Statistical analyses by Angell and Korshover (1985) indicated that out of 96 studied eruption events only 27 were followed by significant temperature declines. Because the maximum distance to the tropopause is 15-16 km in the tropics (compared to 9-11 km near the poles), tropical eruptions have to be more powerful in order to inject their plumes all the way into the stratosphere (Halmer and Schmincke 2003). But tropical eruption plumes that inject the ash into the atmosphere within ±30° of the equator produce a worldwide cooling effect more readily (as the general atmospheric circulation spreads the aerosols fairly rapidly over both hemispheres) than those taking place in high latitudes.

Fig. 2.13
Enormous eruption plume of Mount Pinatubo, June 15, 1991. USGS photo by Dave Harlow.

This effect was closely observed and successfully modeled after the eruption of Mount Pinatubo (VEI 5-6) on the Philippine island of Luzon on June 15, 1991 (fig. 2.13) (Soden et al. 2002). This eruption was the twentieth century’s largest injection of SO2 into the stratosphere: about 20 Mt in a matter of days, reaching as high as 45 km (McCormick, Thomason, and Trepte 1995; Newhall and Punongbayan 1996). Satellite monitoring showed that the resulting sulfate aerosols cooled the lower troposphere globally by about 0.5°C. Reduced global water vapor concentrations closely tracked this temperature decrease. But some regions experienced pronounced seasonal warming against the global background of cooling; during the winter of 1991-1992 parts of North America and Western Europe were up to 4°C warmer than normal (Robock 2002).

Examination of tree ring densities indicates that the strongest Northern Hemisphere summer anomaly of the past 600 years was -0.8°C in 1601, most likely as the consequence of Peru’s Huaynaputina eruption in 1600 (Briffa et al. 1998). Ash from Tambora’s Plinian eruptions in April 1815 reached up to 43 km (Sigurdsson and Carey 1989), and during 1816 its global acid fallout of some 150 Mt caused an average temperature deviation of -0.7°C in the Northern Hemisphere, producing the famous “year without a summer,” with reduced harvests, spikes in food prices, and localized famines (Stothers 1984). The Toba supereruption is credited with temperature decreases of up to 15°C below normal in latitudes between 30° N and 70° N, and with hemispheric cooling of as much as 3°C-5°C that may have persisted for several years, intense and long enough to trigger a “volcanic winter,” a worldwide phenomenon akin to the effects of a nuclear winter, which was hypothesized to be (next to the radiation hazard) the most debilitating consequence of a thermonuclear war (Rampino and Self 1992; Turco et al. 1991).

Today a Toba-sized eruption in a similar location would, besides killing tens of millions of people throughout Southeast Asia, destroy at least one or two seasons of crops needed to feed some 2 billion people in one of the world’s most densely populated regions. This alone would be a catastrophe unprecedented in history, and it could be compounded by much reduced harvests around the world. Compared to these food-related impacts, the damage to machinery, or the necessity to suspend commercial flights until the concentrations of ash in the upper troposphere returned to tolerable levels, would be a minor consideration.

But the VEI >8 supereruptions are not the only events with global impacts large enough to affect the course of the modern world. A moderate VEI 7 eruption would almost certainly have a global effect if it were located within ±30° of the equator and if it ejected at least 100 km3 of magma, that is, 250-300 km3 of ash. Sparks et al. (2005) estimated its frequency at once every 3,000 (1,700-10,000) years. Its probability would then be 1.7% (0.5%-2.9%) during the next 50 years. VEI 7 eruptions releasing at least 300 km3 of magma (750 km3 of ash) could take place as often as once every 10,000 years, and their probability range for the next 50 years would be 0.005%-0.5%. Even with all these uncertainties it is clear that with the (rounded) probabilities of 0.01%-0.1% for a supereruption (VEI >8) and 0.01%-3% for smaller events (VEI 7), globally significant volcanic eruptions are at least 1 or 2 OM more frequent than impacts of extraterrestrial bodies releasing comparable amounts of energy (Mason, Pyle, and Oppenheimer 2004).

For North America the most likely threat is presented by recurrent eruptions of the Yellowstone hotspot (Smith and Braile 1994). Past eruptions of this supervolcano left behind nine massive calderas during the last 15 million years. The last three eruptions took place 2.1 million, 1.3 million, and 640,000 years ago, and the last one produced about 1000 km3 of volcanic ash (USGS 2005). There are three ways to interpret this sequence. First, it has too few members to allow for any conclusions. Second, the interval between the Yellowstone hotspot eruptions has actually decreased from about 800,000 years to 660,000 years; a repeat of the last interval leaves only 20,000 years before the next event is due. Third, the three events had an average interval of 730,000 years, and hence there are still some 90,000 years to go before the most likely repeat.

In any case, another such event has a very low probability of occurring (~0.007%) during the next 50 years. The overall impact of a Yellowstone eruption would depend on the prevailing winds. The dominant wind direction in the Yellowstone region is northwesterly; based on the effects of previous eruptions (Fisher, Heiken, and Hulen 1997), the area most affected by ash fall would encompass Wyoming, Colorado, Nebraska, Kansas, Oklahoma, and parts of South Dakota, Texas, New Mexico, and Utah. If a new eruption were to produce as much ash as the last one and affect approximately 2 million km2, then all of the leading wheat-producing states would be buried under half a meter of ash. This calculation assumes an even downwind distribution; the actual ash cover would range from several meters in central Wyoming to a few centimeters in Texas.
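The half-meter figure follows directly from spreading the ash volume over the affected area, with a uniform layer being the simplifying assumption the text itself notes:

```python
# Yellowstone scenario: ~1,000 km^3 of ash spread evenly over ~2 million
# km^2 of downwind states (both figures from the text).
ash_volume_km3 = 1_000
affected_area_km2 = 2_000_000

# km^3 / km^2 gives depth in km; multiply by 1,000 to express it in meters.
depth_m = ash_volume_km3 * 1_000 / affected_area_km2
print(f"Mean ash depth: {depth_m} m")   # 0.5 m
```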

But the past eruptions show that ash fall could affect all states west of the Mississippi (fig. 2.14). Thinner layers of volcanic ash could be incorporated into soils by plowing (and might actually improve productivity in years ahead), but even the most powerful tractors could not handle deposits of 0.5-1 m, and an inevitable consequence would be at least temporary abandonment of cropping on large areas of the Great Plains. Moreover, unstable ash layers would be easily dislodged by heavy rains and spring snow melt, creating enormous flooding and stream silting hazards. The economic costs of such an event could be fully assessed only generations later.

There are two sets of circumstances in which even a volcanic event of less than supereruption magnitude could have enormous socioeconomic consequences. The first would be if the eruption were to produce huge volumes of acidic gases upwind from a major, densely populated region whose economy would be severely damaged by the effects of sulfate aerosols. The absorption and scattering of visible light would create atmospheric haze, temporarily cool the troposphere and reduce photosynthesis, and cause damage to human and animal health. The second would be if an eruption were to cause a massive collapse of volcanic flanks into a nearby ocean and hence generate an extraordinarily large tsunami.

By far the greatest risk of the first event is presented by a repeat eruption of the Laki fissure (Skaftár Fires) in Iceland. The last episode, in 1783-1784, produced nearly 15 km3 of lava and released about 122 Mt of SO2 in eight months (for comparison, the global emissions of the gas from the combustion of fossil fuels were about 150 Mt SO2/year during the early 2000s) as well as about 7 Mt of hydrochloric acid and 15 Mt of hydrofluoric acid; these emissions were carried by eruption columns to altitudes of 6-13 km (Thordarson et al. 1996; Thordarson and Self 2003). The emissions were then dispersed eastward across the Atlantic by the prevailing westerlies. The oxidation of SO2 eventually produced some 200 Mt of H2SO4 aerosols, and nearly 90% of these particulates were removed as acid precipitation, resulting in heavy and extensive atmospheric haze (dry fog) locally and downwind in Atlantic Europe, as well as a subsequent severe winter with temperatures reduced by as much as 1.3°C for the next two or three years.
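The jump from ~122 Mt of SO2 to ~200 Mt of aerosol is consistent with simple stoichiometry: each molecule of SO2 (molar mass ~64 g/mol) is oxidized and hydrated into one molecule of H2SO4 (~98 g/mol). A sketch assuming complete conversion:

```python
# SO2 + 1/2 O2 + H2O -> H2SO4 is a 1:1 molar conversion, so the aerosol
# mass grows by the ratio of the molar masses (98.08 / 64.06 ~ 1.53).
M_SO2, M_H2SO4 = 64.06, 98.08   # molar masses, g/mol

so2_mt = 122                     # Laki's eight-month SO2 release, Mt
h2so4_mt = so2_mt * (M_H2SO4 / M_SO2)
print(f"{h2so4_mt:.0f} Mt of H2SO4")   # ~187 Mt, i.e., "some 200 Mt"
```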

Fig. 2.14
Approximate volcanic ash fall zones from the two most recent Yellowstone eruptions, Lava Creek (640,000 years ago) and Huckleberry Ridge (2.1 million years ago), and for comparison, the area of heavy Mount St. Helens ash fall in 1980. Based on USGS (2005).

Nearly a quarter of Iceland’s population (about 9,000 people) died because of the haze-induced famine, and deposited hydrofluoric acid contaminated the island’s food and water. Volcanic fog over parts of Europe increased local mortality, caused respiratory complications, and damaged vegetation (Stothers 1996; Durand and Grattan 1999). The health impact was particularly pronounced in France and England. In France the excess deaths added 25% to the expected mortality between August 1783 and May 1784—more than the 16,000 premature deaths from the extreme heat wave of 2003.

Examination of English parish records by Witham and Oppenheimer (2005) revealed nearly 20,000 excess deaths in the Laki eruption’s aftermath. The odds of another eruption like it during the next 50 years are very low (<0.05%), but there is a high probability that a similar event will take place again during this millennium. Today its most immediate effects (besides the fatalities in Iceland) would be a temporary shutdown and extended disruption of the world’s busiest intercontinental air routes between North America and Europe. Eastbound flights take advantage of the shifting jet streams, whose course often puts them close to Iceland; westbound flights approximate the great circle route, which puts them close to or right above Iceland. Given the chronically precarious financial situation of most airlines operating these routes, such an event would precipitate a number of major bankruptcies.

Huge volcano-triggered landslides, recurring roughly once every 100,000 years and creating waves in excess of 100 m, were first documented in the early 1960s as massive hummocks of debris on the sea floor surrounding the Hawaiian Islands (Moore, Normark, and Holcomb 1994). Subsequent research uncovered catastrophic landslides associated with eruptions off the Canary Islands, Réunion, and Tahiti-Nui. The total volume removed by several episodes (870,000-850,000 years ago) of the north-directed Tahiti-Nui landslides is 400-450 km3; the south-directed slides amount to about 300 km3 (Hildenbrand, Gillot, and Bonneville 2006). Perhaps the greatest risk of this kind is now posed by a future eruption of Cumbre Vieja volcano, which created La Palma in the Canary Islands about 3 million years ago (Ward and Day 2001). During the 1949 eruption, the seventh known since 1470, the volcano’s western half moved some 4 m oceanward; another eruption could cause a catastrophic failure of the entire western flank.

The resulting landslide of up to 500 km3 of volcanic rock would generate a rapidly moving (up to 350 km/h) mega-tsunami, which would, after crossing the Atlantic, hit the eastern coast of North America with repeated walls of water up to 25 m high, assuring the destruction of Miami and severe damage to all coastal U.S. cities, including New York and Boston (fig. 2.15). Trombley and Toutain (2001) used all available historic and current eruption data and seismic, deformation, and thermal analyses to predict a >50% probability of another eruption by 2027, a near-certainty (>95% probability) by 2214. But we do not know which future eruption will cause a catastrophic landslide. The best evidence regarding the frequency of Canary Islands landslides would indicate intervals of about 70,000 years and hence a probability no higher than 0.07% during the next 50 years.

A 500-km3 landslide is the worst-case scenario, which also assumes that the slide would take place instantaneously during a single event and enter the sea at high velocity. Halving the sliding mass would reduce the maximum waves to 5-10 m, and tsunamis from a landslide of 150 km3 would reach only 3-8 m along the eastern coast of the United States. Wynn and Masson (2004) argued, on the basis of their studies of offshore deposits, that if each landslide were to be composed of multiple stages of gradual failure, then the average collapsed mass could be as low as 10-25 km3 and the resulting tsunami would not inflict severe damage on the eastern coast of North America.

Influenza Pandemics

Modern hygiene, nationwide and worldwide inoculation, constant monitoring of infectious outbreaks, and emergency vaccinations have either completely eliminated or drastically reduced a number of previously lethal, deeply injurious, or widely discomforting epidemic diseases, including cholera, diphtheria, pertussis, polio, smallpox, tuberculosis, and typhoid. I hasten to add that these have been battles with no assured permanence. Pertussis (whooping cough) is coming back among children too young to be vaccinated (Tozzi et al. 2005). More than 10 million people worldwide still contract tuberculosis every year. The number of multidrug-resistant cases is increasing, and four decades have passed since the introduction of the last new effective anti-TB drugs, rifampicin in 1965 and ethambutol in 1968 (Glickman et al. 2006; Murray 2006).

The eradication goal has been particularly elusive in the case of polio. The number of cases dropped worldwide from 350,000 in 1988 to only about 500 in 2001. The next year the number of cases rebounded to some 2,000, and after another drop it returned to nearly 2,000 in 2005 because of the suspension of vaccination in northern Nigeria, the virus’s persistence in the slums of India, and a sudden increase of infections in Yemen, Somalia, and Indonesia (Roberts 2006). In 2005 active transmission of polio virus took place in 16 countries, with endemic presence in Afghanistan, Pakistan, India, and Nigeria, and many polio experts have concluded that, unlike smallpox, the disease cannot be eradicated, only controlled (Arita, Nakane, and Fenner 2006).

Fig. 2.15
Massive collapse of western flank of Cumbre Vieja volcano on La Palma, Canary Islands, would generate a tsunami that would hit the eastern coast of North America with a sequence of waves 10-25 m high. Tsunami progress across the Atlantic would allow for ample warning, and a staged collapse of a volcanic flank would produce much smaller trans-Atlantic waves. Based on Ward and Day (2001).

A new journal, Emerging Infectious Diseases, published by the Centers for Disease Control and Prevention, is devoted to this global challenge. Since 1975 more than 40 new pathogens (mostly viruses) have been added to the ever-growing list of contagious diseases. They include such scary but limited outbreaks as Ebola hemorrhagic fever in Africa, Nipah virus in Malaysia, Singapore, and Bengal, and hantavirus pulmonary syndrome in the U.S. Southwest (Yates et al. 2002) as well as more widespread and hence more worrisome cases of variant Creutzfeldt-Jakob disease (the human form of mad cow disease, bovine spongiform encephalopathy), cryptosporidiosis, cyclosporiasis, SARS, and HIV/AIDS (Morens, Folkers, and Fauci 2004).

None of these new threats, with the exception of HIV/AIDS, appears capable of changing the course of world history, and AIDS could do so only if new, more virulent strains were to afflict significant shares of populations outside sub-Saharan Africa, where the highest rates of infection now surpass 20% and where the disease has its most widespread and most debilitating (social, mental, economic) impacts. During the early 2000s the annual global death toll from AIDS was about 2.8 million people, less than mortality due to diarrhea and tuberculosis, two diseases that we know perfectly well how to eradicate at an acceptably low cost and that now claim annually about 3.4 million lives (WHO 2006). Moreover, low and steady rates of HIV infection in many countries, falling rates in some previously badly affected nations (particularly Uganda and Thailand), new multiple drug regimens that extend productive lives (and the hopes for eventual vaccination) show that the disease can be managed.

As far as unpredictable discontinuities are concerned, only one somatic threat trumps all of this: we remain highly vulnerable to another episode of viral pandemic. High-frequency natural catastrophes have their somatic counterpart in recurrent epidemics of influenza, an acute infection of the respiratory tract caused by serotype A and B viruses belonging to the family Orthomyxoviridae. Influenza epidemics sweep the world annually, mostly during the winter months, but with different intensities. In the United States there are between 250,000 and 500,000 new cases every year; about 100,000 people are hospitalized, and 20,000 people die (less than 0.01% of the U.S. population).

Infection rates are by far the highest in young children (10%-30% annually) and in people over 65 years of age (Harper et al. 2000). Influenza pandemics occur when one of the 16 subtypes (H1-H16) of serotype A viruses, different from strains already present in humans, suddenly emerges, rapidly diffuses around the world (usually within six months), and afflicts 30%-50% of all people. The illness, with its characteristic symptoms of fever, myalgia, headache, cough, coryza, debilitation, and discomfort, spreads rapidly (the latent period is just 1-4 days), and it is often complicated by bacterial or viral pneumonia. The former can be treated with antibiotics, but because there is no treatment for the latter, it is a common cause of death during influenza epidemics.

The first fairly well-documented influenza pandemic occurred in 1580, and there have been six known episodes during the last two centuries (Gust, Hampson, and Lavanchy 2001). In 1830-1833 an unknown subtype originated in Russia; in 1836-1837 another unknown subtype originated possibly in Russia; in 1889-1890 subtypes H2 and H3 originated possibly in Russia; in 1918-1919 subtype H1 (despite its common name “Spanish flu”) originated most likely in the United States; in 1957-1958 subtype H2N2 originated in southern China, with total global excess mortality of more than 2 million people; and in 1968-1969 subtype H3N2 originated in Hong Kong, with excess worldwide mortality of about 1 million people. This low death rate was attributable to protection conferred on many people by the 1957 infection. None of the post-1969 epidemics reached virulent pandemic status (Kilbourne 2006).

All of the nineteenth-century pandemics as well as the 1957 and 1968 events were relatively mild and hence did not cause any noticeable upticks in the secular trend of declining mortality. By contrast, the 1918-1919 pandemic was by far the largest sudden infectious burden in modern times (fig. 2.16). A common assumption is that its first, moderately virulent wave began in early March 1918 with the first infections at the U.S. Army Camp Funston in Kansas, but Langford (2005) proposed an origin in China. By May the virus had spread throughout most of the United States, Western Europe, north Africa, Japan, and the eastern coast of China; by August it was in Australia, Latin America, and India (Patterson and Pyle 1991; Davies 1999; Kolata 1999; Phillips and Killingray 2003; Barry 2004). The second wave, between September and December 1918, was responsible for most of the pandemic’s deaths, with mortality as high as 2.5% (fig. 2.17); the third wave (February to April 1919) was less virulent.

Fig. 2.16
Emergency hospital during the 1918 influenza pandemic at Camp Funston, Kansas. Image 1603, National Museum of Health and Medicine, Washington, D.C.

Scientific advances of the 1980s (polymerase chain reaction, permitting replication of genetic material) made it possible to identify the virus, which was initially retrieved from formalin-fixed, paraffin-embedded lung tissue samples and used to sequence first the fragments of viral RNA and then the complete genome (Taubenberger, Reid, Krafft et al. 1997; Taubenberger, Reid, Laurens et al. 2005). Reconstruction of the complete virus confirmed the pathogen’s extraordinary virulence (Tumpey et al. 2005). Statistical analyses of the best available data confirm a peculiar mortality pattern: in contrast to annual epidemics characterized by typical U-shaped mortality patterns, the 1918-1919 pandemic inflicted high mortality on people aged 15-35 years; 99% of all deaths were in people younger than 65 years (WHO 2005). Many of these deaths were due to viral pneumonia, which caused extensive hemorrhaging of the lungs, with death taking place within 48 hours.

There is little certainty regarding the total global death toll of the 1918-1919 influenza pandemic. Perhaps the most commonly cited worldwide aggregate is 20-40 million, but a key World Health Organization document refers to “upwards of 40 million people” (WHO 2005), and the best updated account puts the total at 50 million (Johnson and Mueller 2002). Even the lowest estimate is higher than all military and civilian casualties of World War I (~15 million). The total of 50 million deaths from the influenza pandemic would be much higher than the global deaths from the great 1347-1351 plague and almost equal the uncertain grand total of deaths among the populations of the world’s two largest Communist regimes of the twentieth century, the Stalinist USSR and Maoist China (White 2003). By any standard, the 1918-1919 influenza pandemic was the deadliest in history. The fairly reliably documented U.S. death toll of 675,000 people (Crosby 1989) was higher than all the deaths sustained by the country’s servicemen in all twentieth-century wars.

Fig. 2.17
Upper, total mortality in New York and London, 1918-1919. Based on image 3143, National Museum of Health and Medicine, Washington, D. C. Lower, influenza and pneumonia mortality in the United States, 1911-1917 and 1918. Plotted from data in Linder and Grove (1943).

During the late 1990s, two decades after the last and relatively mild pandemic, new concerns arose because of the emergence of new avian influenza viruses that besides infecting birds and pigs could be transmitted to people. In December 1995 a meeting in Bethesda, Maryland, on pandemic influenza heard from one of the world’s leading experts that “at this time, there is no evidence for or against the direct spread of avian influenza viruses to humans” (Webster 1997, S18). By the time this presentation was published, subtype H5N1 had mutated in Hong Kong’s poultry markets to a highly pathogenic form, first identified in April 1997, that could kill virtually all affected chickens within two days, and in May 1997 came the first human death, of a three-year-old boy (fig. 2.18) (Sims et al. 2002).

The virus was eventually transferred to at least 18 people, causing six deaths and bringing about the slaughter of 1.6 million birds (Snacken et al. 1999). This episode showed for the first time that avian influenza viruses could infect humans directly, without passing through pigs or other intermediate hosts. Two years later Hong Kong had two poultry-to-people transfers of subtype H9N2, and in 2003 H5N1 strains were isolated from two of the city’s SARS patients. Late in 2003 a highly pathogenic subtype H5N1 began to appear again in poultry in East and Southeast Asian countries. During the next three years the virus was found repeatedly in domestic poultry and wild birds in Japan, South Korea, China, Taiwan, Hong Kong, Vietnam, Laos, Cambodia, Philippines, Indonesia, Malaysia, and Thailand, and it spread westward to Mongolia, Kazakhstan, Turkey, and a number of European countries.

Between December 2003 and the end of October 2007 there were 329 laboratory-confirmed cases of human H5N1 infection and 201 deaths, indicating a highly virulent pathogen with mortality of 61% (WHO 2007). The highest numbers of infections and deaths were in Indonesia (107, 86), Vietnam (100, 46), and Thailand (25, 17). Fortunately, the H5N1 virus circulating between 2003 and 2006 was not easily transmissible to humans, and the main impact of its spread was economic, necessitating mass slaughter of infected poultry and the stockpiling of vaccines and antiviral medicines. The Thai outbreak in 2004 was particularly widespread, requiring a slaughter of 40 million chickens in 41 provinces (Chotpitayasunondh et al. 2005).

Fig. 2.18
Transmission electron micrograph of avian influenza H5N1 virus. From Centers for Disease Control and Prevention, Atlanta, Ga.

There is no way to eliminate the natural reservoirs of this virus. South China’s high densities and ubiquitous proximities of people, poultry, and pigs make the region a perennial source of new viruses, and studies show that domestic ducks in China’s southern provinces are the key reservoir of H5N1 (H. Chen et al. 2004). The serotype is now also widely present in wild migratory fowl; ducks, geese, and swans were credited with spreading it across much of Eurasia. However, that assumption may not be correct because there was no such transmission during the eight years after the virus was identified in Hong Kong in 1997, with billions of birds using the same flyways. Poultry shipment and the movement of contaminated materials and wastes may be the primary routes (Fergus et al. 2006).

Because the H5N1 serotype is highly pathogenic and has become ineradicable throughout large parts of Asia, it clearly has a pandemic potential (Li et al. 2004). Heightened awareness of the risks posed by H5N1 led epidemiologists to predict a high probability of pandemic influenza in the not-too-distant future. The following realities indicate the imminence of the risk. The typical frequency of influenza pandemics was once every 50-60 years between 1700 and 1889 (the longest known gap was 52 years, between the pandemics of 1729-1733 and 1781-1782) and only once every 10-40 years since 1889. The recurrence interval, calculated simply as the mean time elapsed between the last six known pandemics, is about 28 years, with the extremes of 6 and 53 years. Adding the mean and the highest interval to 1968 gives a span between 1996 and 2021. We are, probabilistically speaking, very much inside a high-risk zone.
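The recurrence arithmetic in the preceding paragraph can be checked in a few lines. This is a minimal sketch, assuming the conventional onset years of the six pandemics listed earlier (1830, 1836, 1889, 1918, 1957, 1968):

```python
# Onset years of the six known pandemics since 1830 (from the list above).
onsets = [1830, 1836, 1889, 1918, 1957, 1968]

# Gaps between successive onsets.
intervals = [b - a for a, b in zip(onsets, onsets[1:])]  # [6, 53, 29, 39, 11]

mean_interval = sum(intervals) / len(intervals)  # 27.6, i.e., about 28 years

print(min(intervals), max(intervals))  # extremes: 6 53
print(round(1968 + mean_interval))     # 1996, start of the high-risk span
print(1968 + max(intervals))           # 2021, end of the high-risk span
```

Adding the mean and the longest interval to the 1968 onset reproduces the 1996-2021 span quoted in the text.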

Consequently, the likelihood of another influenza pandemic during the next 50 years is virtually 100%, but quantifying probabilities of mild, moderate, or severe events remains largely a matter of speculation because we simply do not know how pathogenic a new virus will be and what age categories it will preferentially attack. Assessing the most likely extent of morbidity and mortality is even more challenging. Despite the enormous advances in virology and epidemiology, many fundamental scientific questions concerning the origins, virulence, and diffusion of influenza remain unanswered (Taubenberger and Morens 2006). The origin of the 1918-1919 pandemic remains unidentified, and that virulent strain itself was genetically unlike any other known virus examined since that time. Consequently, the current concern about avian H5N1 may be entirely misplaced, and a quite different strain may turn out to be pandemic. We also do not know how influenza A viruses switch to new avian hosts and, most important, what induces them to change so that they can propagate among humans.

The isolated cases of known human-to-human transmissions of H5N1 do not answer that key question. Certainly the most encouraging fact is that the widespread human exposure to infected poultry has produced only a few hundred human infections. This means that the species barrier to H5N1 diffusion is considerable (Hou and Shu 2006). Moreover, innate immune response (including elevated levels of inflammatory mediators among patients who died) may have been responsible for some of the human disease clusters.

Optimistic assessments of the next influenza pandemic put the infection rate at 20% of the world’s population, with 1 in every 100 ill people requiring hospitalization (provided the beds are available) and 7 million deaths in a few months (Stöhr and Esveld 2004). But the morbidity rate may actually be 25%-30%, and the World Health Organization believes that a new pandemic may affect 20%-50% of the world’s population. Its toll, however, cannot be definitely predicted because we have no way of knowing the eventual virulence of new infectious strains. What is certain is that the appearance of subtype H5N1 has brought us closer to the next pandemic and that, whatever its actual magnitude, we are not adequately prepared for it and for its consequences (WHO 2005).

If the eventual death toll were to resemble those of the last two pandemics, with just a few million excess deaths, there would be no global consequences. If it were merely a repeat of the 1918-1919 event, with mortality of no less than 20-25 million and no more than 50 million people, the relative global toll would obviously be much smaller (only about one-fourth as large) than it was four generations ago. But the overall mortality could also be a proportionally potentiated replica of 1918-1919. In the early 2000s we had a 3.4 times larger global population and an at least eight times larger (nearly 20 billion compared to less than 3 billion) worldwide inventory of poultry (the main reservoir of lethal viruses), with a large share of these birds in large feeding facilities housing tens of thousands of birds.

There are other obvious reasons that could make the next pandemic more costly even if the virulence of the pathogen and the relative mortality rate were no greater than in the 1918-1919 episode. By 2007 the world’s cities, the environment that affords much faster spread of infection, housed 50% of all people (76% in affluent countries), compared to only about 30% in 1918 (Brockerhoff 2000). Moreover, the global economy is incomparably more integrated, and modern travel and traffic in goods (including live animals and other agricultural products) are several orders of magnitude faster and more voluminous than nine decades ago, a near-ideal condition to spread infections around the world.

In 1918 it took six days to cross the Atlantic on a liner; now it takes six hours on a jetliner, and there is no doubt that air travel plays an important role in the diffusion of annual epidemics (Grais et al. 2004). The spread of SARS from Hong Kong to Toronto illustrated how unpredictably and rapidly such diffusions can take place (Abraham 2005). The extent and intensity of global links also make it impractical to adopt rapid and effective quarantine measures. Additional human factors facilitating the spread of infectious diseases include the rising demand for wild meat (common in both Africa and parts of Asia), drug addiction (including intravenous injections), and mass urban prostitution.

Even assuming “only” 25 million deaths in 1918-1919, we could see a proportionally increased global mortality surpassing 80-100 million people; with 50 million deaths in 1918-1919, the proportional total would rise to 150-200 million. With a slightly more than 5% mortality rate (that was the well-documented U.S. mean in 1918-1919) there could be at least 1.5-2 billion people contracting the infection, and it would be clearly beyond the capacity of health services to cope effectively with such a sudden burden of mass sickness. On the other hand, there are positive factors of generally better nutrition, much better hospital care, and incomparably greater virological understanding. Even so, the overall enormity of ubiquitous morbidity and greatly multiplied mortality would pose challenges unseen in most countries for generations.
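The proportional totals above follow from simple scaling. As a hedged sketch, multiplying the two bracketing 1918-1919 tolls by the roughly 3.4-fold growth of global population gives figures consistent with the quoted ranges:

```python
# Scale the bracketing 1918-1919 death tolls by the ~3.4-fold growth
# of the global population between 1918 and the early 2000s.
pop_ratio = 3.4
for deaths_1918 in (25_000_000, 50_000_000):
    scaled = deaths_1918 * pop_ratio
    print(f"{deaths_1918 // 1_000_000} million in 1918-1919 -> "
          f"about {round(scaled / 1_000_000)} million today")
```

The results, about 85 million and 170 million deaths, fall within the 80-100 million and 150-200 million ranges cited in the text.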

In addition, many specific impacts would complicate our ability to deal with immediate challenges and would have long-term consequences. A smaller herald wave of infections several months ahead of the main event (as happened in 1918 in the United States) might not be helpful: rather than giving us more time to prepare, it might actually cause more helplessness and fear because any development of a new vaccine, which would begin only once the pandemic virus started its diffusion, would not be completed before the virus covered the world (Stöhr and Esveld 2004). But if the pandemic resembled the 1918-1919 episode, then the event might not be over in six months, substantial mortality could continue during the second season, and many cities and countries might find it particularly difficult to cope with the second wave.

That phenomenon was illustrated by Toronto’s second wave of SARS in May 2003, minuscule in terms of total numbers but extremely taxing due to the mental burdens and logistical problems caused by quarantined hospitals, where only emergency operations were performed and no visits, not even to terminally ill patients, were allowed. The mortality burden might shift, as it did so stunningly during the 1918-1919 pandemic, to younger people (see fig. 2.17). Repetition of this pattern could strain the availability and effectiveness of caregivers (health professionals ranging from physicians to staff at retirement homes) and substantially worsen dependency ratios, particularly in Europe’s aging populations, where they have already risen to unprecedented levels.

The massive mortality of people in their prime would also strain the life insurance industry and depress real estate values. And what would the 24-hour news media, so adept at flogging a few accidental deaths in all-day marathons of despair, do with so many deaths that would just keep coming, day after day, week after week? How would the financial markets react to this massive and indiscriminate dying? More important, what would be the long-term economic cost in fear and depression on top of the immediate social and economic insults to the previously insulated Western way of life?

To what extent would Europe’s ravaged countries become open to virtually unchecked Muslim immigration, speeding up the conversion of the continent into Eurabia? What would the collapse of global trade do to the lives of hundreds of millions of factory workers in Asia whose wages depend on exports to affluent economies? How would it affect inequality among the already highly economically polarized populations of Africa and Latin America? Until a new pandemic unfolds in its unique way, we will have more uncertainty than assurance about our chances of coping with what might be a truly unprecedented public health and socioeconomic challenge.

Violent Conflicts

While trying to assess the probabilities of recurrent natural catastrophes and catastrophic illnesses, we must remember that the historical record is unequivocal: these events, even when combined, did not claim as many lives and have not changed the course of world history as much as the deliberate fatal discontinuities that Rhodes (1988) calls man-made death, the single largest cause of non-natural mortality in the twentieth century. Violent collective death has been such an omnipresent part of the human condition that its recurrence in various forms (conflicts lasting from days to decades, from homicides to democides) is guaranteed. Long lists of past violent events can be inspected in print (Richardson 1960; Singer and Small 1972; Wilkinson 1980) or in electronic databases (White 2003; IISS 2003; PRIO 2004).

Even a cursory examination of this record shows yet another tragic aspect of that terrible toll: so many violent deaths had no or only a marginal effect on the course of world history. Others, however, contributed to outcomes that truly changed the world. Large death tolls of the twentieth century that fit the first category include the Belgian genocide in the Congo (began before 1900), Turkish massacres of Armenians (mainly in 1915), Hutu killings of Tutsis (1994), wars involving Ethiopia (Ogaden, Eritrea, 1962-1992), Nigeria and Biafra (1967-1970), India and Pakistan (1971), and civil wars and genocides in Angola (1974-2002), Congo (since 1998), Mozambique (1975-1993), Sudan (since 1956 and ongoing), and Cambodia (1975-1978). Even in our greatly interconnected world, such conflicts can cause more than 1 million deaths (as did all of the just listed events) and go on for decades without having any noticeable effect on the cares and concerns of the remaining 98%-99.9% of humanity.

By contrast, the modern era has seen two world wars and interstate conflicts that resulted in long-lasting redistribution of power on global scales, and intrastate (civil) wars that led to the collapse or emergence of powerful states. I call these conflicts transformational wars and focus on them next. I then examine the most threatening category of asymmetrical conflicts, terrorist attacks used by small groups or loosely connected networks to challenge even the most powerful nation-states and, in so doing, to change the course of world history. Determined, protracted terrorist activities on the local or national level are not new, but after 9/11 there can be no doubt about their impact on global history.

Transformational Wars

There is no canonical list of transformational wars of the nineteenth and twentieth centuries. Historians agree on the major conflicts that belong in this category but differ as to others. My own list is fairly restrictive; a more liberal definition of worldwide impacts could extend the list. A long-lasting transformational effect on the course of world history is a key criterion. And most of the conflicts I have called transformational share another characteristic: they are mega-wars, claiming the lives of more than 1 million combatants and civilians. By Richardson’s (1960) definition, based on the decadic logarithm of total fatalities, most would be magnitude 6 or 7 wars (fig. 2.19). Their enumeration starts with the Napoleonic wars, which began in 1796 with the conquest of Italy and ended in 1815 in a refashioned, and for the next 100 years also remarkably stable, Europe. This stability was not basically altered, either by brief conflicts between Prussia and Austria (1866) and Prussia and France (1870-1871) or by repeated acts of terror that killed some of the continent’s leading public figures while others, including Kaiser Wilhelm I and Chancellor Bismarck, escaped assassination attempts.
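Richardson's magnitude scale mentioned above is simply the decadic (base-10) logarithm of total fatalities; a brief illustrative sketch, using the 1-million threshold of a mega-war and the roughly 20-million Taiping toll discussed below:

```python
import math

def richardson_magnitude(fatalities):
    """Richardson's war magnitude: the base-10 logarithm of total fatalities."""
    return math.log10(fatalities)

# A conflict killing 1 million people sits exactly at magnitude 6.
print(richardson_magnitude(1_000_000))             # 6.0
# The ~20 million deaths of the Taiping war correspond to magnitude ~7.3.
print(round(richardson_magnitude(20_000_000), 1))  # 7.3
```

On this scale the mega-wars of the past two centuries cluster at magnitudes 6 and 7, as fig. 2.19 shows.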

The next entry on my list of transformational wars is the protracted Taiping war (1851-1864), a massive millennial uprising led by Hong Xiuquan (Spence 1996). This may seem like a puzzling addition to readers not familiar with China’s modern history, but the Taiping uprising, aimed at achieving an egalitarian, reformist kingdom of heaven on earth, exemplifies a grand transformational conflict because it fatally undermined the ruling Qing dynasty, enmeshed foreign actors in China’s politics for the next 100 years, and brought the end of the old imperial order in less than two generations. With about 20 million fatalities, its human costs were higher than the aggregate losses of combatants and civilians in World War I.

Fig. 2.19
Wars of magnitude 6 or 7, 1850-2000. Boldface font indicates wars that the author considers transformational. Plotted fatalities are minimal to average (heavily rounded) estimates from sources cited in the text.

The American Civil War (1861-1865) should be included because it opened the way toward the country’s rapid ascent to global economic primacy (H.M. Jones 1971). U.S. GDP surpassed that of Great Britain by 1870; by the 1880s the United States became the technical leader and the world’s most innovative economy, firmly set on its rise toward superpower status.

World War I (1914-1918) traumatized all European powers, utterly destroyed the post-Napoleonic pattern, ushered in Communism in Russia, and brought the United States into global politics for the first time. And—a fact often forgotten—it also began the destabilization of the Middle East by dismembering the Ottoman Empire and creating the British and French Mandates whose dissolution led eventually to the formation of the states of Jordan (1923), Saudi Arabia and Iraq (1932), Lebanon (1941), Syria (1946), and Israel (1948) (Fromkin 2001).

World War II (1939-1945) is, of course, the quintessential transformational war, not only because of the sweeping changes it brought to the global order but also because of the decades-long shadows it cast over the rest of the twentieth century. Virtually all the key post-1945 conflicts involving that war’s protagonists—the USSR, the United States, and China in Korea; France and the United States in Vietnam; the USSR in Afghanistan; superpower proxy wars in Africa—can be seen as actions designed to maintain or challenge the outcome of WW II.

Arguably, other conflicts might seem to qualify, but closer examination shows that they did not fundamentally alter the course of history but rather reinforced the changes set in motion by transformational wars. Two cases are the undeclared but no less fatal wars waged, by means ranging from outright mass killings to deliberate famines, against the people of the USSR by Stalin between 1929 and 1953, and against the Chinese people by Mao between 1949 and 1976. The actual toll of these brutalities will never be known with any accuracy, but even the most conservative estimates put the combined death toll at above 70 million (White 2003). Objections can be made to the durations of the listed transformational wars. For example, 1912, the beginning of the Balkan wars, and 1921, the conclusion of the civil war that established the Soviet Union, may be a more appropriate dating of World War I. And one could say that World War II started with Japan’s invasion of Manchuria in 1931 and ended only with the Communist victory in China in 1949.

Even a rather restrictively defined list of transformational wars adds up to 42 years of conflicts in two centuries, with conservatively estimated total casualties (combatant and civilian) of about 95 million (averaging 17 million deaths per conflict). The mean recurrence rate is about 35 years, and the implied probability of a new conflict of that category during the next 50 years is roughly 20%. All these numbers could be reduced by including the wars of the eighteenth century, a time of remarkably lower intensity of all violent conflicts than the two preceding and two subsequent centuries (Brecke 1999). On the other hand, most of that century belonged distinctly to the preindustrial era, and most of the then major powers (e.g., Qing China, enfeebled Mughal India, and weakening Spain) were nearing the end of their influence. Thus the exclusion of eighteenth-century wars makes sense.

Two important conclusions emerge from the examination of all armed conflicts of the past two centuries: first, until the 1980s there was an upward trend in the total number of conflicts starting in each decade; second, there was an increasing share of wars of short duration (less than one year) (Kaye, Grant, and Emond 1985). Implications of these findings for future transformational conflicts are unclear. So is the fact that between 1992 and 2003 the worldwide number of armed conflicts declined by 40%, and the number of wars with 1,000 or more battle deaths dropped by 80% (fig. 2.20) (Human Security Centre 2006). These trends were clearly tied to declining arms trade and military spending during the post-Cold War era; it is thus unclear if the decade was a welcome singularity of reduced violence or a brief aberration.

Fig. 2.20
Worldwide conflicts, 1946-2003. From Human Security Centre (2006).

The most important finding regarding the future likelihood of violent conflicts comes from Richardson’s (1960) search for causative factors of war and his conclusion that wars are largely random catastrophes whose specific time and location we cannot predict but whose recurrence we must expect. That would mean that wars are like earthquakes or hurricanes, leading Hayes (2002, 15) to speak of warring nations that “bang against one another with no more plan or principle than molecules in an overheated gas.” At the beginning of the twenty-first century it could be argued that new realities have greatly diminished the likelihood of many possible conflicts, thus, to continue the metaphor, greatly reducing the density and the pressure of the gas.

The European Union is widely seen as a near-absolute barrier to armed conflicts involving its members. America and Russia may not be strategic partners, but they surely do not take the same adversarial positions that they held for two generations preceding the fall of the Berlin Wall in 1989. The Soviet Union and China came very close to a massive conflict in 1969 (a close call that prompted Mao’s rapprochement with the United States), but today China buys the top Russian weapons and would gladly buy all the oil and gas that Siberia could offer. And Japan’s very constitution prevents it from attacking any country. This reasoning would negate, or at least severely undercut, Richardson’s (1960) argument, but it would be a mistake to use it when thinking about long spans of history. Neither short-term complacency nor an understandable reluctance to imagine the locale or cause of the next transformational war is a good argument against its rather high probability.

Fig. 2.21
Napoleon Bonaparte, First Consul of France, 1799-1802. Painting (ca. 1802) by Antoine-Jean Gros.

In 1790 no Prussian high officer or Czarist general could suspect that Napoleon Bonaparte (fig. 2.21), a diminutive Corsican from Ajaccio, who became known to his troops as le petit caporal, would set out to redraw the map of Germany before embarking on a mad foray into the heart of Muscovy (Zamoyski 2004). In 1840 the Emperor Daoguang could not have dreamt that the dynastic rule that had lasted for millennia would come close to its end because of Hong Xiuquan, a failed candidate of the state Confucian examinations who came to think of himself as a new Christ and who led the protracted Taiping rebellion (Spence 1996). And in 1918 the victorious powers, dictating a new European peace at Versailles, would not have believed that Adolf Hitler, a destitute, neurotic would-be artist and gassed veteran of trench warfare, would within two decades undo their new order and plunge the world into its greatest war.

New realities may have lowered the overall probability of globally transformational conflicts, but they surely have not eliminated the possibility of their recurrence. Causes of new conflicts could be found in old disputes or in surprising new developments. During 2005-2007 the probabilities of several new conflicts rose from vanishingly low to decidedly non-negligible as the North Korean threat led Japan to raise the possibility of an attack across the Sea of Japan; as the chances of a U.S.-Iran war (nonexistent during the Pahlavi dynasty, very low even after the Revolutionary Guards took the U.S. embassy hostages) were widely discussed in public; and as China and Taiwan continued their high-risk posturing regarding the fate of the island.

Richardson’s (1960) reasoning and the record of the past two centuries imply that during the next 50 years the likelihood of another armed conflict with potential to change world history is no less than about 15% and most likely around 20%. As in all cases of such probabilistic assessments, the focus is not on a particular figure but rather on the proper order of magnitude. No matter whether the probability of a new transformational war is 10% or 40%, it is one to two orders of magnitude higher than that of the globally destructive natural catastrophes that were discussed earlier in this chapter.

Before leaving this topic I must note the risks of an accidentally started transformational mega-war. As noted, we have lived with this frightening risk since the early 1950s, and it peaked during the height of the Cold War. Casualties of an all-out thermonuclear exchange between the two superpowers (including its lengthy aftermath) were estimated to reach hundreds of millions (Coale 1985). Even a single isolated miscalculation could have been deadly. Forrow et al. (1998) wrote that an intermediate-sized launch of warheads from a single Russian submarine would have killed nearly instantly about 6.8 million people in eight U.S. cities and exposed millions more to potentially lethal radiation.

On several occasions we came perilously close to such a fatal error, perhaps even to a civilization-terminating event. Nearly four decades of the superpower nuclear standoff were punctuated by a significant number of accidents involving nuclear submarines and long-range bombers carrying nuclear weapons, and by hundreds of false alarms caused by malfunctions of communication links, errors of computerized control systems, and misinterpretations of remotely sensed evidence. Many of these incidents were detailed in the West after a lapse of time (Sagan 1993; Britten 1983; Calder 1980), and there is no doubt that the Soviets could have reported a similar (most likely, larger) number.

The probabilities of such mishaps escalating out of control rose considerably during periods of heightened crisis, when a false alarm was much more likely to be interpreted as the beginning of a thermonuclear attack. A series of such incidents took place during the most dangerous moment of the entire Cold War, the October 1962 Cuban Missile Crisis (Blight and Welch 1989; Allison and Zelikow 1999). Fortunately, there was never any accidental launch, either attributable to hardware failure (crashing nuclear bomber, grounded nuclear submarine, temporary loss of communication) or to misinterpreted evidence. One of the architects of the Cold War regime in the United States concluded that the risk was small because of the prudence and unchallenged control of the leaders of the two countries (Bundy 1988).

The size of the risk depends entirely on the assumptions made in order to calculate cumulative probabilities of avoiding a series of catastrophic mishaps. Even if the probability of an accidental launch were just 1% in each of some 20 known U.S. incidents (the chance of avoiding a catastrophe being 99%), the cumulative likelihood of avoiding an accidental nuclear war would be about 82%, or, as Phillips (1998, 8) rightly concluded, “about the same as the chance of surviving a single pull of the trigger at Russian roulette played with a six-shooter.” This is at once correct reasoning and a meaningless calculation. As long as the time available to verify the real nature of an incident is shorter than the minimum time needed for a retaliatory strike, the latter course can be avoided, and the incident cannot be assigned any definite avoidance probability. If the evidence is initially interpreted as an attack under way but is entirely discounted a few minutes later, then in the minds of decision makers the probability of avoiding a thermonuclear war goes from 0% to 100% within a brief span of time. Such situations are akin to fatal car crashes avoided when a few centimeters of clearance between the vehicles makes the difference between death and survival. Such events happen worldwide thousands of times every hour, but an individual has only one or two such experiences in a lifetime, so it is impossible to calculate the probabilities of any future clean escape.
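Phillips's Russian-roulette arithmetic takes only a few lines to reproduce. The 1% per-incident probability is the text's illustrative assumption, not an empirical estimate:

```python
# Cumulative probability of avoiding an accidental launch across a series of
# independent incidents, each with the same assumed chance of escalation.
def cumulative_avoidance(p_escalation: float, incidents: int) -> float:
    return (1.0 - p_escalation) ** incidents

# 20 incidents at 1% each: about 82%, close to the 5/6 chance of surviving
# one pull of the trigger at six-shooter Russian roulette.
p = cumulative_avoidance(0.01, 20)
print(f"{p:.0%}")
```

The calculation is mechanically correct, which is exactly why the paragraph above stresses that its inputs — a fixed, knowable per-incident probability — are the meaningless part.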

Fig. 2.22
Number of U.S. and Soviet/Russian strategic warheads, 1945-2005. Plotted from data in NRDC (2006).

The demise of the USSR had an equivocal effect. On the one hand it undoubtedly diminished the chances of accidental nuclear war thanks to a drastic reduction in the number of warheads deployed by Russia and the United States. In January 2006, Russia had approximately 16,000 warheads compared to the peak USSR total of nearly 45,000 in 1986, and the United States had just over 10,000 warheads compared to its peak of 32,000 in 1966 (Norris and Kristensen 2006). Totals of strategic offensive warheads fell rapidly after 1990 to less than half of their peak counts (fig. 2.22), and the Strategic Offensive Reductions Treaty, signed in May 2002, envisaged further substantial cuts. On the other hand, it is easy to argue that because of the aging of Russian weapons systems, a decline in funding, a weakening of the command structure, and the poor combat readiness of the Russian forces, the risk of an accidental nuclear attack has actually increased (Forrow et al. 1998).

Moreover, with more countries possessing nuclear weapons, it is reasonable to argue that chances of accidental launching and near-certain retaliation have been increasing steadily since the beginning of the nuclear era. Since 1945 an additional nation has acquired nuclear weapons roughly every five years; North Korea and Iran have been the latest candidates. Even as the concerns about nuclear proliferation have been rising, an old kind of violence has assumed new and unprecedented prominence: terrorist attacks are rising to a level of globally transformative events.

Terrorist Attacks

I should start by emphasizing that this discussion is not about state-operated terror aimed at a ruling regime’s domestic enemies, a bloody ingredient of modern history that was elevated to governmental policy in revolutionary France (Wright 1990) and perfected during the twentieth century in many countries on four continents, but surely to the greatest extent and with the most ruthless reach in Stalinist Russia (Conquest 1990). My focus is on global transformations that can be effected by relatively small groups of internationally operating terrorists, that is, on global impacts arising from asymmetrical violence whereby a few can inflict serious damage on the many. Although possessing enormous arsenals of armaments and astonishing technical prowess, states find it very difficult to eliminate such threats or even to keep them within acceptable bounds.

There has been no shortage of terrorist actions (“politics by other means”) in modern history (Laqueur 2001; Carr 2002; Maxwell 2003; Sinclair 2003; Parry 2006), but as with classical armed conflicts, most of them have not risen to the level of globally transforming events. Following Rapoport’s (2001) division of the history of terrorism into four waves, it is clear that neither the first wave, begun in 1879 and dominated by nearly four decades of Russia’s narodnaya volya (the people’s will) assassinations (Geifman 1993; Hardy 1987), nor the second wave, extending from the 1920s to the 1960s and characterized primarily by terror in the service of national self-determination, had globally transformative effects.

More important, in the long run those terrorist actions did not make a great deal of difference even to the collective fortunes of the afflicted societies. In Russia, the narodnik killings were less important in the demise of the Czarist Empire than the tardiness of internal reforms, imperial overreach leading to conquest of the Muslim populations in Central Asia and in the Caucasus region between 1817 and 1864 (Yemelianova 2002), and the socioeconomic impacts of World War I.

As for the process of decolonization, it was driven primarily by the colonizers themselves, not by terrorist groups. And in Israel it was the Haganah, the predecessor of the Israel Defense Forces, rather than terrorist groups (Stern Gang, Irgun) that created modern Israel (Allon 1970).

Rapoport’s (2001) third terrorist wave, in the 1960s and 1970s, was much more far-reaching. It included the PLO (Palestine Liberation Organization), PFLP (Popular Front for the Liberation of Palestine), IRA (Irish Republican Army), Basque ETA, Italian Brigate Rosse, French Action Directe, German Rote Armee Fraktion, and the U.S. Weather Underground (Alexander and Myers 1982; Laqueur 2001; Parry 2006). These terrorists shared the rhetoric of struggle and violence and a predilection for airline hijacking and for obtaining weapons and explosives from the Soviet Union and its allies. Their exploits included such well-publicized actions of international terrorism as the PFLP’s hijackings of planes to Amman in September 1970, the Munich Olympics massacre of Israeli athletes in 1972, and the kidnapping of OPEC ministers in Vienna in 1975.

Yet, these terrorist actions did not have any collective global impact. They did not inflict irreparable economic damage on Ireland, the UK, Germany, Spain, or Italy (where the country’s Prime Minister, Aldo Moro, was kidnapped and murdered in 1978); did not bring about drastic social or political shifts within these societies; and did not prevent the countries’ successful integration into the European Union. The PLO was actually weakened by its radical stance; what looked like the pinnacle of its influence—hijacked airplanes, Arafat’s UN speech with holstered gun—was actually the beginning of its road to negotiations in Oslo and to the Madrid meeting of 1991.

Rapoport (2001) sees 1979 as the beginning of a fourth, still unfolding, wave of modern terrorism. That year saw the downfall of the Pahlavi dynasty and the rise of Ayatollah Khomeini’s theocracy in Iran as well as, symbolically, the beginning of a new century in the Islamic calendar—year 1400 since the hijra began at sundown of November 19, 1979, fifteen days after the Khomeini-inspired mob took over the U.S. embassy in Teheran. These events radicalized other shī’a societies and led directly to the establishment of Hizbullah in 1982, to its mass murder of U.S. marines in Beirut in 1983, and to its campaign of assassinations, kidnappings, and prolonged hostage takings.

And there was no lack of religiously motivated sunnī terrorism, beginning with a violent temporary takeover of the Grand Mosque of Mecca in 1979 and extending to bombings in North Africa, the Middle East (above all, in Egypt), the Philippines, and Indonesia. This wave was greatly reinforced during the decade of the struggle to remove the Soviet forces from Afghanistan. During the 1990s religiously inspired terror was behind the most frequent kind of indiscriminate attacks, suicide bombings in Israel (perpetrated usually by young men whose designation shahīd conveys both “witness” and “religious martyr”). Al Qaeda’s first suicide attacks, on U.S. embassies in Kenya and Tanzania, killed 301 people and injured more than 5,000 on August 7, 1998.

But, once again, by the year 2000 it would have been difficult to argue that these terrorist attacks had changed (or had a major role in changing) the fundamental ways of the Western world. I have always found it extraordinary that Hizbullah’s suicidal attack on the U.S. marines was met with no retaliation whatsoever, had no domestic political consequences, and led to no courts-martial; the U.S. response was only a rapid retreat. (In retrospect, if that event had been seen as the opening blast of modern anti-Western Islamic violence, history might have unfolded differently). Similarly, the reaction to the 1993 World Trade Center bombing was near-instant forgetting (despite the discovery of manuals detailing future attacks). And, remarkably, Israeli society has shown an enormous resilience through the years of random suicide bombings; if the U.S. population were attacked with such frequency (proportionately) more than 70,000 people would have died between 1968 and 2004 (fig. 2.23).
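The proportional comparison with the United States is a simple population scaling. The figures below are rough illustrative assumptions chosen for this sketch (approximate mid-period populations and an approximate Israeli toll), not the source's underlying data:

```python
# Scale Israel's 1968-2004 terrorism death toll to the size of the U.S.
# population. All three inputs are rough illustrative assumptions.
israel_deaths_1968_2004 = 1_600   # approximate toll over the period, assumed
israel_population = 6.5e6         # approximate population, assumed
us_population = 290e6             # approximate population, assumed

us_equivalent = israel_deaths_1968_2004 * (us_population / israel_population)
print(round(us_equivalent))  # on the order of 70,000 deaths
```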

Fig. 2.23
Equivalent lifetime (70-year) risk of casualty or death from terrorist attacks in Israel, the Near East (excl. Israel and Palestine), and the rest of the world, 1968-2004. Based on Bogen and Jones (2006).

The attacks of September 11, 2001, when 19 Islamic terrorists hijacked four commercial jetliners (two Boeing 767s and two Boeing 757s) and steered two of them into the Twin Towers of the World Trade Center and one into the Pentagon (the fourth crashed because of passenger resistance), changed everything. They elevated terrorism to the class of global catastrophic events. Through the world’s omnipresent visual media, they produced a horrific (and endlessly replayed) spectacle whose impact resides primarily in unforgettable impressions created by the attack’s execution rather than in total number of deaths or lasting economic impact. (I address comparative fatalities and economic consequences in some detail in chapter 5).

Similarly, the main reason for the post-9/11 response was not the ensuing economic damage, not even the tragic human toll, but the shock of the experience (the first attack on the U.S. mainland since the British raids during the War of 1812); the unmistakable symbolism of the attacks (striking at the seat of the global economy as well as at the power centers of the superpower); and the attendant fear of possible repeats and even more deadly strikes. The consequences of 9/11 can be understood only when looking far beyond the immediate casualties and near-term economic losses, and the same kind of broader perspectives are needed to assess future threats. Unfortunately, only a single conclusion can be reached with certainty, namely, that the oft-repeated post-2001 aspiration to eliminate terrorism (“winning the war on terror”) is unachievable. As Rapoport (2001, 424) put it, “Terrorism is deeply rooted in modern culture. Even if the fourth wave soon follows the path of its three predecessors, another inspiring cause is likely to emerge unexpectedly, as it has too often in the past.”

The real questions thus concern the likely extent, frequency, and impact of continuing terrorist attacks. Put differently, what is the likelihood that terror will become a recurrent factor (albeit at irregular and relatively lengthy intervals) in shaping global affairs or perhaps even a dominant preoccupation of the modern world? In the wake of the 9/11 attack, the modalities and consequences of future terrorist actions that would fall into the same class have been explored—from different perspectives, at once excessively yet insufficiently, with exaggeration as well as with inadequate appreciation of possibilities (Carr 2002; Silvers and Epstein 2002; Calhoun, Price, and Timmer 2002; Bennett 2003; Flynn 2004; Fagin 2006)—but their probability remains beyond useful quantification.

While nobody can assign meaningful numerical probabilities regarding the extent, frequency, and impact of future attacks, there is a great deal of historical and statistical evidence that offers some comforting and some disturbing conclusions. Any such assessment must begin with the troubling array of choices available to determined terrorists: cyberattacks on modern electronic infrastructures, poisoning of urban water or food supplies, decapitation of national leaders, dirty bombs, and release of old or new pathogens. Public policy and precaution dictate that none of these incidents, no matter how low their probabilities might seem, can simply be dismissed as unlikely. This very reality makes it very difficult to assess the relative likelihood of these different modes of attack.

But a skeptical appraisal must point out at least two important facts. Many of these attacks are not as easy to launch as media coverage would lead us to believe, and many of them, even if successfully executed, would have relatively limited impacts and would not rise to the level of global transformational events. A notable example is the use of nerve gas. As the fanatics of Japan’s Aum Shinrikyō discovered, it is not easy to disperse a nerve gas (sarin in their case) and kill a large number of people even in such a densely populated setting as Tokyo’s subway system, even if the people selected for dispersing the gas were well trained. A total of 12 people died as the result of the March 20, 1995, attack (Murakami 2001).

And the U.S. anthrax scare of October and November 2001 was primarily a matter of irresponsibly exaggerated fears. Indeed, many experts argue, the entire threat of bioterrorism has been vastly overblown (Enserink and Kaiser 2005). Given the large number of pathogens that could be used by terrorists (smallpox, plague, anthrax, botulism, tularemia, Venezuelan equine encephalitis, transmissible by mosquitoes from horses to humans) and the large number of ways in which the diseases can be spread (spraying viruses in shopping malls, using crop duster planes over cities), a continuous effort to reduce such risks to a negligible level is beyond our health and security resources.

And given the enormous amount of money that is now pouring into U.S. research on anthrax and smallpox (more than a half billion dollars in FY 2004 and FY 2005), the work on common pathogens that annually claim millions of lives worldwide is already being shortchanged. There is a much greater danger (counterintuitive but credible) that a bioterror agent could be released by a disgruntled employee of one of the 14 new research superlabs built to handle the most dangerous pathogens than that it would be carried to the United States in a suitcase by a fundamentalist zealot from the caves of Waziristan.

Worries about pathogens deployed by terrorists could be multiplied by considering the attacks with plant and animal diseases that offer perfect opportunities for low-tech, high-impact (in terms of economic cost) terrorism (Wheelis, Casagrande, and Madden 2002; Gewin 2003). Again, there is no shortage of possible pathogens, from Phytophthora fungus to Brucella suis to foot-and-mouth disease, but it might be more rewarding to ensure first that our food supply is free of unnecessary, preventable risks caused by dubious commercial choices (e.g., turning herbivorous cattle into cannibalistic carnivores, an insanity that brought us mad cow disease).

In general, the arguments about the high risks of terrorist actions, which are supposedly much easier to launch than spectacular mass murders but which might ultimately prove costlier and deadlier—attacks that Homer-Dixon (2002) defined as ingredients of complex terrorism—should be seen with a great deal of skepticism. Rather than killing people, terrorists would exploit the growing complexities of modern societies and deploy their “weapons of mass disruption” in attacking nonredundant nodes of critical infrastructures like electricity networks, other energy supply systems, chemical factories, or communication links. A skeptical riposte to this scenario: if such attacks are so easy to launch, why do we not see scores of them every month?

After all, it is impossible to safeguard against explosives every one of hundreds of thousands of steel towers that carry a large nation’s high-voltage lines; or to protect round-the-clock every one of tens of thousands of transforming substations; or to detect every poisoned kilogram among millions of tonnes of harvested crops; or to secure thousands of reservoirs and rivers that furnish drinking water.

A relatively rich experience with accidental large-scale electricity supply outages (caused by weather, human error, or technical problems) demonstrates that similar or even larger failures would not rise to the level of historic milestones. For example, who now remembers the great U.S. blackouts of 1965 and 1977? And the great U.S.-Canadian blackout of August 2003 provided a remarkable illustration of technical and social resilience. Caused by a series of preventable technical failures, it affected some 50 million people in the northeastern United States and Ontario, and it extinguished all traffic lights and stopped all subway trains in New York (Huber and Mills 2004). But the trading on Wall Street continued, the blackout did not lead to any catastrophic disruption of the region’s business or economic growth, and hospitals were able to maintain adequate care (Huber and Mills 2004). And despite hot weather, the crime rate actually dropped. In general, people made the best of their often taxing experiences, and if they looked upward, they were rewarded by a rare sight, dark Manhattan under a starry sky.

Use of the weapons of mass disruption would most likely amount to nothing but a spatially and temporally limited (though fairly expensive) nuisance, which would remain far below the threshold of events capable of modifying the course of global history. This may be the most important reason why terrorists have little interest in such attacks. The globally reverberating shock they seek can hardly be achieved by a temporary disruption of a city’s electricity supply or by killing a few thousand pigs. Launching new spectacular mega-attacks is clearly not easy; a detailed summary of worldwide terrorist attacks during the fourth year after 9/11 reinforces the conclusion that the global impact of terrorism in a “normal” year is at best marginal.

According to the National Counterterrorism Center (NCC 2006), the year saw 11,111 terrorist attacks, but in 5,980 of them (54%) there were no fatalities, 2,884 had a single fatality, and 1,617 had two to four fatalities. This means, fortunately, that in nearly 95% of all terrorist events the human impact was akin to the level of moderate to serious car accidents. The country with the second highest number of fatalities (after Iraq) was India, with 1,357 deaths, but this toll had no effect on the country’s overall economic performance and led to no discernible changes in its policies. Western Europe’s most deadly attack targeted three subway trains and a bus in London on July 7, 2005, killing 52 people, but except for some new policing and counterterrorism measures, it had a surprisingly small effect on the country’s affairs. Finally, given the prominence of the United States and its citizens as the targets of Islamic terror, it is notable that in 2005 only 0.4% (56 people) of all deaths due to terror attacks were U.S. citizens.
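The shares quoted above follow directly from the NCTC counts; a minimal check, using only the figures given in the text:

```python
# Distribution of fatalities across the 11,111 terrorist attacks recorded
# by the National Counterterrorism Center for the fourth year after 9/11
# (all counts are taken from the text above).
total = 11_111
no_fatalities = 5_980
one_fatality = 2_884
two_to_four = 1_617

print(f"{no_fatalities / total:.0%}")   # share of attacks with no deaths: 54%

# Attacks with fewer than five fatalities -- the "nearly 95%" of all events
# whose human impact was akin to moderate-to-serious car accidents.
share_low = (no_fatalities + one_fatality + two_to_four) / total
print(f"{share_low:.1%}")
```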

An extended perspective also does not support an image of terrorism as an extraordinarily risky, world-changing phenomenon. Analysis of 40 years of about 25,000 worldwide terrorist events shows about 34,000 deaths and 82,000 nonfatal injuries (Bogen and Jones 2006), a small fraction of the nearly 20 million fatalities caused by traffic accidents, and an annual average close to deaths from volcanic eruptions or airline accidents (fig. 2.24). But this comparison could change with a single new terrorist attack using a nuclear weapon or other effective means of mass destruction. Here we must confront several uncomfortable realities. The first is that even the most assiduous deployment of the best available preventive measures (smart policing, clever informants, globe-spanning electronic intelligence, willingness to undertake necessary military action) will not be able to thwart all planned attacks.

The second reality is that the most dangerous form of terrorist attacks cannot be deterred because the political and ideological motivations for terrorist attacks that characterized Rapoport’s (2001) first three waves of terror have blended with religious zealotry and become one with the Muslim concept of martyrdom, providing the perpetrators with an irresistible reward: instant access to paradise. Murder by suicide has deep roots in Muslim history, going back to the shī’a Nizari state of the eleventh through thirteenth centuries, which perfected the practice of skilled sacramental suicidal assassins (Lewis 1968; Andriolo 2002). Modern revival of the practice first sent thousands of Iranian boys and young men to their deaths during the war with Iraq in the early 1980s (Taheri 1986); at the same time it was adopted by the Lebanese shī‘ī Hizbullah as the principal tool of its terrorist attacks; and soon it was copied by the sunnī Hamas and Fatah in their fight with Israel; and by al-Qaeda in its quest for a new caliphate. Fighting this kind of commitment to terror is exceedingly difficult: “The suicide bomber’s thumb pressing the detonator simultaneously clocks him into paradise” (Andriolo 2002, 741).

Fig. 2.24
Global fatalities due to terrorist attacks compared to mortality from traffic and airline accidents, major natural disasters, and errors during hospitalization. Annual averages for 1970-2005 calculated from data in Bogen and Jones (2006), WHO (2004a), Boeing (2006), Swiss Re (2006b), and Kohn, Corrigan, and Donaldson (2000).

The third sobering consideration is that neither personal instability nor an individual’s hopelessness or overt personal defects, factors that come immediately to mind as the most likely drivers, is a dependable predictor of candidates for suicidal martyrdom (Atran 2003; 2004). Nor are such indicators as poverty, level of education (a typical shahīd is not an illiterate simpleton), or religious devotion (before their indoctrination, many youngsters are initially only moderately religious or even secular-minded). Institutional manipulation of emotional commitment (by organizing fictive kin groupings) seems to be a key factor, and one not easily eliminated. Other obvious contributions are rapidly rising youth populations in countries governed by dictatorial regimes with limited economic opportunities, and the disenchantment of second-generation Muslim immigrants with their host societies. But none of these factors can offer any selective guidance to identify the most susceptible individuals and to prevent their murderous suicides.

The fourth consideration is that our understandable fixation on suicidal missions may be misplaced. A dirty bomb containing enough radioactive waste to contaminate several downtown blocks of a major city and cause mass panic (as anything invisible and nuclear is bound to do) can be positioned in a place calculated to have a maximum impact (a building roof, a busy crossroad) and then remotely detonated. And while Hizbullah’s more than 30 days of rocket attacks on Israel in the summer of 2006 were not particularly deadly, they paralyzed a large part of the country and demonstrated how more conventional weapons could be used in the service of terrorism.

Imaginable Surprises

This is a nebulous category of events whose number is limited only by one’s inventiveness—and by the mesh of the sieve used to separate plausible events from imaginary constructs. Still, this category should not be dismissed as unhelpfully speculative but probed, within reason, in order to go beyond the appraisal of well-appreciated risks. Rather than following Joy (2000) or Rees (2003), with his concerns about the annihilation of the Earth through particle physics experiments, I engage in a much more plausible stretching of past events and do so, as with the just completed appraisals of probable catastrophic discontinuities, within two distinct categories of natural and human-made events.

I believe that our understanding of natural catastrophes leaves little room for events that were not already covered earlier in this chapter. Perhaps the only important exception is unusually rapid climate change—rapid in climatic terms, lasting years rather than centuries or millennia, and unusual because major uncertainties exist in the prevailing conclusions regarding past abrupt climate changes, and today’s biospheric conditions are not conducive to similarly abrupt shifts.

When the δ18O (normalized ratios of two oxygen isotopes, 18O and 16O) derived from Greenland ice cores are taken as proxies for local temperature change (they are so only within a factor of ~2), there is an unmistakable pattern of large, positive spikes correlated with episodes of rapid warming during the last glacial period (Stuiver and Grootes 2000). A widely held assumption is that these Dansgaard-Oeschger (D-O) events were hemispheric or global in extent and that they originated because of shifts in North Atlantic ocean circulation (Broecker 1997). But this genesis is dubious, and there is little solid evidence that they were anything more than local phenomena restricted to Greenland (most likely due to the interaction of the wind field with continental ice sheets) that ceased abruptly about 10,000 years ago with the progressing deglaciation of the Northern Hemisphere (Wunsch 2006). Ever since that time, δ18O variations have been remarkably stable, and there appears to be virtually no possibility that another D-O warming could take place during the next 50 years.

But is a reverse shift, rapid global cooling, any more plausible? The last rapid temporary cooling of the Younger Dryas period took place 12,700-11,500 years ago at the end of the last glacial cycle as the Earth was warming (Dansgaard, White, and Johnsen 1989). Because we are not sure how this cooling was triggered, we should include its possibility among imaginable natural discontinuities that could change the course of global history. For decades the consensus posited a massive flood of fresh water (~9,500 km³) pouring from the glacial Lake Agassiz via the Great Lakes to the North Atlantic and disrupting the normal thermohaline circulation (see fig. 5.1) (Johnson and McClure 1976; Teller and Clayton 1983).

This cool period, with temperatures ~15°C lower than today’s, began and ended abruptly, with warming of ~5°C-10°C spread over ~40 years, most of it over 5 years (Alley 2000). There is today no similarly massive body of fresh water that could suddenly spill into the Atlantic, but there is also no clear proof of that past massive eastward outflow from Lake Agassiz and no clear conclusion about the cooling’s trigger (Broecker 2003; 2006). There is no obvious geomorphic evidence for the postulated outlet toward the Great Lakes; a northern route (via Athabasca) appears very unlikely, as does a massive escape beneath the ice. Another explanation finds the source of fresh water in a precipitous melting of massed icebergs, and yet another rejects that trigger entirely and attributes the sudden cooling to a shift of wind patterns caused by a tropical temperature anomaly.

In the absence of any reliable explanation, it is easy to imagine that an as-yet-unidentified trigger might act once more to cause cooling of 5°C-10°C over several decades. This change would pose many challenges for the functioning of a modern civilization whose centers of economic activity and large shares of affluent populations are between 35°N and 55°N. As with the D-O-type warming episodes, such cooling must be seen as highly unlikely during the next 50 years. Our major preoccupation with climate will remain focused on a much slower (though rapid in geological terms) process of global warming.

Of imaginable catastrophic human-made surprises, none is as worrisome as the accidental or deliberate use of nuclear weapons. Simple logic dictates that the probabilities of nuclear accidents should rise as the number of nuclear nations increases and some of them fail to enforce strict precautions to avoid such mishaps. Although the probabilities of such accidents remain uncertain, they may surpass by several orders of magnitude the likelihood of any known global natural catastrophes. It is the same with the probabilities of deliberate nuclear war among minor (or future) nuclear states. Inevitably, India and Pakistan come to mind.

Additional frightening nuclear scenarios can be imagined: Pakistani bombs fall into the hands of extremists bent on realizing Usama bin-Lādin’s favorite project of a Muslim caliphate extending from Spain to Indonesia; or the Russian nuclear arsenal (perhaps after a collapse of global energy prices, prolonged worldwide recession, and a drastic pauperization of Russia that would be simultaneously terrorized to an unprecedented extent by attacks from its Muslim fringe) is controlled by a reckless, nationalist, anti-Western regime (remember Vladimir Zhirinovsky?). But the most worrisome among the imaginable futures of deliberate nuclear use is the Iranian determination, voiced in no uncertain terms by Ayatollah Khomeini (cited in Lewis 2006):

I am decisively announcing to the whole world that if the world-devourers wish to stand against our religion, we will stand against the whole world and will not cease until the annihilation of all of them. Either we all become free, or we will go to the greater freedom which is martyrdom. Either we shake one another’s hands in joy at the victory of Islam in the world, or all of us will turn to eternal life and martyrdom. In both cases, victory and success are ours.

This could be dismissed as just another example of hyperbolic, apocalyptic preaching, but it may be prudent to take it seriously. After all, Hitler’s Mein Kampf (1924) turned out to be a programmatic statement. Khomeini’s version of freedom means turning the whole world into a medieval theocracy, and his definition of Islamic victory leaves no space for compromise arrangements or the fear of mutually assured destruction that has restrained the two superpowers. Even Stalin did not court death by U.S. bombs. But to Khomeini and the president of Iran, Mahmoud Ahmadinejad, assured death in a retaliatory nuclear strike appears to be a shortcut to martyrdom.

A new threat may come from accidental escapes or unintended mutation of bacteria or yeast that will be engineered from life’s fundamental genetic components in order to create artificial species with superior abilities for energy generation, enzymatic processing, or food and drug production. As an editorial in Nature (Futures of artificial life 2004) put it, “This is no longer a matter just of moving genes around. This is shaping life like clay,” and such manipulations must evoke unease and concern, Craig Venter’s boisterous pronouncements (about writing and not just reading the code of life) notwithstanding.

Finally, musings on imaginable catastrophes should not fail to note that given the multitude of viruses and their inherent mutability, there is always the possibility of a new pathogen as potent as HIV (or even more virulent) and as easily transmissible as influenza. And given the history of bovine spongiform encephalopathy, or mad cow disease, it is also possible to posit new prions (proteinaceous infectious particles) that might spread a new version of transmissible spongiform encephalopathy. The consequences of such eventualities are truly frightening to contemplate.
