Mihi contuenti semper suasit rerum natura nihil incredibile existimare de ea.
(When I have observed nature, she has always induced me to deem no statement about her incredible.)
—Gaius Plinius Secundus (Pliny the Elder), Naturalis Historia, XI.ii, 6
There is nothing new about large-scale impacts of human activities on the biosphere. Conversion of forests, grasslands, and wetlands to crop fields, along with deforestation driven by the need for wood and charcoal to heat homes and smelt metals and for lumber to construct cities and ships, changed natural ecosystems on a grand scale in preindustrial Europe and Asia. Even the pre-1492 American societies had a greater impact on their environment than previously surmised. A common assumption has been that these changes transformed the environment only on a local or regional scale, deforestation in the Mediterranean countries or in North China being perhaps the best-known examples.
But Ruddiman (2005) argues that human beings began to influence global climate as soon as Neolithic populations adopted shifting agriculture. Biomass burning released CO2, and later settled agricultures, through flood irrigation and animal husbandry, became a major source of methane, a gas whose atmospheric warming potential is an order of magnitude greater than that of CO2. Despite their relatively small numbers, humans thus hijacked the entire global climate system, and they have remained in control ever since. A more conventional interpretation dates the onset of significant human interference in global climate to the latter half of the nineteenth century, when rapidly rising combustion of fossil fuels and massive conversion of natural ecosystems to farmlands in the Americas, Asia, and Australia led to large emissions of CO2. In any case, by the 1990s global warming driven by anthropogenic emissions of greenhouse gases had become the most prominent environmental concern because of its historically unprecedented nature and its likely impacts.
There are many other worrisome large-scale environmental changes. Excessive soil erosion is increasing even as most countries keep losing prime farmland to nonagricultural uses. The global water cycle will always be dominated by massive evaporation from the oceans, but human actions have altered its local, regional, and national flows to such an extent that water supplies are near or below adequate levels in many densely populated countries. The most uncertain impact of human activities on the ocean is the extent to which global warming will reduce the polar ice cover and the great ice sheets of Greenland and Antarctica.
The biogeochemical carbon cycle has received the most attention because of its role in global warming, but human activities have changed the biospheric cycle of nitrogen much more. Losses of biodiversity could be aggravated by global warming, but they would proceed even with a stable climate, as would the spread of invasive species.
Finally, I note an invisible but potentially deadly environmental change. Pandemic-carrying viruses (see chapter 2) are not the only constantly mutating microorganisms. Bacteria, microbes of far greater complexity, are nearly as adept at defeating our controls, and they, too, add to a growing list of emerging infections (Morens, Folkers, and Fauci 2004). The most worrisome result of bacterial mutations is the emergence and diffusion of antibiotic-resistant strains of potentially lethal bacteria. In most hospitals there are now only one or two antibiotics that stand between us and an incurable attack of antibiotic-resistant microbes.
The greenhouse effect is indispensable for life on the Earth; it is the weakness or excessive strength of the effect that is a matter of concern. The effective radiative (blackbody) temperature of a planet without an atmosphere is simply a function of its albedo (the share of incoming radiation that is directly reflected back to space) and its orbital distance. The Earth (albedo 30%) would radiate at -18°C, compared to -57°C for Mars and -44°C for Venus, and all these planets would have permanently frozen surfaces. A planet ceases to be a perfect radiator as soon as it has an atmosphere some of whose gases—above all, water vapor, CO2, CH4, and nitrous oxide (N2O)—can selectively absorb part of the outgoing infrared radiation and reradiate it both downward and upward. Such an atmosphere is highly (though not perfectly) transparent to incoming (shortwave) solar radiation, but it is a strong absorber of certain wavelengths of the outgoing (longwave) infrared radiation (IR) that is produced by the reradiation of absorbed sunlight. This absorption of IR is known as the greenhouse effect.
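The blackbody temperature follows from balancing absorbed sunlight against blackbody emission, T = [S(1 - A)/4σ]^(1/4). A minimal numerical sketch (the solar-constant value is a standard round figure, an assumption rather than a number from the text):

```python
# Effective (blackbody) radiating temperature of a planet with no atmosphere:
# T_eff = [S * (1 - A) / (4 * sigma)]^(1/4), where S is the solar constant
# at the planet's orbit (W/m^2), A its albedo, and sigma the
# Stefan-Boltzmann constant. S = 1361 W/m^2 is an assumed standard value.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temp_c(solar_constant, albedo):
    """Return the blackbody temperature in degrees Celsius."""
    t_kelvin = (solar_constant * (1 - albedo) / (4 * SIGMA)) ** 0.25
    return t_kelvin - 273.15

# Earth with albedo 0.30 comes out near the -18 deg C quoted above.
print(round(effective_temp_c(1361, 0.30)))
```

The same function reproduces the other planetary values once their solar constants and albedos are supplied.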
In the absence of water vapor on the Earth’s two neighbors, it is the presence of CO2 that generates their greenhouse effect, a very strong one on Venus (average surface temperature 477°C) and a very weak one on Mars (average -53°C). The Earth’s actual average surface temperature of 15°C is 33°C higher than its blackbody temperature, with the absorption by water vapor accounting for almost two-thirds of the difference (fig. 4.1). CO2 accounts for nearly one-quarter of the temperature difference; CH4 and N2O come next, with minor natural contributions from NH3, NO2, HNO3, and SO2 (Ramanathan 1998). A greenhouse effect amounting to at least 25°C-30°C had to operate on the Earth for nearly 4 billion years for life to evolve.
But water vapor, the Earth’s most important greenhouse gas, could not have been responsible for maintaining a relatively stable climate because its changing atmospheric concentrations amplify rather than counteract departures of the surface temperatures: water evaporation declines as the atmosphere cools and rises as the climate warms. In addition, changes in soil moisture do little to alter the rates of chemical weathering. Only long-term feedbacks between fluctuating CO2 levels, surface temperature, and the weathering of silicate minerals explain the surprisingly limited variability of the mean tropospheric temperature. Lower tropospheric temperatures and decreased rates of silicate weathering result in gradual accumulation of the emitted CO2 and in subsequent warming (Gregor et al. 1988; Berner 1998).
Scientific understanding of the greenhouse gas phenomenon goes back to the 1820s (Fourier 1822), and the consequences of human interference in the process were grasped correctly by Svante Arrhenius, one of the first Nobel prize winners in chemistry, during the 1890s. Remarkably, his conclusions contained all the key qualitative ingredients of modern understanding: geometric increases of CO2 will produce a nearly arithmetic rise in surface temperatures; the resulting warming will be smallest near the equator and highest in polar regions; the Southern Hemisphere will be less affected; and the warming will reduce temperature differences between night and day (Arrhenius 1896).
Modern interest in this fundamental biospheric process began with a classic paper by Revelle and Suess (1957), which characterized fossil fuel combustion as “a large-scale geophysical experiment of a kind that could not have happened in the past nor be reproduced in the future” (18). An important response to this concern was the setting up of the first two permanent stations for the measurement of background CO2 concentrations, near the top of Mauna Loa in Hawaii and at the South Pole. Measurements from these baseline monitoring stations began showing a steady rise of atmospheric CO2, but a global cooling trend continued during the 1960s and early 1970s. The possibility of rapid anthropogenic global warming became a matter of considerable public attention only during the late 1980s, and it is now undoubtedly the most prominent worldwide environmental concern.
This attention has not been able to eliminate many fundamental uncertainties. The phenomenon itself is complex (the relation between the atmospheric concentration of greenhouse gases and the mean tropospheric temperature is nonlinear, and it is subject to many interferences and feedbacks), and any appraisals of future impacts are immensely complicated by two key uncertainties: because the future rate of greenhouse gas emissions is a function of many economic, social, and political variables, we can do no better than posit an uncomfortably wide range of plausible outcomes. And because the biospheric and economic impacts of higher temperatures will be both counteracted and potentiated by numerous natural and anthropogenic feedbacks, we cannot reliably quantify either the extent or the intensity of likely consequences.
Arrhenius (1896) predicted that once the atmospheric levels of CO2 (at that time at about 290 ppm) doubled, to nearly 600 ppm, the average annual temperature increase would be 4.95°C in the tropics and just over 6°C in the Arctic. His findings applied to natural fluctuations of atmospheric CO2. He concluded (correctly) that future anthropogenic carbon emissions would be largely absorbed by the ocean and (incorrectly; he grossly underestimated the rate of future fossil fuel combustion) that the accumulation would amount to only about 3 ppm in half a century. Just before World War II, Callendar (1938) calculated a 1.5°C rise with the doubling of preindustrial CO2 levels and documented a slight global warming trend of 0.25°C for the preceding half a century. The first computerized calculation of the radiation flux in the main infrared region of CO2 absorption indicated an average surface temperature rise of 3.6°C with the doubled atmospheric CO2 (Plass 1956).
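Arrhenius’s central qualitative insight, that geometric increases of CO2 produce a nearly arithmetic rise in temperature, amounts to a logarithmic relation between concentration and warming. A minimal sketch of this relation (the sensitivity-per-doubling value is an illustrative assumption, not a figure from the text):

```python
import math

# Logarithmic CO2-temperature relation: each doubling of CO2 adds the
# same temperature increment. The 3.0 deg C per doubling used here is
# an assumed illustrative sensitivity, and 290 ppm is the late-19th-century
# baseline mentioned above.
def warming(c_ppm, c0_ppm=290, sensitivity_per_doubling=3.0):
    """Equilibrium warming (deg C) for CO2 at c_ppm relative to c0_ppm."""
    return sensitivity_per_doubling * math.log(c_ppm / c0_ppm, 2)

# One doubling (290 -> 580 ppm) yields exactly the chosen sensitivity;
# a second doubling (580 -> 1160 ppm) adds the same increment again.
print(warming(580))   # 3.0
print(warming(1160))  # 6.0
```

The geometric sequence of concentrations (290, 580, 1160 ppm) thus maps onto the arithmetic sequence of temperature gains (3°C, 6°C), which is what Arrhenius deduced by hand.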
Atmospheric CO2 levels are now known for the past 650,000 years thanks to the ingenious analyses of air bubbles from ice cores retrieved in Antarctica and in Greenland. During that period CO2 levels never dipped below 180 ppm and never rose above 300 ppm (fig. 4.2) (Raynaud et al. 1993; Petit et al. 1999; Siegenthaler et al. 2005). More important, during the time between the rise of the first advanced civilizations (5,000-6,000 years ago) and the beginning of the fossil fuel era, atmospheric CO2 levels fluctuated within an even narrower range of 250-290 ppm. Continuous measurements at Mauna Loa began with an annual mean of about 315 ppm in 1958, and by 2005 the atmospheric CO2 level surpassed 380 ppm, nearly 20% above the 1958 level (see fig. 4.2) (Blasing and Smith 2006).
By contrast, worldwide instrumental temperature measurements (with large spatial gaps) are available only since the 1850s (more extensive coverage began only after WW II), and hence all pre-1850 reconstructions of global temperature means have to rely on a variety of proxy variables such as tree rings, ice cores, bore hole temperatures, glacier lengths, and historical documents. The best available reconstructions indicate broadly similar anomalies during the past 1,000 years. Warming during the early Middle Ages (900-1100 C.E.) was followed by noticeable temperature declines culminating in a relatively cool period (Little Ice Age), 1500-1850. Subsequent instrumental records confirm a relatively rapid global warming; the gain was nearly 0.8°C (0.57°C-0.95°C) between 1850 and 2000 (IPCC 2007). The temperature increase has been accelerating. The updated 100-year trend for 1906-2005 is 0.74°C, and during the past 30 years the gain has averaged 0.2°C/decade (Hansen, Sato et al. 2006).
Consequently, it can be stated with a high degree of confidence that the mean temperatures during the closing decades of the twentieth century were higher than at any time during the preceding four centuries, and it is very likely that they were the highest in at least the past 13 centuries (NRC 2006b; IPCC 2007). However, the NRC (2006b) review of temperature reconstructions also concludes that little confidence can be placed in the conclusion by Mann et al. (1999) that the 1990s were the warmest decade and that 1998 was the warmest year in at least a millennium. The unreliability of all pre-1600 temperature reconstructions does not allow such a claim. This conclusion is supported by the best available reconstruction of temperature variations from China’s rich proxy indicators, which shows that the closing decades of the twentieth century were no warmer than the warmest decades of the medieval warm period during the eleventh century (B. Yang et al. 2002).
Looking ahead, recent emission scenarios (assuming various rates of fossil fuel combustion) result in a wide range of CO2 concentrations, 540-970 ppm by the year 2100. These would not be the highest-ever concentrations of CO2. Boron-isotope ratios of planktonic foraminifer shells point to levels above 2000 ppm 60-50 Ma ago (with peaks above 4000 ppm), followed by an erratic decline to less than 1000 ppm by 40 Ma ago. But since the early Miocene, 24 Ma ago, atmospheric levels of CO2 have been relatively stable and low, remaining below 500 ppm (Pearson and Palmer 2000). Consequently, continued large-scale combustion of fossil fuels could increase atmospheric CO2 to levels unseen since large herds of horses and camels grazed on the grassy plains of America.
This preoccupation with CO2 misses nearly half of the problem. In 2005 slightly over half (1.66 W/m2) of the post-1880 anthropogenic radiative forcing, averaging globally about 3 W/m2, has been due to CO2, with CH4, chlorofluorocarbons (CFCs), O3, and N2O (all less common but more potent greenhouse gases) contributing the rest (fig. 4.3) (Hansen, Nazarenko et al. 2005; IPCC 2007). Leaving CFCs aside (their production was outlawed by the 1987 Montreal Protocol), the other greenhouse gases accounted for at least one-third of all positive forcing, so we cannot talk only about CO2 doubling but must express the overall warming potential in CO2 equivalents.
The anthropogenic forcing process is also heavily influenced by nongaseous drivers of climate change and by intricate feedbacks. During the past 150 or so years the opposite effects of land use changes (mainly deforestation, which increases albedo) and reduced snow cover (which reduces it) have nearly canceled each other. But higher solar irradiance added 0.12 (0.06-0.30) W/m2, black carbon from combustion contributed roughly twice as much, and reflective tropospheric aerosols lowered the overall forcing (directly and by changing cloud albedo) by about 1.2 W/m2. Since 1750 the net anthropogenic radiative forcing has thus amounted to about 1.6 W/m2, and it raised average global temperature by about 0.8°C compared to the preindustrial level (see fig. 4.3).
In order to forecast the additional warming that might take place by the year 2050 we must rely on a set of highly uncertain assumptions. We do not know the future positive forcing because we cannot know the future rates of fossil fuel combustion, land use changes, fertilizer use, and meat production. These will depend on the continuing increases of energy use, the extent of discoveries of new hydrocarbon deposits, the rates of penetration of nonfossil energy conversions, national land use policies, disposable incomes, and the overall vitality of the global economy. A multitude of possible outcomes based on these variables opens an unhelpfully wide range of possibilities.
Whatever the outcome, it must first be adjusted for the effects of future negative forcing caused by anthropogenic aerosols. The latest satellite measurements put them at -1.9 W/m2 in 2005 (Bellouin et al. 2005). This rate, higher than assumed by most climate change models, implies greater future warming than predicted if worldwide particulate emissions continue to decline, as expected. A complete elimination of today’s aerosol effect would boost the positive forcing by about 60%, and hence even partial controls will have a major effect. Beyond this, we are reduced largely to speculation when thinking about the net effects of numerous feedbacks unleashed by rising temperatures.
To what extent will higher temperatures (accompanied by an intensified water cycle) boost global photosynthesis, and to what extent will this additional production be sequestered in long-lived carbon compounds (tree trunks, decay-resistant soil organic matter)? How much additional carbon will be released from soils whose organic matter will be subject to faster rates of decomposition in a warmer world? Data from two National Soil Inventories (1978 and 2003) in England and Wales have already found that annual carbon losses (relative to the existing soil carbon) average 0.6% regardless of land uses, with the maximum rate of more than 2% in carbon-rich soils (>100 g C/kg) (Bellamy et al. 2005). Perhaps even more critically (given the fact that CH4 is a much more powerful greenhouse gas than CO2), how much methane will be released from warmer sub-Arctic and Arctic wetlands and lakes? In one Siberian region these rates may already be five times higher than previously estimated (K. M. Walter et al. 2006).
We cannot do better than offer the most plausible ranges of outcomes based on the best available global climate models and increasingly complex simulations that incorporate interactions between the atmosphere, hydrosphere, and biosphere. Their results are very similar to the simple calculations offered by Arrhenius during the 1890s, by Callendar in the late 1930s, and by Plass in the mid-1950s. The first report by the Intergovernmental Panel on Climate Change put the increase of average surface temperature at 1°C-5°C by 2100 (IPCC 1990), the second narrowed the range to 1°C-3.5°C (IPCC 1995), the third widened it to 1.4°C-5.8°C (IPCC 2001), and the latest narrows it to 2°C-4.5°C, with a best estimate of about 3°C; temperature increases of less than 1.5°C are considered very unlikely, and values substantially above 4.5°C are impossible to exclude (IPCC 2007).
Some studies conclude that climate sensitivity, defined as the equilibrium temperature response to a doubling of CO2 (or its equivalent), may be substantially higher than the IPCC’s upper bound, perhaps even higher than 11°C (Stainforth et al. 2005). Considering that these are global means, the differences between the ecosystemic, health, and economic impacts of 1.5°C and 10°C warming would be enormous. In the first case, warming of ~0.15°C/decade would actually be a bit lower than the mean since the 1970s (~0.2°C/decade), and most societies could cope well with such a relatively moderate change through gradual adaptation. In the second case, the mean decadal warming would surpass the warming experienced during the entire twentieth century (0.74°C), and most societies would find it impossible to adapt to changes precipitated by such a rapid and pronounced temperature rise.
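The arithmetic behind this contrast is simply a century-scale warming total expressed as a mean decadal rate:

```python
# Convert a total warming over some period into a mean rate per decade.
def decadal_rate(total_warming_c, years=100):
    """Mean warming rate in deg C per decade."""
    return total_warming_c / (years / 10)

# 1.5 deg C per century is 0.15 deg C/decade, below the ~0.2 deg C/decade
# observed since the 1970s; 11 deg C per century is 1.1 deg C/decade,
# i.e., more each decade than the whole twentieth-century gain (0.74 deg C).
print(decadal_rate(1.5))   # 0.15
print(decadal_rate(11.0))  # 1.1
```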
Recent studies relying on several types of evidence (better reconstructions of past mean annual temperature, density of tree rings, temperature changes following volcanic eruptions) have been able to constrain the most likely range of climate sensitivity. Hegerl et al. (2006) reduced the 5%-95% range of climate sensitivity to 1.5°C-6.2°C and the 20%-80% range to 1.6°C-3.8°C, with a median value of 2.6°C. Annan and Hargreaves (2006) suggested an even tighter 5%-95% range of 1.5°C-4.5°C and found it impossible to assign a significant probability to any warming exceeding 6°C. If these findings hold, the prospects appear much more clearly defined. There would be a very low probability that twenty-first-century warming would be comparable to the twentieth-century temperature increase (<1.5°C), but highly worrisome increases in excess of 5°C would be almost as unlikely, and the range of 2.5°C-3°C would be most probable (fig. 4.4).
The qualitative consequences are clear. This climate change would cool the stratosphere and raise tropospheric temperatures, with the warming more pronounced on land (and at night), about two to three times the global mean in high latitudes in winter, and greater in the Arctic than in the Antarctic. But even a perfect knowledge of climate sensitivity would not make it possible to quantify many potential effects of global warming because it could not eliminate a multitude of uncertainties concerning the eventual climatic, ecosystemic, health, and economic impacts of the change. There are many other variables and complex feedbacks that will codetermine the eventual consequences of global warming. Consequently, even our most complex models are only elaborate speculations. We may get some particulars right, but it is beyond us to have any realistic appreciation of what a world with an average temperature 2.5°C warmer than today’s would be like.
If one were to assess the future of global warming based on recent headlines—“Be worried, Be very worried,” “Earth at the tipping point,” “Climate nears point of no return”—the outlook would be dismal indeed. Unfortunately, many scientists have gone along with this sensationalizing wave and have not stressed enough the limitations of our knowledge. Some researchers have gone even farther by presenting extreme possibilities as unavoidable futures. I do not wish to add to this dubious genre and thus present only the best available evidence and point out important uncertainties regarding the major impacts of future global warming.
The ocean covers roughly 70% of the Earth’s surface. It is the key regulator of global climate and the dominant source of the planet’s precipitation. It also provides a convenient medium for inexpensive, long-distance transportation and supplies nearly 10% of the world’s food protein. An increasing share of humanity lives along or near its shores. Clearly, any significant changes to the ocean’s properties and to its mean level are bound to have a major impact on the fortunes of modern civilization. A warmer atmosphere will inevitably produce a warmer ocean, and this transfer is already measurable.
Barnett et al. (2005) found that since 1960 about 84% of the total human-induced heating of the Earth (oceans, atmosphere, cryosphere, uppermost crust) has gone into warming the oceans. The strength of this warming signal varies by ocean and depth. North and South Atlantic warming, by as much as 0.3°C, reaches as deep as 700 m, whereas Pacific and Indian Ocean warming is mostly limited to the top 100 m. This warming may strengthen the ocean’s natural thermal stratification and hence weaken the global overturning circulation, but its most immediate consequence is the thermal expansion of water, the most important contributor to mean sea level rise. A higher sea level would not only encroach directly on existing infrastructures but also accelerate the rates of coastal erosion, increase the damage due to storm surges, and contaminate coastal aquifers with salt water.
The mean sea level has been rising ever since the last maximum glaciation 21,000 years ago, when it was 130 m lower than today (Alley et al. 2005). Tide gauge measurements indicate a total twentieth-century rise of 17 (12-22) cm, an increase in the average annual rate compared to the nineteenth century. The rate was about 1.8 mm/year between 1961 and 2003, and it accelerated to 3.1 mm/year between 1993 and 2003 (IPCC 2007). Thermal expansion of the ocean currently accounts for a rise of at least 0.5 mm/year, and ocean mass change contributes 1.4 ± 0.5 mm/year (Lombard et al. 2005). IPCC models have a range of 18-59 cm of additional sea level rise during the twenty-first century, with most of the increase coming from thermal expansion and the rest from the melting of mountain glaciers and ice sheets. Narrowing the estimates of future sea level rise is extremely difficult. The entire process exemplifies the many uncertainties facing assessments of environmental change produced by global warming.
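One way to see why these estimates are so difficult is to add up the quoted components of the current rise and compare the sum with the observed rates, a rough budget check rather than a rigorous accounting:

```python
# Rough sea level budget from the figures quoted in the text (mm/year).
thermal_expansion = 0.5   # "at least 0.5 mm/year"
ocean_mass_change = 1.4   # 1.4 +/- 0.5 mm/year (Lombard et al. 2005)

total = thermal_expansion + ocean_mass_change
# The sum (1.9 mm/yr) is close to the 1.8 mm/yr observed for 1961-2003
# but well short of the 3.1 mm/yr observed for 1993-2003, which is one
# reason the budget is hard to close.
print(round(total, 1))  # 1.9
```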
Sea level rise during the coming generations will be determined by the uncertain rate at which polar ice sheets retreat or parts of them actually grow. One of the few consequences of global warming that we can predict with high confidence is that higher tropospheric temperatures will increase evaporation, intensify the global water cycle, and produce an overall (but unevenly distributed) increase in precipitation. Polar regions should be major beneficiaries of this trend because they will receive more snow.
Some recent observations and simulations show that this is exactly what is taking place, whereas others obtain exactly opposite results. While Greenland’s ice sheet margins have been thinning and its coastal glaciers receding, altimeter data for the decade 1992-2003 confirm an average ice increase of 6.4 cm/year in Greenland’s vast interior areas above 1500 m (fig. 4.5) (Johannessen et al. 2005). Satellite altimetry indicates that between 1992 and 2003 increased precipitation added 45 billion t/year to the East Antarctic ice sheet interior, enough to slow sea level rise (currently ~1.8 mm/year) by 0.12 mm/year (Davis et al. 2005).
By contrast, Doran et al. (2002) found that between 1966 and 2000 the Antarctic continent experienced net cooling of 0.7°C/decade, and Monaghan et al. (2006) concluded (based on ice core observations and simulations) that there has been no significant change in Antarctic snowfall during the past 50 years, which means that there is no mitigation of global sea level rise. But these findings are hard to reconcile with a comprehensive evaluation by Zwally et al. (2005), which shows that between 1992 and 2002 the thinning of Greenland’s marginal ice sheet (-42 Gt/year) and the growth of inland ice (+53 Gt/year) resulted in an overall mass gain of about 11 Gt/year and that western Antarctica was losing mass at 47 Gt/year while eastern Antarctica was gaining 16 Gt/year, for a net loss of 31 Gt/year.
Velicogna and Wahr (2006b) used scaled and adjusted satellite gravity surveys to show the Antarctic ice sheet losing 138 ± 72 Gt/year, an annual equivalent of 0.4 ± 0.2 mm of global sea level rise. In another study, Velicogna and Wahr (2006a) used satellite gravity surveys to conclude that Greenland is losing ice mass at a rapidly increasing rate (up to 250% when comparing the 2002-2004 period with 2004-2006) and that the 2006 rate of 224 ± 33 Gt/year is equivalent to a global annual sea level rise of 0.5 mm. Even assuming that the complex corrections needed to convert the gravity data (whose spatial resolution is no better than a few hundred kilometers) into mass equivalents do not introduce major errors, these results, showing much higher losses than comparable surveys, cannot be interpreted as a definite trend. Many more years of unassailable observations are needed for that.
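The conversion underlying these equivalences divides an ice-sheet mass change by the area of the world ocean. A sketch using standard round values (the ocean area and the 1 Gt ≈ 1 km³ equivalence are assumptions, not figures from the text; they imply roughly 360 Gt of ice loss per mm of sea level rise):

```python
# Convert an ice-sheet mass change (Gt/year) into mm/year of global
# mean sea level rise. Assumes an ocean area of ~3.61e8 km^2 and
# 1 Gt of meltwater = 1 km^3 (density 1000 kg/m^3).
OCEAN_AREA_KM2 = 3.61e8

def gt_per_year_to_mm(gt):
    """Sea level equivalent (mm/year) of a mass flux in Gt/year."""
    km_rise = gt / OCEAN_AREA_KM2  # spread the volume over the ocean
    return km_rise * 1e6           # km -> mm

# The 138 Gt/year Antarctic loss quoted above works out to ~0.38 mm/yr,
# consistent with the cited 0.4 +/- 0.2 mm.
print(round(gt_per_year_to_mm(138), 2))
```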
And to make matters even more confusing, Zwally et al. (2005) also found that while the western Antarctic shelf was losing 95 Gt/year, the eastern Antarctic shelf was gaining 142 Gt/year, for an annual net gain of 32 Gt. This mass is virtually identical to the annual net loss from the continent’s ice sheets and hence an overall neutral balance for Antarctica’s ice sheets and shelves. Clearly, some of these contradictory conclusions must be wrong. All of them require careful interpretations and adjustments (a tricky example is correcting the radar altimetry data for changes in temperature-driven firn compaction), and all of them have large margins of error.
According to Zwally et al. (2005), Antarctic ice sheets may be shedding as little as 19 Gt/year and as much as 43 Gt/year, and they put the net gain for the Greenland ice sheet at 11 Gt/year. Box, Bromwich, and Bai (2004) have the Greenland ice sheet (for the nearly identical period 1991-2000) losing 70 Gt/year. Luthcke et al. (2006) found an overall 113 Gt/year ice loss for Greenland. In addition, these processes have large interannual to decadal variability, and spans of 10 or 11 years are insufficient to confirm any significant long-term trends. Simulations by Box, Bromwich, and Bai (2004) put the interannual variability for Greenland’s entire ice sheet at ±168 Gt, a rate much larger than the best estimates of recent net flows.
I review these complex, uncertain, and contradictory results in such detail in order to make a strong point: in this case (and in many other similar instances) it is irresponsible to draw any definite long-term conclusions from these conflicting findings. Predicting the future of ice sheets is inherently difficult, and the uncertainty about their state may persist for some time (Vaughan and Arthern 2007). Not surprisingly, the IPCC offers a rather broad range for the mean sea level increase during the twenty-first century; its most likely estimate (+50 cm) is bracketed by a range that is 80% as large as the mean (±40 cm). About 10 cm of this rise would come from the melting of mountain glaciers and a limited retreat of ice caps. However, Raper and Braithwaite (2006) reexamined the evidence and concluded that mountain glaciers and ice caps might contribute only about 5 cm, or about half the previously projected rate. Given these manifold uncertainties, a cautious conclusion would be for a mean sea level rise of ~15 cm (maximum 20 cm) by 2050, clearly a noncatastrophic change.
And here is just one more detail on the sea level rise. The Maldives, a group of small islands comprising some 20 atolls in the central Indian Ocean, have been used as a prominent example of the imminent danger of submergence. Because their elevation is only 1-2 m above sea level it was expected that they would disappear in 50, or at most 100, years. However, a detailed on-site examination found no sea level rise, and the conclusion was, “The people of the Maldives are not condemned to become flooded in the near future” (Mörner 2004, 149). Just the opposite is true because the Maldives have experienced a sea level fall since the early 1970s, most likely due to increased evaporation and intensified monsoon flow across the central Indian Ocean. But White and Hunter (2006) found no evidence of the sea level fall at the Maldives and confirmed that at 2 mm/year the mean sea level rise for tropical Pacific and Indian Ocean islands is close to the best estimate for the global mean.
By far the ocean’s most important compositional change is increasing acidity, caused by rising emissions of CO2 and falling carbonate ion concentrations (Royal Society 2005). Since the onset of industrialization the oceans have absorbed about half of the CO2 emitted from fossil fuel combustion, and the dissolution of this gas and formation of carbonic acid lowered their average pH (8.2 ± 0.3 due to local and seasonal variations) by about 0.1 unit (Caldeira and Wickett 2003). As the surface ocean absorbs additional CO2, its pH will keep falling, and it may decline by as much as 0.4 units by 2100 (Orr et al. 2006). Acidity is measured on a logarithmic scale; the decline so far equals a 30% rise in the concentration of hydrogen ions, and the eventual maximum increase would correspond to a 150% increase and a simultaneous decline of carbonate ion concentrations.
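The percentages in this paragraph follow directly from the logarithmic definition of pH: a drop of x units multiplies the hydrogen-ion concentration by 10^x. A short sketch of that arithmetic:

```python
# pH is -log10 of the hydrogen-ion concentration, so a drop of
# ph_drop units raises [H+] by a factor of 10**ph_drop.
def h_ion_increase_pct(ph_drop):
    """Percentage increase in hydrogen-ion concentration for a pH drop."""
    return (10 ** ph_drop - 1) * 100

# The ~0.1-unit decline to date is roughly a quarter more [H+]
# (close to the cited 30%); a 0.4-unit decline by 2100 would mean
# about a 151% increase (the "150%" cited above).
print(round(h_ion_increase_pct(0.1)))  # 26
print(round(h_ion_increase_pct(0.4)))  # 151
```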
Carbonate ion concentrations have already dropped by about 10% compared to preindustrial levels, and their continuing decline would eventually undersaturate the surface water, first with respect to aragonite, later with respect to calcite in some cold seas. These shifts would slow down calcification, the biogenic formation of CaCO3, and they would have serious consequences for biomineralizers ranging from low-latitude corals (which form reefs out of aragonite) to high-latitude pteropods (small planktonic snails whose shells are made of aragonite) as well as for such ubiquitous coccolithophorids as bloom-forming Emiliania huxleyi (fig. 4.6) and Gephyrocapsa oceanica, which form calcite plates and dominate the transfer of CaCO3 to the sediments. A negative impact on coral growth would have cascading effects on one of the Earth’s richest ecosystems, and reduced productivity of phytoplankton would affect the ocean’s entire trophic chain. But because calcification releases CO2 to the surrounding water, and hence to the atmosphere, its reduced rate may act as a negative feedback in the future high-CO2 world (Riebesell et al. 2000; Gattuso and Buddemeier 2000).
Warmer oceans will also affect the dynamics of the atmosphere. Climate models indicate that weaker surface winds have already changed the thermal structure and circulation of the tropical Pacific Ocean (Vecchi et al. 2006). Perhaps most important, the twentieth-century warming has increased the west to east temperature gradient in the Pacific Ocean, a change that might increase the probability of strong El Niños (Hansen, Sato et al. 2006). These periodic westward expansions of warm surface waters begin off the coast of South America as the westward trade winds relax, and by early winter they often extend all along the equator to join warm water off Australasia. Their consequences include high rainfall and destructive flooding, not only in coastal Peru but also in the southern United States, while the western United States and Australia experience prolonged droughts. More intensive and more common El Niños could have a serious economic impact on water availability, forest and shrub fires, and crop yields in all regions affected by these climatic teleconnections.
Temperature is a key determinant of the extent of life on the Earth, and of its intensity, because temperature is highly correlated with the metabolic rates of decomposers, photosynthetic rates and zonal ranges of plants, and stocks, growth rates, habitat ranges, and migration patterns of heterotrophs ranging from zooplankton to top predators as well as with the diffusion rates of pathogens and disease resistance of hosts (Stenseth et al. 2002; Harvell et al. 2002). Biota will react to rising temperature in so many ways that a mere listing of plausible changes would fill many pages. Many direct, first-order ecosystemic responses to recent climate change are already obvious (Walther et al. 2002). Global meta-analyses of nearly 1,500 (Root et al. 2003) and 1,700 (Parmesan and Yohe 2003) responses of wild animal and plant species to warmer temperatures demonstrate that a significant impact of the process is already discernible among species ranging from marine zooplankton to woody plants and birds, with poleward range shifts averaging 6.1 km per decade and advancement of spring events surpassing two days per decade.
I give just three examples of these responses, one clearly positive, one neutral, and one clearly undesirable. The warming of the Eurasian land mass has intensified summer monsoon winds, and these in turn have enhanced upwelling and boosted the average summertime phytoplankton mass in the Arabian Sea by over 300% offshore and more than 350% in coastal regions. Warming has made the Arabian Sea more productive (Goes et al. 2005). Warming has a predictable effect on plant flowering, growth, and maturation. In Britain the first flowering dates of plants (very sensitive to the mean temperature of the previous month) advanced by an average of 4.5 days for 385 species during the 1990s (Fitter and Fitter 2002).
One of the most undesirable consequences of earlier melting of mountain snowpacks, of warmer and drier springs, rain-deficient summers, and reduced soil moisture will be a higher frequency and longer duration of wildfires. Westerling et al. (2006) found such a trend for the western United States. By 2003 (compared to the 1970s) the length of the active wildfire season increased by 78 days, and the average burn duration quintupled from 7.5 to 37.1 days. But many second-order changes may not be discernible for decades. For example, many regions have naturally variable precipitation, not just on annual but on decadal scales, and hence it is not easy to discern a warming-induced trend among the natural fluctuations.
And not all suspected effects can be tested experimentally. For example, a common assumption that the effects resulting from exposures of experimental ecosystems to a large single-step rise in CO2 can mimic well the responses to a gradual increase unfolding over many decades may often be wrong because some biota are clearly sensitive to abrupt shifts but can cope quite well with incremental changes (Klironomos et al. 2005). The inherent complexities of these effects are even more difficult to unravel than the dynamics of future ice sheet melting. I illustrate these complexities by focusing on two key determinants of carbon flow in the biosphere: Is global vegetation a net sink or source of carbon? and Will future warming turn soils into a major carbon source? In both cases, the answers make a critical difference: Will soils and biota help to counteract future global warming, or will they intensify it?
Because we do not have an accurate carbon budget for the biosphere and are not certain about the long-term consequences of CO2 enrichment for plants and soils, we cannot make any reliable appraisals of future interactions between the carbon cycle and other biogeochemical and climatic processes (Falkowski et al. 2000; Smil 2002). We know with a high degree of accuracy (because the emissions are known with an error smaller than 10% and atmospheric concentrations are measured fairly precisely) that during the 1990s roughly 6.3 ± 0.4 Gt C/year were emitted from the combustion of fossil fuels and 3.2 ± 0.2 Gt C were retained in the atmosphere. Absorption by the ocean cannot be quantified as accurately (2.4 ± 0.7 Gt C), and emissions from land use changes (2.2 ± 0.8 Gt C, mainly deforestation) were even more uncertain (Houghton 2003).
The two sources and the two known sinks leave a residual of 2.9 ± 1.1 Gt C/year. Carbon transport by rivers may account for 0.6 Gt C/year, and the only plausible sink for the remainder (equal to one-third of all carbon emissions from fossil fuel combustion) is the Earth’s vegetation. However, numerous studies of national and continental plant carbon balances (usually using satellite observations as input into several models of net primary productivity) offer annual fluxes that not only differ by as much as 1 OM but that often cannot be interpreted to conclude whether a particular ecosystem is a significant sink or a major source of carbon. On the global level the calculated net terrestrial fluxes are substantially smaller than the just noted residual of at least 2 Gt C/year, typically about 0.7-1.5 Gt C/year. Moreover, all of these fluxes have a significant temporal variability.
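The budget arithmetic in the two preceding paragraphs can be checked directly. A minimal sketch (all figures are the 1990s averages quoted above; the calculation itself is only illustrative bookkeeping):

```python
# 1990s global carbon budget (Gt C/year), figures as quoted in the text
fossil_fuel_emissions = 6.3   # +/- 0.4 (combustion of fossil fuels)
land_use_emissions = 2.2      # +/- 0.8 (mainly deforestation)
atmospheric_retention = 3.2   # +/- 0.2
ocean_uptake = 2.4            # +/- 0.7

# Residual "missing" sink left after the known sinks are subtracted
residual = (fossil_fuel_emissions + land_use_emissions
            - atmospheric_retention - ocean_uptake)

# Rivers may carry away about 0.6 Gt C/year; the remainder is attributed
# to the Earth's vegetation
river_export = 0.6
vegetation_sink = residual - river_export

print(round(residual, 1))         # 2.9
print(round(vegetation_sink, 1))  # 2.3
# The vegetation sink is roughly one-third of fossil fuel emissions
print(round(vegetation_sink / fossil_fuel_emissions, 2))  # 0.37
```

The 2.3 Gt C/year attributed to vegetation is indeed about one-third of the 6.3 Gt C/year emitted by fossil fuel combustion, as the text states.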
North American vegetation appears to be a consistent carbon sink (~0.2-0.3 Gt C/year), but the highest estimate of carbon sink for the continent (mostly south of 51°) has been as much as 1.7 Gt C/year (Fan et al. 1998). Carbon storage in Eurasian forests, about 0.3-0.6 Gt C/year (Potter et al. 2005), has been generally increasing, thanks to forest expansion and better management in Europe (fig. 4.7) (Nabuurs et al. 2003), net uptake of the Russian boreal forests (Beer et al. 2006), extensive replanting in China (Pan et al. 2004), and significant phytomass accumulation in Japan (Fang et al. 2005). But it is not certain whether Amazon forests are a large net source of carbon (~3 Gt C/year) or a major sink, sequestering annually perhaps as much as 1.7 Gt C (Ometto et al. 2005). The latter conclusion received new support from studies of vertical profiles of CO2, which indicate that northern lands take up carbon at rates lower than previously thought, while undisturbed tropical ecosystems appear to be major carbon sinks (Stephens et al. 2007).
There is no consensus about net global values. Measurements of atmospheric O2 concluded that the terrestrial biosphere and the oceans sequestered annually 1.4 (±0.8) and 2 (±0.6) Gt C between mid-1991 and mid-1997 and that this rapid storage of the element contrasts with the 1980s, when the land biota were basically neutral (Battle et al. 2000). Additional photosynthetic sequestration of carbon due to CO2 fertilizing effect has been estimated at 1.2-2.6 Gt C/year (Wigley and Schimel 2000).
But Potter et al. (2005) concluded that between 1982 and 1998 the carbon flux for the terrestrial biosphere ranged widely, from being an annual source of ~0.9 Gt C/year to being a large sink of 2.1 Gt C. Houghton (2003) put the net terrestrial flux at -0.7 Gt C during the 1990s. The latest regional flux estimates for the 1990s credit the continents with a net uptake of 1.1-2.4 Gt C/year and the oceans with being a nearly identical sink of 1.2-2.1 Gt C/year (Baker 2007). Moreover, global terrestrial carbon fluxes appear to be about twice as variable as ocean fluxes (precipitation and surface solar irradiance being the key drivers), and at different times they can be dominated by either tropical or mid- and high-latitude ecosystems (Bousquet et al. 2000).
Several important considerations will influence the future net trend. The carbon sequestration potential of croplands has most likely been overestimated (Smith et al. 2005), whereas the role of old forests, generally thought to be insignificant sinks of carbon, has almost certainly been underestimated (Carey et al. 2001). Because the availability of nitrogen is a key factor for carbon sequestration in many forests and forest soils (De Vries et al. 2006), it is not surprising that in the absence of further specific land management the current European net carbon uptake is expected to decline soon (Janssens et al. 2005).
Net fluxes in the Amazon and Congo basins will be driven primarily by future rates of deforestation. The capacity of many forests to act as large sinks of carbon is limited by water and nutrient constraints on photosynthesis, and some ecosystems may experience considerable carbon losses if global warming results in a higher regional frequency of wildfires and longer duration of droughts. At the same time, warming will lengthen the growing seasons everywhere, and because it will intensify global water cycling, many regions will receive higher precipitation. In many ecosystems these changes will result in higher annual productivity, a trend that was already strongly evident in most of the United States during the latter half of the twentieth century (Nemani et al. 2002).
A warmer world could also produce a worrisome feedback by releasing significant amounts of soil carbon. Globally, soils store more than twice as much carbon as do plants or the atmosphere, and because warming will accelerate decomposition and bring additional releases of CO2 from soils, this process could reinforce warming to an uncomfortable degree. This result was confirmed not only by models and small-scale experiments but also by some large-scale analyses (Bellamy et al. 2005). But it would be premature to extrapolate it worldwide. Few natural processes are as complex and intricately interactive as the stores and pathways of organic carbon in soils (Shaffer, Ma, and Hausen 2001; Davidson and Janssens 2006).
Cycling rates of soil carbon range from minutes to centuries, and different soils have different shares of slow- and fast-cycling carbon stocks. Root respiration and microbial decomposition are, like all chemical and biochemical reactions, temperature-dependent, so warming must be expected to increase their intensity. But no simple relation governs these processes because the compounds involved exhibit an enormous range of kinetic properties. Large, complex, insoluble molecules tend to be resistant; the requisite enzymes may not be available; or environmental constraints (including physical and chemical protection, drought, and flooding that produces anaerobic conditions) may prevent their action. Not surprisingly, there is no agreement regarding the direction of soil carbon change. Will a warming-driven acceleration of soil carbon release create a significant positive feedback, or will a major share of additional photosynthesis be stored in long-lived soil organic matter and thus moderate future atmospheric carbon increases?
All economies are only subsystems of the biosphere, so any significant perturbations of ecosystems must have a multitude of economic impacts. But these are second-order effects. Because we do not have a tightly constrained range of future temperature increases, we cannot quantify with reasonable certainty the potentially most damaging ecosystemic consequences of this rise, and hence we cannot offer good approximations of the ensuing economic impacts. Again, it is safe to outline a number of the most likely qualitative changes that would call for new capital investment (both preventive and after the fact) and higher operating costs. But different assumptions yield economic penalties ranging from globally trivial to locally crippling, and some appraisals actually see major benefits accruing from this challenge.
The latter claims are easy to question. Hawken and colleagues (1999) have suggested that global warming could be controlled for fun and profit by improving overall energy efficiency and material use: “Within one generation nations can achieve a ten-fold increase in the efficiency with which they use energy, natural resources and other materials” (11). This is nonsense, not, as the authors write, “prophetic words.” The factor 10 improvement in a single generation would magically erase population size as a factor in economic development and global environmental degradation, and instantly modernize the entire world. Five billion people in Asia, Africa, and Latin America now claim only about one-third of the world’s resources, but wringing ten times as many useful services out of their current resource consumption would make their countries into developed nations, by any definition.
North America and the European Union, content with maintaining their present high standard of living, would have no need for 90% of their current resources, a shift resulting in plummeting global commodity prices. Poor countries could then snap up these give-away commodities (the capacity to produce them will not disappear), and using them with ten times greater efficiency, could support another 7 billion people enjoying a high standard of living—all this with no increase in the global use of energy and materials. Who would care if the global population total kept rising by another 2 or 3 billion before eventually stabilizing?
Other uncritical appraisals see a profitable solution in renewable conversions. Gelbspan (1998) wrote that reducing reliance on carbon for energy would not require any personal deprivation but would usher in a worldwide economic boom. But any such boom is unlikely during the next few decades (see chapter 3). Yet, dismissing claims of cost-free or inexpensive adaptation is not to say that the economic cost of global warming will be cripplingly high. Some of the most realistic assessments of the cost should be expected from the world’s leading reinsurance companies, which have been analyzing closely the rising frequency of weather-related disasters and studying the likelihood of a warmer atmosphere’s producing more extreme weather events (Swiss Re 2003).
Munich Reinsurance Company produced an itemized appraisal of the eventual costs of global warming that would result from the doubling of preindustrial greenhouse gas levels (UNEP 2001). This put the worldwide total at just over $300 billion/year, with the largest share due to additional mortality (27%), demands of water management (15%), losses in agriculture (~13%), and damage to coastal wetlands and fishing (~10%). This is a surprisingly small sum even for today’s global economy. In 2005 the gross world product was (in PPP) about $55 trillion, and hence the burden of global warming would be only about 0.5% of that. There is no need to contrast this burden with the enormous worldwide annual military expenditures in order to make the $300 billion total even less onerous. In 2005 global business spent in excess of $1.2 trillion on advertising, public relations, and communications (WPP 2006).
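The relative magnitudes in this appraisal are easy to verify; a quick check using only the figures quoted above:

```python
# Figures as quoted in the text
warming_cost = 300e9         # Munich Re appraisal: just over $300 billion/year
gross_world_product = 55e12  # 2005 gross world product (PPP): ~$55 trillion
advertising_spend = 1.2e12   # 2005 global advertising/PR/communications spend

# Warming cost as a share of the world economy: only about 0.5%
share = warming_cost / gross_world_product
print(round(100 * share, 2))  # 0.55

# The appraised cost is only a quarter of the global advertising outlay
print(round(warming_cost / advertising_spend, 2))  # 0.25
```

The comparison holds without invoking military budgets: the appraised cost of warming is a quarter of what business spent on advertising alone.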
A different approach obtained a very similar result. Edenhofer et al. (2006) used ten global economy-energy-environment models to explore the implications of stabilizing greenhouse gas concentrations at different levels and found that even for the lowest limit (450 ppm) average discounted abatement costs were no higher than 0.4% of the gross world product. Nordhaus and Boyer (2001) used two integrated models of climate and economy to estimate impacts of a 2.5°C warming at -0.2% and -0.4% of global output for, respectively, output and population weights. However, a reevaluation by Nordhaus (2006) raised these impacts to -0.93% and -1.73%. Even so, the market impacts of a moderate warming appear to be relatively small. The highest costs would arise from catastrophic events precipitated by the process, and predictably the modeling results showed marked regional variation, ranging from slight economic gains in Russia and Canada to losses of about 5% in India and Africa. If a maximum cost of 2% were applied to the 2005 gross world product, it would be about $1.2 trillion, a total slightly larger than that year’s Canadian GDP. In per capita terms it would prorate to about $180/year (50 cents/day), a trivial sum in all affluent countries but a substantial burden for hundreds of millions of poor peasants and marginalized urban dwellers in Asia, Africa, and Latin America.
If these estimates turned out to be close to the actual cost, the economic consequences of global warming would not be fundamentally different from many other challenges that require significant capital investment and operating expenses, including the need for the delivery of clean water to hundreds of millions of people who still lack it. But major disparities in wealth and technical capabilities mean that affluent nations should be able to cope with these new outlays relatively easily (and some should even benefit), whereas for many poor countries even moderate warming will add to an already unmanageable burden.
N. Stern (2006) concluded that the cost of climate change can be limited to about 1% of global economic product only if concerted action is taken within the next 10-20 years. Otherwise there will be a perpetual cost equivalent to at least 5% of annual gross world product. After taking a wider range of risks and impacts into account, Stern puts the overall annual damage at 20% of GWP, or even higher. The 2-OM range of the estimated costs of global warming (0.2%-20% of annual GWP) points to outcomes ranging from a negligible obligation to an unprecedented global economic burden. It does not offer any confidence-inspiring foundation for rational policy making.
A major reason for discrepancies in evaluating environmental burdens is the fact that health effects account for such a large part of these impacts and yet their monetization is notoriously difficult. For instance, suppose warmer temperatures doubled the frequency of allergies and environmentally induced asthma attacks because plants growing in warmer temperatures produce more pollen (Epstein and Mills 2005). This would substantially increase the occurrence of serious personal discomfort, school and workplace absenteeism, and the number of admissions for emergency treatment. But there are no clear valuations for these new realities. The salaries of physicians and nurses and hospital overhead can be used to value the time spent in emergency treatment and lost wages for missed workdays, but what value are we to put on missed school days, on a child’s severe discomfort, and on parental anxiety?
Another impact may be more premature deaths. Global climate models indicate that a warmer atmosphere would very likely bring more frequent temperate-latitude heat waves of greater intensity and longer duration, which are associated with semistationary high pressure anomalies that produce air subsidence, clear skies, light winds, and warm air advection (Meehl and Tebaldi 2004). Longer-lasting heat waves, such as the French episode in 2003 (fig. 4.8) (Pirard et al. 2005) or the Chicago spell in 1995, have considerable potential for increased excess mortalities (and for economic disruption), especially given the fact that by 2050 at least two-thirds of humanity will live in cities, compared to 50% in 2005. Again, how should we value the truncation of lives, the loss of companionship when a spouse dies prematurely, when a child will never know its grandparent?
Concerns about global warming have hijacked most of the attention devoted to environmental problems, but there are other important trends that predate these concerns, and their continuation and intensification can have major (but difficult-to-forecast) global impacts. Loss of arable land to nonagricultural uses and flooding by new large reservoirs are continuing problems, particularly in densely populated Asian countries. An even more common problem is excessive soil erosion on farmland, but we do not have enough solid information to reliably quantify this global trend (Smil 2000).
Many fields around the world are losing their soils at rates two or three times higher than the maximum, site-specific average annual losses compatible with sustainable cropping (mostly 5-15 t/ha) (Smil 2000). The African situation is particularly dire because the nutrients lost in the eroded soil are not replaced by fertilizers. This qualitative soil deterioration is of concern even in places with tolerable erosion. Declining levels of soil organic matter, essential for the maintenance of soil fertility and structure, result from inadequate crop residue recycling, low or no manure applications, and no cultivation of green manures or leguminous cover crops.
But the continuing loss of farmland and its qualitative decline do not imply any near-term global scarcity of arable land or inability to support good yields. New farmland is created by conversions of natural ecosystems (this being, of course, another environmentally destructive process). Significant areas of arable land are held in reserve in many affluent countries (particularly North America), and better management can restore soil quality and bring substantial yield increases everywhere outside the most intensively cultivated areas of the modernizing world (parts of China, Java, Punjab, Nile Delta). Arable land and its quality should not be a major constraint on food production by 2050, but in many places water availability and the use of nitrogen already play this limiting role.
Agriculture is by far the world’s largest user of water, and higher productivity from a declining area of farmland will require more irrigation. Similarly, higher yields from smaller areas will have to be supported by more intensive fertilization, and that means, given the inherent inefficiency of the nutrient absorption by plants, greater losses of reactive nitrogen. Another major concern is the continuing reduction of biodiversity, a trend that may eventually have enormous consequences for the maintenance of irreplaceable ecosystemic services. Finally, I address an insidious and ultimately very perilous environmental change, the trend toward widespread bacterial resistance to common antibiotics.
Concerns about global warming led to a widespread belief that no other biospheric cycle is subject to so much human interference as the global carbon cycle. This conclusion is doubly wrong. Human interference in the global water cycle is a source of more imminent problems and already a major cause of large-scale premature mortality. And human actions have already changed the global nitrogen cycle much more than carbon cycling, and the ultimate consequences of this multifaceted change may be even more intractable than dealing with excessive CO2. As a result, if we were a rational society, we would be paying a great deal more attention to these changing water and nitrogen cycles.
No resource is needed for life in such quantities as water. It makes up most of the living biomass (60%-95%), and its absence limits human survival to just days. The water molecule is too heavy to escape the Earth’s gravity, and juvenile water, originating in deeper layers of the crust, adds only a negligible amount to the compound’s biospheric cycle. Human activities—mainly withdrawals from ancient aquifers, some chemical syntheses, and combustion of fossil fuels—also add only negligible volumes of water, but they have changed the water cycle (fig. 4.9) in three principal ways, and all of them will intensify during the first half of the twenty-first century (Revenga et al. 2000; WCD 2001; Rosegrant et al. 2002; Shiklomanov and Rodda 2003; United Nations 2003).
Increasing volumes of freshwater are diverted to irrigation, urban and industrial uses (growing a kilogram of wheat needs about as much water as does the production of a kilogram of computer hardware, about 1.5 tons), and electricity generation (there are some 45,000 large dams), and most of this water is released to streams, lakes, and ultimately to the ocean without any or only rudimentary treatment. Moreover, decades of climate change have already intensified the global water cycle, and higher CO2 levels have affected continental runoff. And yet, global water supplies and their uses and misuses remain a curiously neglected topic of public discourse. Perhaps no investment is as rewarding as spending on clean water, be it in the form of protecting forests in key watersheds or preventing pollution, but the world has failed to make outlays commensurate with the challenge.
This is particularly true for Asia, a continent with 60% of the world’s population but only 27% of the Earth’s freshwater, which is, moreover, unevenly distributed in time (because of the monsoon) and space (arid Middle East and Central Asia vs. humid South and Southeast). Global warming will produce more but unequally distributed precipitation, and higher CO2 levels will reduce transpiration (by inducing stomata closure). Hence overall river discharge should increase, but because of continuing population growth the total number of people living with high water stress will increase in the next 50 years (Oki and Kanae 2006).
As Asian and African populations keep growing, the number of countries with stressed water supply (usually taken as the annual rate lower than 1,700 m3 per capita) and water scarcity (less than 1,000 m3 per capita) will also increase. In 2005, 17 countries (mostly in the Middle East and North and East Africa) experienced water scarcity, and another 15 countries were in the stressed category. By 2025 (assuming the UN’s medium population projections) nearly 30 countries will have water scarcity and another 20 will join the stressed group. The most populous countries in the first category will be Nigeria, Egypt, Ethiopia, and Iran, and in the second category, India and Pakistan (WRI 2006).
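The per capita thresholds used in this paragraph amount to a simple classification. A sketch, with made-up sample values (the function name and the example figures are illustrative, not taken from the cited WRI data):

```python
def water_status(m3_per_capita_per_year: float) -> str:
    """Classify annual renewable water supply per the thresholds in the text."""
    if m3_per_capita_per_year < 1000:   # water scarcity: below 1,000 m3
        return "scarcity"
    if m3_per_capita_per_year < 1700:   # water stress: below 1,700 m3
        return "stressed"
    return "adequate"

# Hypothetical national supplies (m3 per capita per year), for illustration
for supply in (800, 1500, 2500):
    print(supply, water_status(supply))
# prints: 800 scarcity / 1500 stressed / 2500 adequate
```

The same two cutoffs underlie the country counts cited for 2005 and projected for 2025.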
Africa’s problems will also become serious. A remote sensing analysis concluded that 64% of Africans are already relying on water resources that are both limited and highly variable and that nearly 40% of existing irrigation is unsustainable (Vörösmarty et al. 2005). Moreover, national means hide major regional scarcities that exist in large and populous countries. The most notable example of these regional disparities is China, where acute water shortages in the northern provinces have led to the construction of a massive water transfer project from the Yangzi to the Huanghe basin (see fig. 3.18) (Smil 2004). This is one of the most expensive massive geoengineering tasks ever undertaken and, as with every mega-project, it raises many concerns about its eventual utility and enormous environmental impacts.
Water supply may worsen even in some places with little population pressure. Modeling shows that during the twenty-first century progressively larger areas in Spain, Italy, and the Rhine basin will move into the stressed category (Schröter et al. 2005). A conservative estimate for the year 2050 would put at least 60 countries, with nearly half the world’s population, into the water-scarce and water-stressed categories. Only the installation of the most efficient irrigation systems as well as near-complete recycling of urban and industrial water could ease the deficits, but even so there will be a massive new need for desalination (and hence for substantial constant energy inputs).
Two compensating trends may provide relief. Paleoclimatological evidence indicates that in line with expectations, higher tropospheric temperatures have been intensifying the global water cycle and that the twentieth century brought large precipitation gains to regions including the subpolar Arctic, tropical Arabian Sea, and much of temperate Eurasia (Evans 2006). The last instance is illustrated by precipitation in northern Pakistan: the twentieth century was the region’s wettest period of the last millennium (Treydte et al. 2006). Moreover, higher atmospheric CO2 concentrations result in lower evapotranspiration losses from vegetation, and this effect, already detectable in continental runoff records, has increased the amount of water flowing to the ocean (Gedney et al. 2006). Consequently, it is impossible at this time to assess the net outcome of the two countervailing trends (rising demand vs. higher availability of water).
Poor water quality is a much more common problem. In 2005 more than 1 billion people in low-income countries had no access to clean drinking water, and some 2.5 billion lived without water sanitation (United Nations 2003). About half of all beds in the world’s hospitals were occupied by patients with waterborne diseases. Diarrhea in its many forms—acute dehydrating (cholera), prolonged with abdominal symptoms (typhoid fever), acute bloody (dysentery), and chronic (caused by waterborne bacteria like Vibrio, Salmonella, and Escherichia coli)—is the leading killer (up to 4 billion episodes per year), and dehydration is the principal proximate cause of death. Contaminated water and poor sanitation kill about 4,000 children every day (UNICEF 2005). Deaths among adults raise this to at least 1.7 million fatalities per year. Add other waterborne diseases, and the total surpasses 5 million. In contrast, automobile accidents claim about 1.2 million lives per year (WHO 2004b), roughly equal to the combined total of all homicides and suicides, and armed conflicts kill about 300,000 people per year. The water treatment record of India and China, the world’s two most populous countries, is appalling. But even the richest countries have a poor record of water management. Primary treatment is generally in place, but the removal of eutrophication-inducing nitrates and phosphates is still rare.
The natural nitrogen cycle is driven largely by bacteria (fig. 4.10) (Smil 2000). Fixation, the conversion of inert atmospheric N2 to reactive compounds, is dominated by bacteria. They convert N2 to NH3 using nitrogenase, a specialized enzyme that no other organisms carry. Most N-fixing bacteria are symbiotic with leguminous plant roots (some live inside plant stems and leaves), and free-living cyanobacteria are present in soils and water. Nitrifying bacteria present in soils and waters transform NH3 to NO3, a more soluble compound that plants prefer to assimilate. Assimilated nitrogen is embedded mostly in amino acids, which form plant proteins. Animals and people must ingest preformed amino acids in order to synthesize their tissues. Dead tissues undergo enzymatic decomposition (ammonification), which releases NH3 to be reoxidized by nitrifiers. Denitrification returns the element from NO3 via NO2 to atmospheric N2, but incomplete reduction results in emissions of N2O, a greenhouse gas about 200 times more potent than CO2.
Cultivation of leguminous crops (done in every traditional agriculture) was the first major human intervention in the nitrogen cycle, and it now fixes annually 30-40 Mt N/year. During the nineteenth century, guano and Chilean nitrate were the first commercial nitrogen fertilizers. The synthesis of ammonia from its elements—demonstrated for the first time by Fritz Haber in 1909 and commercialized soon afterwards by the German chemical company BASF under the leadership of Carl Bosch—opened the way for large-scale, inexpensive supply of reactive nitrogen (Smil 2001). By 2005 global NH3 synthesis surpassed 100 Mt N/year, with about 80% of it going to produce urea and other nitrogen fertilizers and the rest used in industrial processes ranging from the production of explosives to the syntheses of plastics (Ayres and Febre 1999). The third-largest source of anthropogenic reactive nitrogen is combustion of fossil fuels, which adds almost 25 Mt N/year in nitrogen oxides.
Losses of nitrogen from synthetic fertilizers and manures, nitrogen added through biofixation by leguminous crops, and nitrogen oxides released from combustion of fossil fuels are now adding about as much reactive nitrogen (~150 Mt N/year) to the biosphere as natural biofixation and lightning do (Smil 2000; Galloway and Cowling 2002). This level of interference is unequaled in any other global biogeochemical cycle. Carbon from fossil fuel combustion and land use changes is equal to less than 10% of annual photosynthetic fixation of the element, and sulfur from combustion and metal smelting is equal to only about one-third of the annual flux of sulfurous compounds produced by biota, volcanoes, and sea spray (Smil 2000; D.I. Stern 2005). Not surprisingly, this large anthropogenic fixation of nitrogen has a number of undesirable biospheric impacts once the reactive compounds enter the environment.
Only 25%-40% of all fertilizer nitrogen applied to crops is taken up by plants; the rest is lost to leaching, erosion, volatilization, and denitrification (Smil 2001). Because the photosynthesis of many aquatic ecosystems is limited by the availability of nitrogen, an excessive influx of this nutrient leached from fertilizers causes eutrophication, the abundant growth of algae and phytoplankton. Subsequent decomposition of this phytomass deoxygenates water and reduces or kills aquatic species, particularly the bottom dwellers.
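The uptake range implies a large absolute loss per hectare. A minimal sketch, assuming a hypothetical round application rate of 100 kg N/ha (the 25%-40% uptake figures are from the text; the rate is not):

```python
# Nitrogen balance implied by the 25%-40% crop uptake figure.
applied_kg_n = 100.0   # hypothetical application rate, kg N per hectare

for uptake in (0.25, 0.40):
    lost = applied_kg_n * (1.0 - uptake)
    print(f"uptake {uptake:.0%}: {lost:.0f} kg N/ha lost to leaching, "
          f"erosion, volatilization, and denitrification")
```

Even at the upper end of the uptake range, well over half of the applied nutrient escapes into the wider environment.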
The worst affected offshore waters in North America are in the Gulf of Mexico, where every spring eutrophication creates a large hypoxic zone that kills many bottom-dwelling species and drives away fish (Rabalais 2002; Scavia and Bricker 2006). Other anoxic zones can be found in the lagoon of the Great Barrier Reef, the Baltic Sea, the Black Sea, the Mediterranean, and the North Sea. Algal blooms may also cause problems with water filtration or produce harmful toxins. Escalating worldwide use of urea (besides fertilizer also for animal feed and in industry) is increasing pollution of sensitive coastal waters (Glibert et al. 2006).
Nitrogen oxides formed during high-temperature combustion are essential ingredients for the formation of photochemical smog, a persistent feature of all major urban areas worldwide whose major impacts range from drastically reduced visibility to serious health effects (respiratory ailments) to chronic damage to crops and trees. Atmospheric oxidation of NO and NO2 also produces nitrates, which with sulfates compose acid rain. Atmospheric nitrates, together with volatilized ammonia (especially from fertilization and from large animal feedlots), also cause eutrophication of forests and grasslands. In parts of eastern North America, northwestern Europe, and East Asia, rains annually bring more reactive nitrogen than fertilizers do. Both nitrification and denitrification produce N2O, making fertilization a contributor to global warming.
Human interference in the global nitrogen cycle is an inherently more intractable challenge than the decarbonization of the world’s energy supply. That will not be an easy transition (see chapter 3), but a carbon-free energy system is an eventual inevitability. By contrast, there can be no nitrogen-free organisms, and the larger and more affluent populations of the twenty-first century will demand better nutrition that will have to come largely (given the distribution of future population increments) from higher fertilizer applications, either to secure higher yields of more intensively cultivated land in Asia or to stop the still increasing nutrient mining in agricultural lands of Africa (Henao and Baanante 2006).
The loss of biodiversity usually evokes the demise of such charismatic mega-fauna as Indian tigers, Chinese pandas, and Kenyan cheetahs. All these species are greatly endangered, but in terms of irreplaceable ecosystemic services their loss would not even remotely compare with the loss of economically important invertebrates. Both Europe and North America have seen a gradual decline of pollinators, including domesticated honeybees and wild insects. Pollination by bees is an irreplaceable ecosystemic service responsible for as much as one-third of all food consumed in North America, from almonds and apples to pears and pumpkins (fig. 4.11) (NRC 2006a).
Pollinators are also needed to produce a full yield of seed for such important feed crops as alfalfa and red clover (Proctor et al. 1996). Spreading infestations of varroa mites and tracheal mites (they either kill bees outright or introduce lethal viruses into their bodies) are very difficult to control (NRC 2006a). Introduced African bees, which began their northward expansion in Brazil in 1956, have been destroying native wild colonies in the Americas, and indiscriminate use of pesticides has been the most important human factor in roughly halving the North American honeybee count during the second half of the twentieth century (Watanabe 1994; Kremen et al. 2002). This worrisome trend took a dramatic turn during the winter of 2006-2007, when beekeepers across North America reported widespread collapses of entire bee colonies. The usual suspects included a variety of pathogens, pesticides, and the high-fructose sugar syrup diets used to feed the hives in winter (and their contaminants). The most likely cause has been a virus from Australia (Stokstad 2007).
Loss of biodiversity takes place mostly because of the destruction or substantial alteration of natural habitats (Millennium Ecosystem Assessment 2005). Agricultural land is now the largest category of completely transformed, much less biodiverse land. In 2005 its extent, including permanent tree crops, was about 15 million km2, and the three largest grain crops—cultivars of wheat (originally from the Middle East), rice (from Southeast Asia), and corn (from Mesoamerica)—are now grown on every continent and occupy a combined area of about 5 million km2, more than all remaining tropical forests in Africa. Land under settlement and bearing industrial and transportation infrastructures adds up to about 5 million km2, and water reservoirs occupy about 500,000 km2. Human activities have thus entirely erased natural plant cover on at least 20 million km2, or 15% of all ice-free land surface.
Areas that still resemble natural ecosystems to some degree but that have been significantly modified by human actions are much larger. Permanent pastures total about 34 million km2, and at least one-quarter of this area is burned annually in order to prevent the growth of trees and shrubs. A very conservative estimate of the global extent of degraded forests is at least 5 million km2, and the real extent may be twice as large. Total area strongly or partially imprinted by human activities is thus about 70 million km2, no less than 55% of nonglaciated land. Another way to appraise the significance of this transformation is to estimate the share of global photosynthesis that is consumed or otherwise affected by human actions. Vitousek et al. (1986) calculated that during the early 1980s humanity appropriated 32%-40% of terrestrial photosynthesis through its field and forest harvests, animal grazing, land clearing, and forest and grassland fires. A more recent recalculation (Rojstaczer, Sterling, and Moore 2001) closely matched the older estimate.
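The category totals in the two preceding paragraphs can be added up directly. In this sketch the area figures are from the text; the 130 million km2 total of ice-free land is an assumed round value consistent with the stated percentages, and the category sum lands near, not exactly at, the rounded ~70 million km2 quoted above:

```python
# Summing the land-transformation categories (all areas in millions of km^2).
ICE_FREE_LAND = 130.0  # assumed round total of nonglaciated land

erased = 15 + 5 + 0.5  # cropland + settlement/infrastructure + reservoirs
modified = 34 + 10     # permanent pastures + degraded forests (upper estimate)
total = erased + modified

print(f"natural cover erased: {erased:.1f} Mkm^2 ({erased / ICE_FREE_LAND:.0%})")
print(f"total human imprint:  {total:.1f} Mkm^2 ({total / ICE_FREE_LAND:.0%})")
```

The shares computed this way agree with the text's figures to within the rounding of the underlying estimates.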
The remaining wilderness has been pushed to subboreal and polar regions. During the late 1980s an inventory of its large (>400,000 ha) contiguous blocks found that more than one-third of the global land surface (nearly 48 million km2) is still wilderness, but 40% of the total was in the Arctic or the Antarctic (McCloskey and Spalding 1989). Territorial shares of remaining wilderness ranged from 100% for Antarctica and 65% for Canada to less than 2% for Mexico and Nigeria and, except for Sweden (about 5%), to zero for even the largest European countries. There were also no undisturbed large tracts of land in such previously biodiverse ecosystems as the tropical rain forests of the Guinean Highlands, Madagascar, Java, and Sumatra, and the temperate broadleaf forests of eastern North America, China, and California.
This enormous transformation and degradation will continue (in Asia largely unabated, in Africa much accelerated) during the first half of the twenty-first century, with the tropical deforestation and conversion of wetlands causing the greatest losses of biodiversity (fig. 4.12). Quantifying this demise has been difficult. Nearly 850 species were lost worldwide between 1500 and 2000 (Baillie, Hilton-Taylor, and Stuart 2004), an average rate of one to two per year that is roughly 1 OM higher than the natural rate of extinction. However, for the past century the rate may have been about 100 times faster than indicated by the fossil record, and depending on the rate of habitat destruction, it may become 1,000 times faster during the next 50 years (Millennium Ecosystem Assessment 2005). The best compilation suggests that 12% of known bird species, 23% of mammals, 25% of coniferous trees, and 32% of amphibians are threatened with extinction (IUCN 2001), and unlike recent extinctions, which took place mostly on oceanic islands, continental extinctions will be the norm.
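A back-of-envelope check of the rate comparison: the species count and time span are from the text, while the background rate here is an assumed value set one order of magnitude below the observed rate, matching the "1 OM" claim rather than derived from the fossil record itself:

```python
# Extinction-rate arithmetic for the 1500-2000 period.
species_lost = 850
years = 2000 - 1500

observed_rate = species_lost / years   # species per year
background_rate = observed_rate / 10   # assumed: 1 OM below observed

print(f"observed rate: {observed_rate:.1f} species/year")
print(f"ratio to background: {observed_rate / background_rate:.0f}x")
```

The projected 100x and 1,000x multipliers cited above would then scale this background rate by a further one to two orders of magnitude.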
Climate change could become a major cause of extinction in addition to excessive exploitation and extensive habitat loss (to which climate change directly contributes). The theoretical potential for extinctions driven by climate change is substantial (Lovejoy and Hannah 2005). Thomas et al. (2004) used projections of species distributions (including hundreds of plants, mammals, birds, frogs, reptiles, and invertebrates) to assess extinction risks in a sample region covering about 20% of the continental surface and found that by the year 2050 minimal, medium, and maximum climate change scenarios would condemn, respectively, 18%, 24%, and 35% of species to extinction. Given the many uncertainties, these rates are not to be seen as predictions but as worrisome indicators of the extent of possible change. By 2050 the biosphere is likely to lose thousands of species ranging from Indian tigers to Central American tree frogs to cycad plants.
Further degradation of biodiversity will take place because of the continuing deliberate introductions and unintended invasions of alien plants, animals, and invertebrates that have already changed ecosystems on all continents (Mooney and Hobbs 2000; Baskin 2002; Cox 2004; Sax et al. 2005). Croplands are obviously anthropogenic ecosystems, but many seemingly natural landscapes are as well. For example, 99% of all biomass in parts of the San Francisco Bay is not native (Enserink 1999), and bioinvasions have had particularly devastating effects on islands. Oahu’s native vegetation now occupies less than 5% of the island, and even the Galápagos Islands, a national park where 97% of land is protected, have been heavily invaded by rats, pigs, goats, cats, ants, and quinine trees (Kaiser 2001).
The recent rush to cultivate energy crops should be seen “as adding biofuels to the invasive species fire” (Raghu et al. 2006, 1742) because such high-yielding grass species as giant reed, reed canary grass, and switchgrass have been shown to be actually or potentially invasive in U.S. ecosystems. Finally, there are also the largely unexplored consequences of the global spread of microbes in the ballast water of commercial ships (Ruiz et al. 2000). New bioinvasions are largely unpredictable, but their progress is often very rapid, and once they are under way, they are nearly impossible to stop. The rising volume of global trade and travel will only multiply the opportunities for unintended species introductions.
The consequences of bioinvasions often include a profound transformation of affected ecosystems and a high economic cost. Zebra mussels and water hyacinth are two notable examples. The mussels were carried by ships in European ballast water to North America for generations, but they took hold in Ohio and Michigan in 1988, and by the end of 2000 they had penetrated all the Great Lakes and the Mississippi basin. Their massive colonies cloak and clog underwater structures and pipe inlets, reduce the presence of native mussel species, and cause economic damage in the billions of dollars per decade (fig. 4.13) (USGS 2006). Water hyacinth (Eichhornia crassipes, an Amazonian native) has taken over many river, lake, and reservoir surfaces on five continents, with major local and regional economic impacts. It asphyxiates native biota; interferes with fishing, electricity generation, and recreation; increases evaporation from infested surfaces; and serves as a breeding ground for disease vectors (McNeely 1996).
Why does all this matter? It is clear that overexploitation, loss of natural habitats, and species introductions and invasions lead to ecosystemic impoverishment and homogenization. These change previously unique and species-rich communities into homogenized assemblages dominated by a few generalists, be they pests or weeds. Sparrows, crows, pigeons, rats, mice, and feral dogs are the inheritors of the biosphere molded by humans. But there is much more than this esthetic impoverishment and lamentable loss of genetic information that took so many generations to select and perfect. The loss of biodiversity and bioinvasions have major economic consequences, and if severe enough, they can reduce the stability and resilience of ecosystems and seriously compromise the delivery of irreplaceable ecosystemic services.
Antibacterial drugs, originally derived from spore-forming microbes and commercially available since the 1940s, were one of the most consequential innovations of the twentieth century. They have been a major cause of extended life expectancy and saved millions of lives that would have been lost to previously untreatable diseases. Less dramatically, they have shortened the course of common infections, lessened patient discomfort, and speeded up recovery. But their spreading use led inevitably to the selection of resistant strains (Levy 1998; WHO 2002). Penicillin-resistant Staphylococcus aureus was found for the first time in 1947, just four years after the mass production of this pioneering antibiotic.
As the resistance spread, methicillin became the drug of choice and the first methicillin-resistant strains of Staphylococcus aureus (MRSA, causing bacteremia, pneumonia, and surgical wound infections) were encountered in 1961. The MRSA strains are now common worldwide, and in the United States they have been responsible for more than 50% of all infections acquired in hospitals during intensive therapy (fig. 4.14) (Walsh and Howe 2002). Vancomycin was the next choice, but the first vancomycin-resistant enterococci appeared in 1987 in Europe and in 1989 in the United States. The first vancomycin-resistant staphylococci appeared in Japan in 1997 and in 1999 in the United States (Jacoby 1996; Cohen 2000; Leeb 2004). This was a particularly worrisome development because vancomycin has become the drug of last resort after many bacteria acquired resistance to penicillins, erythromycin, neomycin, chloramphenicol, tetracycline, and beta-lactams.
Strains of Salmonella with resistance to antimicrobial drugs are now found worldwide, and they are usually transmitted from animals to people through food (Threlfall 2002). A strain of Salmonella typhimurium (causing gastroenteritis) is now resistant to as many as six antimicrobial drugs, and multiple resistance associated with treatment failure is also found in Salmonella typhi (typhoid fever) in India and Southeast Asia. Multidrug resistance (including to tetracycline and ampicillin) was found among strains of Vibrio cholerae in India (Yamamoto et al. 1995). Other common bacteria with multidrug resistance now include Haemophilus influenzae (causing pneumonia, meningitis, and ear infections), species of Mycobacterium (tuberculosis), Neisseria gonorrhoeae, Shigella dysenteriae (severe diarrhea), Streptococcus pneumoniae, Clostridium difficile (severe diarrhea and fever), Escherichia coli, and the soil-dwelling bacterium Burkholderia pseudomallei, which causes melioidosis (joint inflammation, internal abscesses, and impaired breathing), whose progression may be rapid and whose mortality rate is high (Aldhous 2005).
Resistance to antibiotics would have developed naturally even under the most stringent conditions. It is acquired not only through inevitable spontaneous mutations but also through horizontal gene transfer among neighboring bacteria, even if they happen to be only distantly related species. Moreover, while the antibiotic-resistant strains were assumed to be weaker than their susceptible precursors, recent research found that resistant strains of Mycobacterium tuberculosis can be just as aggressive (Gagneux et al. 2006). But the rapid diffusion of drug-resistant strains has been greatly assisted by unnecessary self-medication with antimicrobials and by their prescribed overuse. Overprescribing and improper use of antibiotics in affluent Western countries look like a minor problem compared to the abuse of the drugs in the rest of the world. Amyes (2001) noted that in India more than 80 companies made ciprofloxacin without any license from the patent holder (Bayer) and that almost 100% of the country's healthy population carries bacteria resistant to several common antibiotics. (European rates of resistance range from less than 5% to 40%.)
Other key factors that have promoted the diffusion of resistance are poor sanitation in hospitals and a massive use of prophylactic antibiotics in animal husbandry. Because of this common overuse, antibiotic resistance is now present not only in humans and domestic animals but also in wild animals that have never been exposed to those drugs (Gilliver et al. 1999). Gowan (2001) called attention to a significant (but unquantified) economic impact of antimicrobial resistance: premature deaths due to ineffective drugs and extended hospital stays needed to combat the infections.
If antimicrobial drugs were to lose their efficacy completely, the cost would be truly catastrophic. Even with their use, infectious diseases remain the second-highest cause of deaths worldwide. Without them, every annual influenza epidemic might bring many more deaths due to bacterial pneumonia, and tuberculosis and typhoid fever might become very difficult or impossible to treat. Some appraisals of the current situation are very pessimistic. Amyes (2001) concluded that we are slipping into an abyss of uncontrollable infection. Amábile-Cuevas (2003) thinks that, in many ways, the fight against antibiotic resistance is already lost.
The situation has been made worse by the fact that most major pharmaceutical companies have largely withdrawn from the development of new antibiotics. In 2004 only 6 out of 506 drugs in late-stage clinical trials were antibiotics, and all of these were derivatives of existing drugs (Leeb 2004). Their reasoning, as profit-maximizing entities responsible to shareholders, was obvious: antibiotics are drugs of sporadic and time-limited utility, unlike the cholesterol-lowering statins or hypertension-lowering beta blockers that many people take daily for life and that can become multibillion-dollar items on corporate balance sheets. So we simply do not know how close we are, or might become, to a world that resembles the pre-penicillin era of bacterial infections with unpredictable and massively fatal outcomes.
As noted, more than four decades have passed since the introduction of the last new anti-tuberculosis drugs, and neither the existing drug pipeline nor the level of research funding indicates that a new compound could become available by 2010 (Glickman et al. 2006). At the same time, nearly every third person in low-income countries is infected with Mycobacterium tuberculosis, and while under normal circumstances fewer than 10% of infected individuals develop the disease, the activation rate of latent TB rises once the immune system is weakened, a condition increasingly common with spreading HIV. The worst possible combination is that of HIV-infected patients stricken by extremely drug-resistant (XDR) TB. And while there have been many isolated XDR TB cases, the first virtually untreatable outbreak of the infection, which affected more than 100 patients in rural KwaZulu-Natal in South Africa, occurred in 2006 (Marris 2006).
Bacteria have a great evolutionary advantage: some 3.5 billion years of continuous existence that has endowed them with superior mutation and survival capacities. And so a discovery by D'Costa et al. (2006) was not really surprising: they found that every one of 480 isolates of the diverse spore-forming genus Streptomyces, well known for its capacity to produce multiple antimicrobial agents, was resistant to at least six to eight (and some to as many as 20) different antimicrobial agents; this resistance protects the bacteria against their own toxic products. This natural protective capacity could be harnessed for the synthesis of new antimicrobial compounds. Without such new departures we will steadily lose our ability to combat bacterial infections.
Some forms of life have been on the Earth for well over 3 billion years. Complex organisms began diffusing about half a billion years ago, mammals became abundant 50 million years ago, our hominid ancestors date to more than 5 million years ago, and our species has been evolving increasingly complex modes of existence (though not necessarily a greater sapience) during the past 100,000 years. On the civilizational time scale—10³ years, with less than 10,000 years elapsing from the tentative beginnings of first settled cropping and less than 5,000 years since the establishment of the first complex social entities, precursors of states, in the Middle East—the biosphere is an astonishingly durable system, and worries about its unraveling seem overwrought. Such a conclusion is absolutely correct on the microbial level: a rich assemblage of viruses, archaea, bacteria, and microscopic fungi will survive any conceivable insult that human beings can inflict on the biosphere.
But a biosphere resembling an aquatic slime or consisting of a cyanobacterial layer on rocks, without millions of differentiated invertebrates (insects are the most biodiverse organisms) or any higher plants or aquatic and terrestrial animals, would not be a place fit for the evolution and complexification of hominids. This perspective illuminates the irreplaceable importance of ecosystemic services for human survival and hence, obviously, for any economic activity. The global economy is merely a subsystem of the biosphere, and it is easy to enumerate the natural services without which it would be impossible, but it is meaningless to rank their importance because they are interconnected with a multitude of feedbacks.
Categorizing natural environmental or ecosystemic services requires some arbitrary decisions because individual services overlap and interact and because it is difficult to separate form and function. In a pioneering account, Vernadsky (1931) listed these principal roles (conceptually corresponding to nature's services) played by the biosphere: gas function, the formation of atmospheric gases; oxygen function, the formation of free O2; oxygenating function producing many inorganic compounds; binding of calcium by marine organisms; formation of sulfides by sulfate-reducing bacteria; bioaccumulation, the ubiquitous concentration of many elements from their dilute environmental presence; decomposition of organic compounds; and metabolism and O2-consuming respiration resulting in CO2 generation and biosynthesis (done by aerobic heterotrophs).
A differently formulated list must include at least the following nine service categories. Photosynthesis produces all our food, a large share of our fibers for clothes, all fiber (cellulose) for papermaking, and about one-tenth of the world’s primary energy use (wood, crop residues). Nitrogen, the key macronutrient needed for plant growth, is constantly converted from its inert atmospheric form (N2) into reactive NH3 by diazotrophs (N-fixing microbes dominated by symbiotic Rhizobium) (fig. 4.15) and then into more water-soluble NO3 (by nitrifying bacteria), which is readily absorbed by plant roots. Similarly, the cycling of sulfur is mediated by various bacteria.
Ecosystems regulate water runoff (and hence the severity of floods) and retain and purify water. Root systems of grasslands are particularly effective sponges, and forest soils can filter water as well as the best artificial filters. New York City relies on the Catskill forests for its high-quality drinking water; similarly, Winnipeg, Canada (where I live) relies on the forests surrounding the Lake of the Woods. Plant cover prevents or minimizes soil erosion and hence eliminates or reduces excessive silting of streams that contributes to floods and limits stream navigability. Forests help to regulate climate on regional scales. Even a small tree grove has a microclimatic impact, and all plants are highly effective controllers of many air pollutants. Coastal vegetation, particularly extensive wetlands, provides a highly effective means of reducing shore erosion and offering protection against storm surges. Healthy soils are assemblages of fine-grained minerals and a great deal of living matter; without soil bacteria we could not produce our crops. And we should not forget the soil invertebrates, above all, earthworms, Darwin's great favorites; his last published book was devoted to these "lowly organized creatures" (1881). The decomposition of organic matter (and hence carbon sequestration) and the recycling of nutrients are almost totally dependent on microbes and micro- and macroscopic fungi. Without their services there would be neither agriculture nor animal grazing, neither coral reefs nor magnificent tropical rain forests. Natural biodiversity reduces the impact of disease and pest attacks. I have already noted the importance of pollination.
Even if expense were no object, none of these services could be performed at such scales and with such efficacy by any anthropogenic means. Our dependence on biospheric services is literally a matter of survival, and that is why the biosphere’s integrity matters. Localized assaults exact a local price in degraded farmland or pasture, or in poor yields or skeletal animals (caused by inadequate recycling of organic matter or severe overgrazing), or in streams leaving their banks because of flooding aggravated by massive deforestation of a watershed. Regional impacts can influence the fortunes of a nation, but if the situation is desperate enough, people move. But major interference with ecosystemic services on a global scale is an entirely new challenge.
Never before have we been in a situation where we can concurrently affect so many biospheric functions on such a grand level and unleash a multitude of foreseeable as well as utterly unpredictable environmental consequences. The story of chlorofluorocarbons (CFCs) is a perfect illustration of such unintended perils. In the early 1920s, Thomas Midgley, Jr., a research chemist working for GM, introduced a lead-based additive for gasoline in order to eliminate engine knocking. At the time, the toxic effects of lead were well known. Midgley himself had lead poisoning in 1923, and the long-term problem with the use of this additive could have been anticipated. But a decade later Midgley selected CFCs as perfect refrigerants because they were inexpensive to synthesize, inert, noncorrosive, nonflammable, and "nontoxic" (fig. 4.16) (Midgley and Henne 1930).
Nobody anticipated any adverse effects when their use spread (after WW II) to hundreds of millions of refrigerators and air conditioners, and to propelling aerosols, blowing foams, cleaning electronic circuits, and extracting plant oils (Cagin and Dray 1993). Four decades after their introduction, experiments indicated that once these gases reach the stratosphere, they are dissociated by UVB waves of 290-320 nm, which are entirely screened from the troposphere by stratospheric ozone (Molina and Rowland 1974). (They are much heavier than air, but turbulent mixing eventually transports them in small concentrations to more than 15 km above the ground.) This breakdown releases free chlorine atoms that then break down O3 molecules and form chlorine oxide; in turn, ClO reacts with O to produce O2 and frees the chlorine atom. Before it is eventually removed, a single chlorine atom can destroy about 105 O3 molecules. In 1985, the British Antarctic Survey confirmed the existence of this process by discovering a severe thinning of the ozone layer above Antarctica, the famous "hole" (Farman, Gardiner, and Shanklin 1985).
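The two reaction steps just described form a closed catalytic loop, which makes the per-atom destructiveness easy to tally. A minimal bookkeeping sketch (the reactions and the ~10⁵ figure are from the text; treating removal as a fixed pass count is a simplifying assumption):

```python
# Net stoichiometry of the chlorine catalytic cycle:
#   Cl  + O3 -> ClO + O2   (ozone destroyed)
#   ClO + O  -> Cl  + O2   (Cl regenerated, so the cycle repeats)
# Net per pass: O3 + O -> 2 O2; Cl acts purely as a catalyst.
passes_before_removal = 100_000        # order-of-magnitude figure from the text

o3_destroyed = 1 * passes_before_removal   # one O3 consumed per pass
o2_produced = 2 * passes_before_removal    # two O2 produced per pass

print(f"one Cl atom: ~{o3_destroyed:,} O3 destroyed, ~{o2_produced:,} O2 formed")
```

Because the chlorine atom emerges unchanged from each pass, a trace amount of CFC-derived chlorine can deplete a vastly larger amount of ozone.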
Human actions thus imperiled one of the key preconditions for complex life on Earth, the planet's ozone shield. Its continuing thinning and possible extension beyond Antarctica threatened all complex terrestrial life. This had evolved only after the oxygenated atmosphere gave rise to a sufficient concentration of stratospheric ozone to prevent all but a tiny fraction of UVB radiation from reaching the biosphere. Excessive UVB radiation drastically reduces the productivity of oceanic phytoplankton (the base of marine food webs), cuts crop yields, causes higher incidence of basal and squamous cell carcinomas, eye cataracts, conjunctivitis, photokeratitis of the cornea, and blepharospasm among animals and people, and also affects their immune systems (Tevini 1993).
Fortunately, CFCs were produced in quantity only by a handful of nations, and there was a ready solution: banning them (beginning with the Montreal Protocol of 1987) and substituting for them less harmful hydrofluorocarbons (UNEP 1995). Because of the long atmospheric lifetimes of CFCs, their stratospheric effect will be felt for decades to come, but atmospheric concentrations of these compounds have been falling since 1994, and the stratosphere may return to its pre-CFC composition before 2050. This experience offers no effective lessons for controlling greenhouse gases. All countries produce them, many economies depend on large-scale extraction and sales of fossil fuels, and all modernizing countries will be consuming more hydrocarbons and coal for decades to come. Moreover, there is no possibility of any simple and relatively rapid switch to noncarbon energy sources because the requisite conversions are not yet available at required scales and acceptable prices (see chapter 3).
At the same time, no country will be immune to global climate change, and no military capability, economic productivity, or orthodox religiosity can provide protection against its varied consequences. We might come to see such preoccupations as budget deficits and immigration or trade policies as trivial when compared to the climate's changing faster than it has at any time during the past million years. Unfortunately, our capacity to assess the impact of these changes is hampered not only by an imperfect understanding of the complex feedbacks involved in the process but also by uncertainty about the future pace of change.
Long-range forecasts of fossil fuel combustion and rates of economic growth, the primary drivers of greenhouse gas emissions, are notoriously unreliable even when countries are on fairly stable trajectories. The forecasts can be drastically pushed up or down by unpredictable discontinuities in the history of major consuming nations. For example, China's carbon emissions rose sharply during Deng Xiaoping's post-1980 economic reforms (more than tripling by 2005); in contrast, emissions of the former USSR, before its dissolution the world's second-largest emitter of greenhouse gases, declined after its 1991 demise (by 1998 they were 35% below their peak a decade earlier) (fig. 4.17).