I trust that after what has been said the theory proposed in the foregoing pages will prove useful in explaining some points in geological climatology.
—Svante Arrhenius, 1896, from On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground
Perched near the top of Mauna Loa, the world’s largest volcano, Charles David Keeling in March 1958 made the first in a series of measurements of the concentration of carbon dioxide in the air. The average value for that month was 315.71 parts per million (ppm), meaning that for every one million molecules in the atmosphere, 315.71 were carbon dioxide. With the exception of a brief disruption in 1964, this record of timed measurements of the concentration of carbon dioxide in the atmosphere continues to the present day. It is one of the most important achievements and one of the most valuable sets of data from contemporary scientific research. On May 9, 2013, the average carbon dioxide concentration reached an unprecedented 400 ppm. This was the highest reading ever recorded at the site and higher than at any time in records that date back around twenty-four million years.1 The data collected at the Mauna Loa laboratory have provided convincing scientific evidence that carbon dioxide–mediated global warming is upon us. This record is now known as the Keeling Curve.
The Mauna Loa site is an ideal location for measuring carbon dioxide and other atmospheric gases. These gases are referred to as being “well mixed,” which means that there are few sources of the gases in the area and the prevailing winds that circle the globe mix gases quite evenly. In addition, there are vertical winds that mix gases to an extent between layers of the atmosphere that are close to the surface of the earth and those at high altitudes. In other words, the air samples obtained at Mauna Loa are a good representation of the average concentration of gases in the Northern Hemisphere.
Like many other notable scientists, Keeling experienced some difficult moments in his career. A few years after his initial seminal observations, funding for his work from the National Science Foundation came to a halt. Nevertheless, the Foundation cited his work in its 1963 warning about global warming. President George W. Bush awarded Keeling the National Medal of Science in 2002, and Vice President Gore presented him with a special achievement award in 1997 on behalf of a grateful nation, in recognition of the importance of his contribution.
Keeling’s work, coupled with that of thousands of scientists who have studied the earth’s climate, culminated in the 2013 report of the Intergovernmental Panel on Climate Change (IPCC), which states, “Warming of the climate system is unequivocal.”2 This is extraordinarily blunt language for scientists, who typically understate their conclusions and avoid controversy. The panel goes on to say that “since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and the ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased.”3 In spite of this detailed report prepared by literally hundreds of the world’s leading climate scientists, many influential and well-financed individuals and organizations still exhort us to believe that climate change is a hoax. (Note that the list of contributors to the IPCC report is forty-seven pages long.)
Medical education begins with the study of normal, and then pathological, biochemical and physiological processes and anatomical structures. A student must learn what is normal before tackling the abnormal. The same is true for the study of climate. The term climate change indicates that there has been a deviation from a prior state. In climate science, as in medicine, it is important to understand what has changed and why before effective actions can be taken. In a second parallel with medicine, actions designed to prevent climate change are virtually certain to be preferable to those needed to deal with a new and arguably pathological state.
It is important to distinguish between weather and climate. Weather is a precise description of specific elements of the atmosphere, such as temperature, humidity, and wind speed and direction at a given time and place. Climate is a description of the weather over an extended period of time, often years. An understanding of climate and climate change depends on an understanding of what determines climate and what affects the forces that determine climate. The scientific literature commonly refers to these forces as the drivers of climate. The task of understanding climate begins with an analysis and understanding of the earth’s energy budget.
Your personal household budget reflects income from salary, interest, and other sources, along with expenditures for food, housing, transportation, and so on. The earth’s energy budget is analogous, but like the budget of a nation, it is complex. However, this budget can be simplified to an extent. Energy from the sun is deposited on the earth in the form of solar radiation or light at different wavelengths. Different wavelengths of light have different properties. For example, infrared wavelengths transfer heat. Some of the light energy that reaches the earth is reflected back into space by clouds, the atmosphere, snow and ice on the earth’s surface, and so on. Light that reaches the earth’s land areas and oceans may be absorbed and converted into heat energy that warms the planet. Some of the heat from the warmed earth radiates back toward space in the form of long wavelength infrared energy. Some of that heat escapes into space, but some is absorbed by gases, specifically the greenhouse gases that are in the atmosphere. This absorption of energy heats the layer of the atmosphere where this process takes place, and these greenhouse gases then reemit heat energy in every direction. Some of this heat energy returns to the earth, warming it. This is the greenhouse effect.
The amount of heat energy trapped by greenhouse gases is determined by the concentration of the gas and its physical properties. Not all greenhouse gases are equal. Different gases have different properties that make them more or less efficient at trapping heat. The amount of trapped heat increases as the gas concentration increases. In terms of the earth’s energy budget, an increase in greenhouse gas concentration reduces the amount of heat energy that escapes from the earth. From a budgetary perspective, the input of heat remains the same, but the output of heat decreases as more greenhouse gases are added to the atmosphere. In accord with the laws of physics, this extra heat energy is distributed throughout the air, land, and water: they all become warmer.
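The budget bookkeeping described above can be sketched with a toy zero-dimensional energy-balance calculation. This is an illustration only, not one of the climate models used by the IPCC: the solar constant and albedo are standard physical values, while the greenhouse fraction is an assumed number tuned to make the example realistic.

```python
# Toy zero-dimensional energy balance: absorbed sunlight in, radiated heat out.
# SIGMA and S0 are standard physical constants; ALBEDO ~0.30 is the usual
# figure for the fraction of sunlight reflected back to space.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S0 = 1361.0        # solar constant at the top of the atmosphere, W/m^2
ALBEDO = 0.30      # reflected fraction

absorbed = S0 * (1 - ALBEDO) / 4   # average absorbed flux over the sphere, W/m^2

def equilibrium_temp_c(greenhouse_fraction):
    """Surface temperature (deg C) at which outgoing heat balances absorbed
    sunlight. greenhouse_fraction is the share of surface radiation trapped
    and re-emitted by the atmosphere; 0 means no greenhouse effect at all."""
    escaping_factor = 1 - greenhouse_fraction / 2  # half of trapped heat returns
    kelvin = (absorbed / (SIGMA * escaping_factor)) ** 0.25
    return kelvin - 273.15

print(round(equilibrium_temp_c(0.0), 1))   # about -18.6: the no-greenhouse value
print(round(equilibrium_temp_c(0.77), 1))  # about 14.3: near the observed average
```

Even in this crude sketch, raising the greenhouse fraction warms the surface: the input of heat is unchanged, but less of the output escapes.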
The scientists who study the climate and produced the IPCC reports use the term radiative forcing (RF) to describe the change in downward minus upward energy transfer to and from the earth at the top of the atmosphere that is due to a driver of climate change. Positive RF leads to warming of the earth and negative RF to cooling. Each component of the atmosphere makes its unique contribution to the total value for RF, and the contribution made by each greenhouse gas can be determined with great accuracy. By using information about the atmosphere of the past, present-day values for RF can be compared to past values. Changes in the RF of the earth can be due to human activity or to natural phenomena. For example, greenhouse gases—such as carbon dioxide, methane, and nitrous oxide—all act to increase the total RF. Volcanic eruptions that inject large amounts of dust and sulfur dioxide into the upper atmosphere decrease the total RF; transient periods of cooling have been observed after particularly large eruptions. The current value for RF for the whole earth is about 2.3 watts per square meter (W/m2) higher than the 1750 pre–Industrial Revolution value. The overwhelming majority of climate scientists agree that this increase is due to human activity and is driving climate change. Disagreement among scientists centers on uncertainties in the data, such as measurement error (an element of every measurement) and uncertainty in the concentrations of relevant atmospheric components.
Changes in the earth’s temperature are at the heart of the argument that the earth’s climate is warming.
The ability to measure the temperature of the earth, or any other object, is something we now take for granted, but this has not always been the case. The concept that the volume of air and other substances changes when heated or cooled was known in ancient times. Primitive instruments called thermoscopes were built by a number of early scientists, including Galileo. They consisted of a tube partially filled with air and water; the air–water boundary moved as the instrument was heated or cooled and as the air pressure changed. What we would recognize as a modern thermometer was born when instrument makers sealed a liquid in a bulb-and-tube device and added a reference scale. This made it possible to make and compare measurements that were not also partly dependent on air pressure. The earliest measurements of the temperature of the atmosphere began in the middle of the seventeenth century.
The earliest continuous air temperature measurements were made in the Midlands of England, beginning around 1659. These early data show that the average temperature in January was 3°C, with a yearly average of 8.8°C. These data are shown in panel A of figure 2.1. Although these early measurements lack the precision and accuracy that are characteristic of the thousands of measurements made all over the world with contemporary instruments, the values recorded over the next 314 years show little deviation from an average yearly temperature of around 9°C, which is about 49°F. This is in marked contrast to the data shown in panel B of figure 2.1, showing a substantial rise in the global surface temperature that began around the start of the twentieth century. Thus, there was little if any change in the temperatures recorded in the Midlands of England between 1659 and 1973. This stands in contrast to the increase in the global surface temperature of about 0.8°C that has taken place between 1900 and the early part of this century.
Although the instrumental age of temperature recordings began in the mid-seventeenth century, the earth’s temperature record reaches much farther back into the past. One remarkable study yielded a temperature record that extends back almost six hundred million years.4 These data point to temperatures that were around 7°C warmer than they are now. Other less dramatic but more direct records extend back just over 420,000 years, as shown in figure 2.2.
In order to compute temperatures at these extreme dates, scientists have developed so-called proxies for the measurement of the actual temperature with a thermometer. Proxies depend on well-preserved physical aspects of samples that can be analyzed under laboratory conditions and related, in a verifiable manner, to the temperature of the sample when it formed. Examples of proxy measurements include very old ice samples, growth rings in trees, pollen grains, and samples of coral or other marine animals. Some of the most useful proxy data have come from core samples obtained by drilling into Antarctic ice sheets and sediments at the bottoms of oceans. These core samples contain air bubbles that yield important information about the composition of the ancient atmosphere.
To better understand proxy measurements that extend far into the past, research must account for fluctuations in the amount of solar energy that reaches the surface of the earth (solar insolation). These changes are due to the well-known characteristics of the earth’s orbit around the sun and include the following:
Changes in the shape of the earth’s orbit (technically, the orbital eccentricity, with two different periods of about 100,000 and 400,000 years)
Variations in the degree to which the earth tilts on its axis (obliquity, with a period of about 42,000 years)
The tendency for the earth to wobble on its axis, like a top (axial precession, with a period of 23,000 years)
These changes in the earth’s orbit are called Milankovitch cycles. Some of the impacts of these cycles on climate data are shown in figure 2.2.
Changes in the amount of solar energy reaching the earth due to predictable variations in the orbit of the earth occur extremely slowly—over the course of thousands of years. The rate of temperature changes due to orbital fluctuations is too gradual to play any role in the climate changes taking place now.
The ingenuity of modern scientists was taxed as they developed and validated proxies for actual measurements of temperature. Widely used techniques depend on measuring differences in the mass ratio or weight of molecules that contain different isotopic forms of elements, such as oxygen and hydrogen. Most oxygen exists in one of two stable (nonradioactive) forms: oxygen-16 (16O) and oxygen-18 (18O). The nucleus of each form contains eight protons; this makes it oxygen. Oxygen-18 contains ten neutrons, whereas 16O contains only eight, making 18O water slightly heavier than its 16O counterpart. This difference in weight between water molecules that contain 18O versus 16O causes slight but definable differences in the way these water molecules behave at different temperatures. The lighter 16O water molecules evaporate more readily than their heavier counterparts. Similarly, the heavier 18O water molecules condense out of the atmosphere before the lighter molecules.
By measuring the ratio of 18O to 16O in an ice-core sample and comparing this ratio to an appropriate reference sample, it is possible to make accurate deductions about the temperature, the amount of rainfall, or the amount of ice covering the earth when the sample was fixed by freezing (see box 2.1). A 420,000-year record of surface temperatures at the Vostok site in Antarctica and global ice coverage, made using oxygen isotopic techniques, is shown in figure 2.2. During the period when this ice formed, temperatures were typically lower than at present. However, occasional peaks of about 2°C above current temperatures occurred that correspond to the warm periods in the intervals between ice ages. These peaks are correlated with peaks in the concentration of carbon dioxide (CO2) and methane (CH4) in the atmosphere. The gas concentration data are the result of analyses of the small air bubbles that were trapped in the ice when it formed. A similar core drilled by the European Project for Ice Coring in Antarctica (EPICA) extends back about 800,000 years.
Since 16O is lighter than 18O, water containing the lighter isotope evaporates more readily than water containing the heavier isotope. Water containing heavy oxygen condenses out of the atmosphere more readily than water containing light oxygen. Water vapor from the warm equator, high in 16O, tends to move toward the cooler polar regions of the earth. This increases the amount of 18O in equatorial waters.
On the way toward the poles, the air cools to form precipitation that preferentially depletes the 18O in the air. When the remaining water vapor reaches polar regions, it contains much less heavy oxygen and relatively more light oxygen, which increases the amount of light oxygen in polar precipitation. High concentrations of heavy oxygen in ocean water, recorded in oceanic core samples as the carbonate in the shells of sea creatures such as foraminifera, are characteristic of periods when large amounts of ice covered the earth. Ice cores from polar regions with low amounts of 18O indicate low polar temperatures. When the climate warms, polar ice (rich in 16O) melts, reducing both the salt content of ocean water and its 18O to 16O ratio. Thus, by measuring the isotopic forms of oxygen in various samples and comparing these ratios to standards, proxy measurements of the temperature and ice coverage of the earth can be deduced.
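The comparison to standards described above is conventionally expressed in “delta” notation, the per-mil deviation of a sample’s isotope ratio from a reference. The VSMOW reference ratio below is a real laboratory standard; the sample ratio is a hypothetical value chosen to resemble strongly depleted polar ice.

```python
# Delta notation for oxygen-isotope proxies: per-mil deviation of a sample's
# 18O/16O ratio from a reference standard. R_VSMOW is the real reference ratio
# (Vienna Standard Mean Ocean Water); the sample ratio below is hypothetical.
R_VSMOW = 0.0020052  # 18O/16O ratio of the VSMOW standard

def delta_18o(r_sample, r_standard=R_VSMOW):
    """delta-18O in per mil (parts per thousand). Negative values indicate
    18O-depleted water, as in polar ice; positive values indicate enrichment,
    as in ocean water during glacial periods."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A hypothetical strongly depleted ratio, of the kind found in Antarctic ice:
print(round(delta_18o(0.0019250), 1))  # -40.0 per mil
```

A single number on this scale thus encodes the whole evaporation-and-condensation story told above.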
This isotopic ratio technique has other applications. Nonradioactive carbon isotopes (12C and 13C) are used to determine whether atmospheric carbon dioxide is derived from burning fossil fuels.
The data obtained from analyses of the air bubbles trapped in Antarctic ice laid down over thousands of years show that substantial temperature changes have occurred in the past. However, these changes occurred much more slowly than the changes measured since the beginning of the Industrial Revolution that continue through to the present. Critically, as discussed ahead, the concentrations of greenhouse gases in the atmosphere were much lower when the ice formed than they are at the present.
Thus, it is possible to conclude that the earth is warming—and at a rate that is greater than any rate measured for the past 800,000 years. As a corollary to the temperature data, the present concentration of greenhouse gases that are driving climate change exceeds any concentrations found during those hundreds of millennia, and they are rising at a rate that is faster than in times past.
“It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-twentieth century. This is evident from the increasing greenhouse gas concentrations in the atmosphere.”5 In this sobering conclusion published in the most recent IPCC report, “extremely likely” is shorthand for a 95 to 100 percent probability that the statement is true. These increases have occurred in spite of several high-profile international conferences, such as those that took place in Kyoto and Rio de Janeiro, where participating nations failed to produce a binding agreement that would reduce greenhouse gas emissions. A 2013 report by the United Nations Environment Programme estimated that global greenhouse gas emissions in 2005 were the equivalent of forty-five billion metric tons (45 gigatons [Gt], where 2,200 lb = 1 tonne) and rose to 49 Gt by 2010.6 The UN’s business-as-usual scenario predicts a further rise to 59 Gt by 2020. Most of the current relentless increase is driven by burning fossil fuels in developing nations, China, and the United States.
Scientists who have studied the greenhouse effect and its link to climate change have shown clearly that there has been a net gain in the earth’s heat energy (positive RF) since the dawn of the industrial age—arbitrarily established as 1750. Large increases in the atmospheric concentration of greenhouse gases, particularly those that remain in the atmosphere for long periods, have acted as the primary drivers of the increase in RF. The principal greenhouse gases are carbon dioxide, methane, nitrous oxide, stratospheric ozone-depleting halocarbons and their substitutes (chlorofluorocarbons, hydrochlorofluorocarbons, hydrofluorocarbons, perfluorinated carbon compounds, and other chlorinated and brominated chemicals), and sulfur hexafluoride. Carbon dioxide, methane, and nitrous oxide occur naturally but also are emitted as the result of human activity. The remainder—that is, all of those containing chlorine or fluorine—are products of industrial activity. Serial measurements using sensitive instruments have provided the data to support this conclusion. Some data are the result of atmospheric sampling, such as that which Charles Keeling started in the late 1950s, whereas other data have come from analyses of air bubbles trapped in ice cores.
Carbon dioxide makes the largest contribution to global warming of all of the long-lived greenhouse gases. This is due to its high concentration, its heat trapping ability (RF; see table 2.1), and the very long period for which newly formed CO2 remains in the atmosphere. A 2005 study reported that the average lifetime for atmospheric CO2 is between thirty thousand and thirty-five thousand years and that between 17 and 33 percent of the CO2 in the air today will still be in the atmosphere one thousand years from now.7 This long lifetime has led climate scientists to use CO2 as the basis for comparing the global warming potential of all of the long-lived greenhouse gases (see table 2.1).
In more technical terms, total RF in the time interval between 1750 and 2011 attributable to human activity is around 2.29 W/m2 (90 percent of the estimates range between 1.13 and 3.33 W/m2). Approximately 73 percent of that total, or 1.68 W/m2, is attributed to CO2 according to the IPCC Fifth Assessment Report (90 percent of these estimates range between 1.33 and 2.33 W/m2).
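The attribution arithmetic above is easy to verify from the two quoted central estimates:

```python
# Verifying the CO2 share of total anthropogenic radiative forcing, 1750-2011,
# from the two IPCC Fifth Assessment central estimates quoted in the text.
total_rf = 2.29  # W/m^2, total anthropogenic forcing
co2_rf = 1.68    # W/m^2, forcing attributed to CO2 alone

share = co2_rf / total_rf
print(f"{share:.0%}")  # 73%
```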
Carbon is present in all organic molecules and is essential for life. Because it is found everywhere and in all living things, it is not surprising that the earth’s carbon budget is extremely complex. Figure 2.3 is a simplified overview of the carbon cycle that depicts the equilibrium between the atmosphere, the oceans, and terrestrial systems, and how human activity has disrupted the carbon cycle.
The ability to make serial measurements of the atmospheric concentration of CO2 has led to valuable insights into the mechanisms causing climate change. The record of these measurements, now referred to as the Keeling Curve, is shown in figure 2.4. The curve has two essential features: a seasonal fluctuation, superimposed on a relentless increase. Work based on Keeling’s measurements has shown that the reductions in the concentration that occur during the spring and summer are due to carbon dioxide that is trapped in the new growth of trees and other plants in the Northern Hemisphere. In the fall and winter, when this growth has ceased, the concentration rises.
Data from ice cores, shown in figure 2.2, show the concentration of CO2 in the atmosphere for the past 420,000 years. There have been five CO2 concentration peaks in that interval, corresponding to the periods of interglacial warming. As seen in the figure, the highest concentration of CO2 in the ice core data is approximately 300 ppm. A concentration of 400 ppm was recorded on May 9, 2013. Since 1980, the concentration of CO2 has risen at an average rate of 1.7 ppm per year. It is all but certain that the cyclical rise described by Keeling will continue into the foreseeable future.
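A quick projection from the quoted growth rate shows both its power and its limits. The 1980 starting concentration used here (about 339 ppm) is an approximate Mauna Loa annual mean, not a figure given in the text:

```python
# Linear projection from the quoted average growth rate of 1.7 ppm per year.
# The ~339 ppm starting value for 1980 is an approximate Mauna Loa annual
# mean, assumed for illustration.
START_YEAR, START_PPM = 1980, 339.0
RATE_PPM_PER_YEAR = 1.7

def projected_ppm(year):
    return START_PPM + RATE_PPM_PER_YEAR * (year - START_YEAR)

print(round(projected_ppm(2013), 1))  # 395.1
```

The linear projection falls about 5 ppm short of the 400 ppm actually measured in 2013, which reflects the fact that the annual growth rate itself has been increasing.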
How can we be so certain that this excess of CO2 is due to human activity? Climate change deniers point to the natural CO2 variability shown in figure 2.2 and contend that the recent changes are nothing new—but they are wrong. The current CO2 level is something quite new. The present concentration is higher than at any time in the last twenty-four million years.8
The link between burning fossil fuels and the rise in atmospheric carbon dioxide is firm. Records and estimates of fossil fuel combustion correlate well with the rate and magnitude of the rise in the atmospheric CO2.
In chemistry, we learned that CO2 is formed by the creation of the chemical bond between an atom of carbon and a molecule of oxygen:
C + O2 → CO2 + heat
From this equation, we can see that a molecule of oxygen is consumed whenever a molecule of CO2 is created. Because the concentration of oxygen in the atmosphere is very high, it is extremely difficult to measure the very small changes in its concentration predicted by the equation. Nevertheless, this is what Keeling’s son, Ralph Keeling, did. His work has shown that there is a cyclic change in the concentration of atmospheric oxygen that matches the prediction made by his father’s work.9 These data are a second critical part of the proof that the increase in the concentration of CO2 in the atmosphere is due to burning fossil fuels and not the result of natural processes.
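The stoichiometry of the combustion equation also fixes the masses involved. Using standard atomic masses, a sketch of the oxygen consumed and the CO2 produced for each tonne of carbon burned:

```python
# Mass bookkeeping for C + O2 -> CO2 using standard molar masses.
# Mass ratios are the same whether the unit is grams or tonnes.
M_C, M_O2, M_CO2 = 12.011, 31.998, 44.009  # molar masses, g/mol

def combustion_masses(tonnes_carbon):
    """Return (tonnes of O2 consumed, tonnes of CO2 produced) when
    tonnes_carbon of pure carbon is burned completely."""
    moles_per_tonne = tonnes_carbon / M_C
    return moles_per_tonne * M_O2, moles_per_tonne * M_CO2

o2_used, co2_made = combustion_masses(1.0)
print(round(o2_used, 2), round(co2_made, 2))  # 2.66 3.66
```

Each tonne of carbon burned removes roughly 2.7 tonnes of oxygen from the air and adds roughly 3.7 tonnes of CO2, which is why the younger Keeling’s oxygen measurements mirror his father’s CO2 curve.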
Advances in analytical chemistry and a better understanding of photosynthesis have made it possible to produce one of the most sophisticated pieces of evidence linking burning fossil fuels with the rise in atmospheric CO2.10 Like oxygen, carbon exists in several isotopic forms. Carbon-12, containing six protons and six neutrons, accounts for about 99 percent of the element. The remaining 1 percent contains an extra neutron, making it 13C. Both are nonradioactive. (The very tiny amounts of radioactive 14C remove it from consideration in a geological time perspective because of its half-life of about 5,700 years.) Carbon dioxide containing the lighter 12C enters the leaves of plants more easily than its heavier counterpart. That, along with the fact that certain photosynthetic pathways prefer the lighter isotopic form, leads to a dilution of 13C in plants compared to the 13C in the atmosphere. When burned, the resulting CO2 has less of the heavier isotopic form and more of the lighter isotope. Because fossil fuels are derived from prehistoric plants, the CO2 from fossil fuel combustion is also “plant-like” in terms of its 13C to 12C ratio when compared to a standard. This low 13C to 12C ratio allowed scientists to identify fossil fuels as the source of the increased atmospheric CO2. Serial measurements of this ratio between 1980 and 2002 show a progressive decline, the result expected as a consequence of burning plant-based fossil fuels.11 The fall in the isotope ratio also parallels the increase in fossil fuel consumption during this same period.
Of the long-lived greenhouse gases, CH4 makes the second most important contribution to climate change. As shown in table 2.1, the global warming potential of methane is eighty-six and thirty-four times greater than that of CO2 when measured twenty and one hundred years after emission, respectively. The concentration of methane in the atmosphere has risen fairly steadily since the beginning of the Industrial Revolution, as shown in figure 2.5. In the early years of this century, there was a period of apparent stabilization, followed more recently by a return to an upward trend.12 Although the reasons for the stabilization are not completely clear, the resumption of the increase is thought to be due to emissions from the fossil fuel sector (oil, coal mines, coal-bed methane, and natural gas). This increase, among other considerations, apparently triggered the plans to regulate methane emissions that the EPA announced in early 2015.
The global methane budget is complex due to multiple incompletely understood natural and anthropogenic sources, shown in part in table 2.2. Total methane emissions from anthropogenic sources in the interval between 2000 and 2009 are thought to be around 365 million tons. The fossil fuel industry is responsible for about a third of all anthropogenic methane emissions (table 2.2). Natural sources emit about 382 million tons. Total methane sinks are estimated to be around 666 million tons per year. Most atmospheric methane is oxidized in the lower atmosphere. Unfortunately, these oxidation reactions produce ground-level ozone, which has secondary effects on climate and health. There are substantial uncertainties in this budget due to emerging data that seek to quantify fugitive methane leaks at every stage of natural gas production, distribution, and utilization. There is also uncertainty about the fate of huge amounts of methane trapped in ice-like water crystals in the permafrost. These crystals, known as methane clathrates, can be burned; some call clathrates “burning ice.” They form a portion of the carbon trapped in the permafrost shown in figure 2.3.
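The budget figures quoted above can be balanced directly. Sources exceed sinks, which is why atmospheric methane accumulates:

```python
# Balancing the methane budget figures quoted above (millions of tons/year).
anthropogenic = 365.0  # total anthropogenic emissions, 2000-2009
natural = 382.0        # natural emissions
sinks = 666.0          # total sinks, mostly oxidation in the lower atmosphere

net_accumulation = anthropogenic + natural - sinks
fossil_fuel_share = anthropogenic / 3  # "about a third" per the text

print(round(net_accumulation))   # 81 million tons/year added to the atmosphere
print(round(fossil_fuel_share))  # about 122 million tons/year from fossil fuels
```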
Table 2.2 Global methane budget

Agriculture and waste
Total Arctic carbon: 1,843 billion tons
Carbon trapped in permafrost: 1,615 billion tons

Note: Unless otherwise noted, units are in millions of US tons (2,000 lb/ton) and are taken from Table 6.8 of Climate Change 2013: The Physical Science Basis; Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, ed. T. F. Stocker, D. Qin, G-K. Plattner, et al. (New York: Cambridge University Press, 2014). Numbers in parentheses represent ranges from sources included in the table. Data for Arctic carbon are from C. Tarnocai, J. G. Canadell, E. A. G. Schuur, P. Kuhry, G. Mazhitova, and S. Zimov, “Soil Organic Carbon Pools in the Northern Circumpolar Permafrost Region,” Global Biogeochemical Cycles 23, no. 2 (2009), doi:10.1029/2008GB003327.
Issues surrounding methane (the principal component of natural gas) and its role in global warming have taken on new dimensions as the result of arguments suggesting that natural gas can serve as a “bridge” from coal to more sustainable sources that make smaller contributions to global warming. The crux of the argument centers on the fact that burning methane produces about 57 percent as much CO2 per unit of heat as burning coal. Burning coal also has numerous serious adverse health effects, strengthening the bridge-fuel argument.13 The recent boom in natural gas extraction, particularly with the widespread use of hydraulic fracturing, or fracking, has led to a substantial reduction in the price of natural gas. Coupled with increasingly stringent emission requirements for coal-fired power plants and pending rules that will regulate carbon dioxide emissions by new and existing power plants, this price reduction has triggered a shift from coal to natural gas in many electricity-generating units. Alas, the solution is not that simple.
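The bridge-fuel arithmetic can be sketched as follows. The 57 percent combustion figure comes from the text and the 20-year warming potential of 86 from table 2.1; the per-megajoule emission factors and the energy content of methane are approximate engineering values assumed for illustration. This is a rough sensitivity sketch, not a full life-cycle analysis:

```python
# Sensitivity sketch: how methane leakage erodes natural gas's CO2 advantage
# over coal on a 20-year basis. Emission factors are approximate assumptions.
COAL_CO2_PER_MJ = 95.0  # g CO2 per MJ of heat from coal (approximate)
GAS_CO2_PER_MJ = 54.0   # g CO2 per MJ from gas (~57% of coal, as in the text)
CH4_PER_MJ = 18.0       # g of methane burned per MJ of heat (~55 MJ/kg)
GWP20 = 86.0            # 20-year warming potential of methane (table 2.1)

def gas_vs_coal(leak_fraction):
    """CO2-equivalent footprint of gas heat relative to coal (= 1.0), when
    leak_fraction of the gas produced escapes to the atmosphere unburned."""
    leaked_ch4 = CH4_PER_MJ * leak_fraction / (1 - leak_fraction)
    return (GAS_CO2_PER_MJ + leaked_ch4 * GWP20) / COAL_CO2_PER_MJ

for leak in (0.00, 0.01, 0.03):
    print(f"{leak:.0%} leakage: {gas_vs_coal(leak):.2f} x coal")
# 0% -> 0.57, 1% -> 0.73, 3% -> 1.07: on this sketch, a few percent
# leakage makes gas worse than coal over a 20-year horizon.
```

On these assumptions the break-even leak rate is only a few percent, which is why the field measurements of leakage described below matter so much.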
The amount of methane that escapes into the atmosphere from the natural gas industry is hotly contested. Several recent studies illustrate the debate. Scientists sponsored by the National Oceanic and Atmospheric Administration (NOAA) mapped methane plumes over gas wells in Uintah County, Utah, in 2012.14 They reported that between 6.2 and 11.7 percent of all of the natural gas produced in the field studied escaped into the atmosphere. Using similar technology mounted in a motor vehicle, others mapped all of the streets in Boston, Massachusetts.15 They identified 3,356 sites where methane appeared to be leaking into the atmosphere. In some places, methane concentrations were approximately fifteen times greater than the concentration in well-mixed greenhouse gas samples, such as those collected at multiple sites in the Northern Hemisphere. A similar study of Washington, DC, identified almost six thousand leaks, with some methane concentrations reaching explosive levels.16 These investigators measured the 13C to 12C isotopic ratio in their methane samples and concluded that the methane came from the natural gas distribution system as opposed to natural sources. Similar studies have been performed in California, with an emphasis on the Los Angeles area, and in Indianapolis, Indiana.17 On the basis of these studies, it seems likely that the estimate of the fossil fuel industry’s contribution to anthropogenic sources of methane, as shown in table 2.2, is too low.
This possibility is supported by another recent study of methane releases throughout the United States. This study concluded that the estimates made by the US EPA and the Emissions Database for Global Atmospheric Research (EDGAR) were low by factors of about 1.5 and 1.7, respectively.18 When various sources were estimated, these investigators concluded that the methane emissions from cattle (ruminants and manure) may be twice as high as suggested in prior studies. The prior estimates were particularly low for the south-central portion of the United States. In that region, the authors of this new methane study found that emissions were about 2.7 times greater than reported by others. Fossil fuel extraction and processing in refineries were identified as contributing to almost half of this excess, leading to the conclusion that fossil fuel emissions were almost five times greater than reported in the comprehensive EDGAR database. Emissions from three states—Texas, Oklahoma, and Kansas—account for 24 percent of all US methane emissions.
Although methane leaks from wells are suspected as a source of a substantial amount of methane, there is little consensus on this point. In a widely heralded study sponsored in part by the natural gas industry and the Environmental Defense Fund, investigators from several top-tier universities reported that emissions during natural gas production were approximately 0.42 percent of the total yield. This finding was challenged by Anthony Ingraffea, a Cornell University professor widely recognized for his expertise in this field, and his colleagues. He pointed out several potential sources of bias in the study, including the small number of industry-selected wells using hydraulic fracturing techniques.
In numerous public appearances, Ingraffea has reported data from publicly available sources detailing methane leaks from wells drilled in Pennsylvania. These data were published in the highly respected Proceedings of the National Academy of Sciences in the summer of 2014.19 Ingraffea’s team found that methane leaks from wells that take advantage of horizontal drilling with hydraulic fracturing techniques are several times more common than leaks from wells drilled in a more conventional vertical manner. They also reported that leaks from wells drilled after 2009 are more common than leaks from older wells. Because the locations of many abandoned wells are not known precisely, measuring leaks from such wells is difficult, and the true magnitude of the leaking well problem may be difficult to determine. In the meantime, approaches such as more rigorous control of drilling procedures and greater attention to constructing durable well casings that remain intact and do not leak are warranted.
The magnitude of the methane leak problem has enormous implications for combating global warming. Even though its lifetime in the atmosphere is short relative to that of CO2, the global warming potential of methane is high, as shown in table 2.1. Theoretical projections of methane releases, since essentially confirmed by measurement, indicate that when fugitive methane releases are added to the CO2 produced by burning natural gas, the total warming impact of natural gas may exceed that of coal.20 This forms the crux of Ingraffea’s argument that natural gas is a “gangplank to a warm future,” as he wrote in an op-ed in the New York Times.21
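The bookkeeping behind such comparisons can be sketched as a simple CO2-equivalence calculation: a methane release is multiplied by its global warming potential at a chosen time horizon. The GWP values below are illustrative, drawn from commonly cited IPCC figures rather than from table 2.1, and the leak quantity is invented:

```python
# CO2-equivalence sketch for a fugitive methane release.
# GWP values are illustrative IPCC-style figures, not the book's table 2.1.
GWP_CH4 = {20: 86, 100: 34}  # warming potential of methane vs. CO2

def co2_equivalent(methane_tonnes, horizon_years):
    """Convert a methane release to tonnes of CO2-equivalent."""
    return methane_tonnes * GWP_CH4[horizon_years]

# A hypothetical field leaking 1,000 tonnes of methane per year:
print(co2_equivalent(1000, 20))   # over a 20-year horizon
print(co2_equivalent(1000, 100))  # over a 100-year horizon
```

Because the twenty-year GWP is so much larger, the choice of time horizon drives much of the natural-gas-versus-coal debate.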
The IPCC’s methane budget estimates include emissions from permafrost and from methane hydrates, or clathrates (see table 2.3 and figure 2.3). Their data summary suggests that the total amount of carbon trapped in permafrost is between one and a half and six times the amount in natural gas reserves. In the Fifth Assessment Report, these scientists also state that it is virtually certain (99–100 percent probability) that climate change will cause a retreat of the permafrost, which will release methane. In a 2011 report in Nature, scientists from the Permafrost Carbon Network wrote that the magnitude of carbon releases from thawing of the permafrost is “highly uncertain” but “[their] collective estimate is that carbon will be released more quickly than models suggest, and at levels that are cause for serious concern.”22
Halocarbons are carbon-containing molecules with one or more atoms of fluorine or chlorine. These chemicals do not occur in nature; all are the result of human activity and industrial processes. The original compounds were developed in the late nineteenth century, but they were not synthesized in large quantities until the late 1920s and 1930s, when they replaced the hazardous chemicals then used as refrigerants. Widespread use for this purpose, as propellants in spray cans, and in the electronics industry followed.
In the late 1970s and early 1980s, atmospheric scientists found that these gases were responsible for the destruction of the stratospheric ozone layer. Stratospheric ozone blocks ultraviolet (UV) radiation from the sun. Without this protective layer, many plants would not be able to grow. Other adverse effects caused by UV radiation, including dramatic increases in skin cancers, were predicted. As a result, the Montreal Protocol on Substances that Deplete the Ozone Layer was adopted, and recovery of the ozone layer is under way.
Then-President Ronald Reagan, not known for his support of environmental regulations, was one of the champions of this treaty, which some have referred to as “the Little Treaty That Could.” Similar chemicals that do not affect the ozone layer were developed to replace the banned substances. Unfortunately, many of these, along with the original compounds, are potent, long-lived greenhouse gases. According to the IPCC Fifth Assessment Report, as a group, chlorofluorocarbons and related compounds (hydrochlorofluorocarbons, hydrofluorocarbons, perfluorinated carbon compounds, other chlorinated and brominated chemicals, and sulfur hexafluoride) rank third among the long-lived greenhouse gases in terms of their total effect on global warming. The best estimate for their RF is 0.18 W/m2.
Nitrous oxide (N2O) is the fourth of the common, long-lived greenhouse gases, with an estimated RF of 0.17 W/m2. Its atmospheric lifetime is 121 years. As shown in table 2.1 and figure 2.5, its global warming potential is very high: 268 times that of carbon dioxide at twenty years after it enters the atmosphere and 298 times at one hundred years. Because of decreases in the atmospheric concentration of Freon (perhaps the most widely used halocarbon) mandated by the Montreal Protocol, N2O now ranks third among individual gases in terms of its contribution to global warming. Nitrous oxide emissions arise directly or indirectly from multiple sources, largely related to agriculture and food production, which account for about 60 percent of the anthropogenic emissions of this gas.23 Nitrous oxide arises from synthetic fertilizers, animal wastes and their management, nitrogen leaching and runoff, human sewage, and other sources. As with other greenhouse gases, the atmospheric concentration of N2O has risen dramatically since the onset of the Industrial Revolution. Ice core and other data indicate that current concentrations are higher than at any other time during the last eight hundred thousand years.
In addition to the greenhouse gases discussed thus far, other atmospheric constituents affect the energy balance of the earth. In general, these are short-lived compounds, such as volatile organic compounds (other than methane) and carbon monoxide, which cause positive forcing, or a net energy gain. They yield ozone, methane, and CO2 as the result of atmospheric chemical reactions. Black carbon also contributes to energy gain. Oxides of nitrogen, produced in boilers and internal combustion engines, cause a net energy loss, as do mineral dust, sulfates, and nitrates. The interaction between clouds and atmospheric aerosols (tiny droplets and particles dispersed in the atmosphere) also produces negative forcing, an energy loss that has blunted the effects of industrial-age greenhouse gas emissions. This effect is difficult to quantify and is responsible for a substantial amount of the uncertainty in the estimate of total RF. Finally, changes in land use have affected the earth’s albedo, or the tendency of the earth to reflect solar energy, and contribute a small negative effect to the earth’s energy balance. These factors are all discussed in detail in the IPCC Fifth Assessment Report discussion of the physical science basis for climate change.24
The IPCC Fifth Assessment Summary for Policy Makers begins its section on the climate of the future with this unequivocal statement: “Continued emissions of greenhouse gases will cause further warming and changes in all components of the climate system.” Because we have not yet experienced the future, climatologists must rely on mathematical representations of the behavior of the earth’s systems, or models, to predict future behavior. This strategy is used widely in virtually every branch of science, ranging from predictions of the behavior of subatomic particles to the interaction of neural systems in the brain. When a modeler creates a climate model, he or she divides the earth into small segments. Each segment has multiple compartments, such as the land, ice, oceans, and atmosphere. Each compartment contains components of interest, such as CO2, water, and heat, whose future behavior the modeler wants to predict. Equations based on the laws of physics and chemistry describe the movement of components such as CO2 between compartments. Additional equations describe the movement of components between the compartments of adjacent segments (e.g., the back-and-forth movement of CO2 between the atmospheric compartments of two adjacent segments). Models include predicted emissions into a compartment (e.g., CO2 emissions by power plants), a variable that can be defined by the modeler, as well as information about how a greenhouse gas behaves in earth systems, such as the simplified carbon budget shown in figure 2.3. In a final step, the model is solved to yield a desired result, such as the temperature of the atmosphere or of the ocean’s surface. Model complexity and the requirement for enormous computing power increase as the number of earth segments increases and as more variables and their behaviors are included in the model.
Understandably, scientists devote a huge amount of effort to validating models, and models of the climate are no exception. In general, model validation depends on using actual measurements of past physical conditions, such as temperatures and CO2 emissions, to predict a later, but already known, climate. For example, a modeler may take known data from 1900 to predict the present temperature. Since both the inputs and the outcome are known, the behavior of the model can be evaluated. If the model uses past data and makes an accurate “prediction” of the present condition, the model gains validity. If the model is inaccurate, it is refined, using more accurate data and an improved understanding of the climate system, until it works properly. In this way, modelers have made extensive use of direct and indirect measurements of temperature, greenhouse gas concentrations, and ocean temperatures from the past to “predict” the climate of the less distant past or the present. As scientists learn more about the behavior of the earth’s systems and as more data become available, models and their predictions become more reliable.
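This hindcasting procedure, fitting a model to early data and checking its “prediction” against held-out later observations, can be sketched in miniature. The temperature record below is an invented stand-in, and the “model” is just a linear trend, far simpler than any real climate model:

```python
# Hindcast-validation sketch: fit a trend to early data, then compare
# its "prediction" with held-out later observations (all data invented).
years = list(range(1900, 2001, 10))
temps = [13.8, 13.9, 13.9, 14.0, 14.0, 14.0, 14.1, 14.2, 14.3, 14.5, 14.6]

# "Train" on 1900-1960; hold out 1970-2000 as the known "future".
train_y, train_t = years[:7], temps[:7]
n = len(train_y)
mean_y = sum(train_y) / n
mean_t = sum(train_t) / n
slope = sum((y - mean_y) * (t - mean_t) for y, t in zip(train_y, train_t)) \
        / sum((y - mean_y) ** 2 for y in train_y)

def predict(year):
    """Trend-line 'prediction' for a given year."""
    return mean_t + slope * (year - mean_y)

# Compare the hindcast with the held-out observations.
for y, observed in zip(years[7:], temps[7:]):
    print(y, round(predict(y), 2), observed)
```

When the hindcast tracks the held-out observations, confidence in the model grows; when it misses, the model is refined before anyone trusts its forecasts.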
Because climate models are complex and virtually all of their elements include some uncertainty (a universal feature of all scientific observations), the confidence limits associated with predictions grow as the model attempts to see farther into the future. The presence of uncertainty does not mean that the model is invalid. It means that a scientist has been realistic about the limits of his or her ability to predict the future. Restated, the uncertainties associated with predictions of the climate of the future do not mean that global warming will not occur. It will.
One of the most powerful and important models was developed by NOAA’s Geophysical Fluid Dynamics Laboratory. This model includes interactions between the atmosphere and the oceans and is one of the most sophisticated approaches in current use. Many of the conclusions presented in the IPCC’s most recent assessment are based on this model and the use of four representative concentration pathways (RCPs) that specify greenhouse gas concentrations, along with other variables and their effects, at various times. The results of applying the four RCPs are shown in figure 2.6, which shows future surface temperatures, and table 2.3, which portrays the future sea levels predicted by the model scenarios. Each concentration pathway specifies the summed concentrations and effects of all greenhouse gases between the present and the year 2100 in terms of energy gain by the planet, or RF (measured in W/m2). Thus, RCP2.6 is a concentration pathway that yields an RF of 2.6 W/m2 in the year 2100. This optimistic scenario includes a temperature peak partway through the century, with subsequent reductions in temperature and in greenhouse gas concentrations due to efforts to control emissions; temperatures stabilize after 2100. The RCP4.5 and RCP6.0 pathways depict intermediate effects, whereas RCP8.5 depicts a business-as-usual scenario in which CO2 concentrations rise to about 950 ppm, just over twice the present concentration, by 2100. At that time, the model predicts a temperature increase of about 5°C with no end in sight and oceans that are 0.73 meters higher than they are now and rising at a rate that may be on the order of one-half inch per year.
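A rough sense of how an RF number translates into warming comes from multiplying it by a climate sensitivity parameter. The value used below is an assumed, illustrative mid-range figure, and the result is an eventual equilibrium warming; the transient warming in 2100 that full models report (such as the roughly 5°C for the highest pathway) is smaller because the oceans respond slowly:

```python
# Back-of-the-envelope equilibrium warming for each pathway:
# delta_T ~ LAMBDA * RF. LAMBDA is an assumed sensitivity parameter,
# not a value taken from the IPCC report.
LAMBDA = 0.8  # degrees C per W/m2, illustrative mid-range value

for name, rf in [("RCP2.6", 2.6), ("RCP4.5", 4.5),
                 ("RCP6.0", 6.0), ("RCP8.5", 8.5)]:
    print(f"{name}: ~{LAMBDA * rf:.1f} C at equilibrium")
```

The sketch makes the point of the pathway labels concrete: the number after “RCP” is itself the forcing, so the pathways differ in eventual warming by more than a factor of three.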
The overwhelming majority of climate scientists agree: the climate is warming, human activities are responsible, and, if we fail to act, there will be adverse effects that virtually all of us will feel. Overwhelming majority may be an understatement. The noted historian of science Naomi Oreskes analyzed the abstracts from 928 papers published in peer-reviewed journals to determine whether the authors endorsed the IPCC consensus position. “Remarkably,” she wrote, “none of the papers disagreed with the consensus position.”25
The RCPs that are at the heart of the IPCC report outline four options, and we are at a point where it is still possible to choose among them. Yogi Berra, a former star baseball catcher for the New York Yankees, is famously said to have offered this advice: “When you come to a fork in the road, take it.” We will take one of the four pathways, whether by choice or by default. It remains to be seen which one we will choose and where it will take us.