I know this ... a man got to do what he got to do.
—John Steinbeck, in The Grapes of Wrath1
Overshoot, adapt and recover. We will probably overshoot our current climate targets [i.e., temperatures will be higher than hoped for], so policies of adaptation and recovery need much more attention.
—Martin Parry, Jason Lowe, and Clair Hanson, commentary in Nature, 20092
“Prevention is better than treatment” is a medical axiom. Healthcare professionals typically think in terms of primary, secondary, and occasionally tertiary prevention of disease. Primary prevention refers to the steps needed to prevent the occurrence of a disease. Referring back to table 1.2, hypertension, the leading risk factor for disease throughout the world, serves as an example.3 The National Heart, Lung, and Blood Institute of the National Institutes of Health (NIH) lists the following primary prevention interventions with documented efficacy: weight loss, restricting dietary sodium, increasing physical activity, moderating alcohol consumption, supplementing the diet with potassium, and consuming a diet that is low in saturated and total fat and rich in fruits, vegetables, and low-fat dairy products.
Primary prevention of climate change requires a reduction in the emission of greenhouse gases. In order to maintain our energy-dependent economy, this means replacing energy sources that depend on burning fossil fuels with energy sources that don’t, such as wind, solar, and hydropower. It also means supporting energy research that opens other possibilities, such as artificial photosynthesis and the sunlight-driven splitting of water into hydrogen and oxygen.
Secondary prevention requires taking steps to prevent recurrence or progression of a disease once it has been diagnosed. For hypertension, this usually means starting drug treatments under a physician’s supervision. Low-dose enteric-coated aspirin is used widely for secondary prevention after a first stroke, with the goal of preventing a second one. In a climate change context, secondary prevention means taking the steps necessary to reduce threats that are on the horizon. In other words, it is necessary to adapt to the changing climate by taking action to prevent additional damage. Adaptation may include mosquito control measures, building sea walls, developing drought- and heat-resistant crops, strengthening the public health infrastructure, and many others.
Tertiary prevention involves helping patients manage long-term health problems to maximize quality of life. These measures may include rehabilitation programs and support groups. Often, the lines between secondary and tertiary prevention blur under the rubric of climate change. Adaptation is at the center of both.
Many sectors of society should and must be involved in preventive measures that center on climate change.4 These include international organizations, such as the United Nations; national governments, particularly in countries that emit large amounts of greenhouse gases; state and local governmental agencies; nongovernmental organizations, such as Physicians for Social Responsibility, Natural Resources Defense Council, the Sierra Club, and Earthjustice, to name but a few; research universities; national laboratories; and the private sector. Although individual concerned citizens may feel helpless when working to prevent climate change, everyone can make a contribution. We are all stakeholders, and stakeholders can and must be involved.
Just as conventional medicine struggles to deal with any severe medical problem, society needs as many strategies as possible to deal with climate change in order to minimize the health and environmental impacts that are so clearly on the horizon.
The IPCC authors conceptualized relative risks to health due to climate change in a series of target-like pie graphs, as shown in figure 10.1.5 The size of each slice of the pie is proportional to the risk and the potential for risk reduction. These assessments were based on a critical evaluation of the literature and the expert judgment of the authors. At present, risks are real but relatively small compared to those posed by future temperature increases. As temperatures increase, risks rise. There are reasons to be hopeful: as the figure shows, risks can be reduced substantially even if temperatures rise by as much as 4°C, but only if adaptation measures are intense.
Figure 10.1 Targeting health. Panel A is a key to understanding the remainder of the figure. The magnitude of the risk for each factor is shown by the width of the slice of the pie. The darkened portion of each slice depicts the potential for risk reduction in a hypothetical, highly adapted condition. Panel B shows the relative importance of the burden of poor health at present in a qualitative way. Panel C depicts the risks of and potential benefits to be gained from adaptation in the relatively near term, 2030 to 2040. Panel D portrays the relative risks and adaptation potentials toward the end of the century, 2080 to 2100, with a temperature rise of 4°C relative to the preindustrial era. Source: Figure 11–6 (originally in color) from D. Campbell-Lendrum, D. Chadee, Y. Honda, et al., “Human Health: Impacts, Adaptation, and Co-Benefits,” in Climate Change 2014: Impacts, Adaptation, and Vulnerability; Part A: Global and Sectoral Aspects; Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, ed. C. B. Field, V. R. Barros, D. J. Dokken, et al., 709–754 (New York: Cambridge University Press, 2014). Reproduced per IPCC guidelines.
Parry, Lowe, and Hanson, the authors of the second quotation at the beginning of this chapter, made several important points that emphasize the reality of the situation we face. They wrote that those engaged in efforts to halt climate change by reducing the emission of greenhouse gases “need to be optimistic that their decisions could have swift and overwhelmingly positive effects on climate change.”6 Although these words were written in 2009, the message is still true and important today. The climate system responds slowly, and the benefits of mitigation and adaptation will not be evident until far into the future. The authors gave several examples: if emissions were to peak in 2015, temperatures would not peak until 2065 (a peak-to-peak interval of fifty years). Delaying the emissions peak by ten years, to 2025, pushes the temperature peak an additional fifteen years into the future, to 2080 (a peak-to-peak interval of fifty-five years). In their most extreme scenario, if the emissions peak did not occur until 2035, the temperature peak would not occur until 2100 (a peak-to-peak interval of sixty-five years). Do the math: deferring the emissions peak by twenty years (from 2015 to 2035) delays the temperature peak by thirty-five years (from 2065 to 2100) and stretches the peak-to-peak interval to sixty-five years (2100 − 2035 = 65), roughly an average human lifetime.
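A minimal sketch of that arithmetic, tabulating the three scenarios exactly as stated above (the scenario years are the authors’; the code itself is merely illustrative):

```python
# Peak-to-peak arithmetic from the Parry, Lowe, and Hanson scenarios.
scenarios = [
    (2015, 2065),  # (emissions peak, temperature peak)
    (2025, 2080),
    (2035, 2100),
]

first_emissions_peak = scenarios[0][0]
for emissions_peak, temperature_peak in scenarios:
    interval = temperature_peak - emissions_peak   # peak-to-peak interval
    deferral = emissions_peak - first_emissions_peak
    print(f"Emissions peak {emissions_peak} -> temperature peak {temperature_peak}: "
          f"interval {interval} yr, emissions deferral {deferral} yr")
# The interval grows 50 -> 55 -> 65 yr as the emissions peak slips 0 -> 10 -> 20 yr.
```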
In 2008, the United Nations Framework Convention on Climate Change updated its 2007 report on the investments needed to mitigate and minimize climate change.7 The update estimated that expenditures of around $86 billion would be needed in 2015, and that an additional annual investment of between $200 billion and $210 billion would be needed by 2030 to reduce the emission of carbon dioxide equivalents to 25 percent below 2000 levels. Funding at this level is nowhere in sight. To paraphrase the Steinbeck quote, what does a man got to do? The answer varies substantially among countries and governmental entities.
There are limits to everything, and this is particularly true of the ability of humans to withstand elevated temperatures. Relatively sharp limits define the range of temperatures within which humans can survive.8 All adults and children generate heat as the result of normal metabolic processes; even at complete rest, the body produces about one hundred watts of power, a floor that cannot be reduced. Any increase in activity expends energy and increases heat output as we do work, and the increase can be very large under conditions of extreme exercise. In addition, wearing bulky clothing or uniforms (e.g., football uniforms) interferes with the heat loss needed to maintain a normal body temperature, causing retention of body heat and a rise in body temperature. This is why football players and others are particularly susceptible to developing heat-related illnesses during practices or games: they wear bulky protective gear while engaging in very intense exercise.
In order to maintain a normal body temperature of 98.6°F (37°C), the body must dissipate enough heat to prevent a rise in body temperature. We dissipate heat, regardless of how it is generated, by four mechanisms: convection, conduction, radiation, and evaporation. Evaporative heat loss, which occurs mostly via sweating, is the most important of these. It is also possible to dissipate heat by immersion in water, a more efficient but often impractical alternative to sweating. Ambient temperature and humidity limit the ability to lose heat by evaporation. Above a critical temperature, the body gains heat no matter what. This limit is typically defined by the wet bulb temperature (Tw), the temperature registered by a well-ventilated ordinary thermometer covered by a wet cloth. The Tw limit for humans is around 35°C; at 100 percent relative humidity, the wet bulb temperature equals the air temperature. Hyperthermia develops at temperatures above this point and can lead to heat exhaustion or to the more serious and potentially fatal condition of heat stroke, as discussed in chapter 3. As the climate warms and more water enters the atmosphere from warming oceans, temperatures that exceed the Tw threshold will become more common. Adaptive measures will become increasingly important.
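The wet bulb temperature can be estimated from air temperature and relative humidity. The sketch below uses Stull’s 2011 empirical fit, an outside reference rather than a formula from the sources cited in this chapter; it is accurate to roughly 1°C for humidities between about 5 and 99 percent:

```python
import math

def wet_bulb_stull(temp_c: float, rh_pct: float) -> float:
    """Approximate wet bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (percent), per Stull's 2011 empirical fit."""
    return (temp_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(temp_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

print(round(wet_bulb_stull(35.0, 99.0), 1))  # ~35: near saturation, Tw ~ air temperature
print(round(wet_bulb_stull(40.0, 50.0), 1))  # ~31: very hot, but still below the 35 deg C limit
```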
The Chicago heat wave of July 1995 was a learning experience for the public health community. An analysis of the Chicago data helps point the way toward the development of adaptive measures that should limit morbidity and mortality associated with heat. The most important of these measures is improving access to air conditioning. Public health programs that identify those at risk, reach out to them at a time of an impending heat emergency, and move them to safer locations should be protective.
Strategies that have a longer lead time and that are designed to alter the spaces in which we live, work, and play on a daily basis (the built environment) should help from several perspectives. White roofs reflect more heat than black ones (common in tar-and-gravel roofing). Buildings with “green roofs” covered by grass or other plants will benefit their occupants. These measures lead to cooler buildings, particularly those that are poorly insulated. Cooler buildings require less air conditioning to maintain healthy indoor temperatures. Reducing the need for air conditioning also reduces the amount of indoor heat transferred to the environment, which counteracts the heat island effect that is increasingly common and problematic in modern urban areas. Lowering air conditioning demand also reduces the consumption of electricity, particularly at times of peak usage, and the need to burn fossil fuels to generate that electricity.
Spurred on by the realities of climate change and a strong desire to avoid the consequences of another heat wave such as the one that devastated the city in 1995, Chicago established a task force to plan for the future. This task force has done more than kick the can down the road a few feet: it has created a series of steps that its creators consider ambitious but workable, steps necessary to protect the health of Chicagoans and designed to bring the city and its residents safely and responsibly into the next century. Details of the plan can be viewed at www.chicagoclimateaction.org.
The plan’s creators began by confronting some of the realities of their present situation and likely scenarios for the future. Predictions by climate change experts suggest that Chicago will have a climate like that of Mobile, Alabama, by the end of the century, and that the number of days per year when the temperature exceeds 100°F will rise from two to thirty-one. An energy and greenhouse gas inventory showed that the city emitted around 36.2 million metric tonnes of carbon dioxide equivalents each year (MMTCO2e; one metric tonne ≈ 2,200 lb.). That works out to around 12.7 tonnes per person (the arithmetic is sketched in the example following this list). In keeping with the Kyoto Protocol, the plan sets targets of a 25 percent reduction in CO2e emissions by 2020 and an 80 percent reduction by 2050. The plan to achieve this goal focuses on five objectives:
Improving the energy efficiency of buildings
Using clean renewable sources of energy
Improving transportation options
Reducing waste and industrial pollution and improving energy efficiency
Adapting to higher temperatures
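As promised above, a back-of-envelope check of the inventory figures; the implied population and the target arithmetic follow directly from the numbers quoted, though applying the percentage cuts to the cited annual total (rather than to the plan’s official baseline year) is an assumption made here for illustration:

```python
# Check the per-capita figure implied by the inventory above.
total_mmt_co2e = 36.2                    # million metric tonnes CO2e per year
per_person_tonnes = 12.7                 # tonnes CO2e per person per year
population = total_mmt_co2e * 1e6 / per_person_tonnes
print(f"Implied population: {population / 1e6:.2f} million")  # ~2.85 million

# Emissions allowed under the plan's targets, applied to the cited annual total.
for year, cut in ((2020, 0.25), (2050, 0.80)):
    print(f"{year}: {total_mmt_co2e * (1 - cut):.1f} MMTCO2e")
```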
The task force came to the realization that most of the readily achievable goals were associated with buildings and transportation. Around 70 percent of the city’s greenhouse gas emissions were attributed to buildings and 21 percent to the city’s various forms of transportation—mostly from cars, trucks, and buses. Only 9 percent were the result of the combined effects of all other sources.
A strategy to retrofit existing buildings with more energy-efficient systems is central to Chicago’s plan. In one example that the plan creators cite, F&F Foods spent $780,000 on an energy-efficiency project that led to an immediate savings of $280,000 per year, a payback time of roughly 2.8 years. Similar retrofitting projects applied to half of the city’s commercial and industrial buildings are predicted to yield a 30 percent savings in energy, equivalent to 1.3 MMTCO2e annually. A 50 percent improvement in the energy efficiency of residential buildings would translate into another 1.44 MMTCO2e per year. Replacing tungsten light bulbs with compact fluorescents or light-emitting diodes (LEDs) and replacing old, energy-hogging appliances such as older refrigerators with more modern, efficient models would yield another 0.28 MMTCO2e. As a concrete example, my wife and I recently replaced a refrigerator/freezer purchased in 1992 that had an annual energy consumption of 1600 kWh of electricity (per the EPA) with one that uses about 25 percent of that amount. Updating the Chicago energy code, establishing new guidelines for building renovations, building more cooling parks, lining streets with trees, and creating green roofs would save another estimated 1.61 MMTCO2e each year. Other steps listed would yield an additional 0.84 MMTCO2e.
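The payback and appliance arithmetic, as a quick sketch using only the numbers quoted above:

```python
# Simple-payback arithmetic for the F&F Foods retrofit cited above.
project_cost = 780_000          # dollars
annual_savings = 280_000        # dollars per year
print(f"Payback: {project_cost / annual_savings:.1f} years")  # ~2.8

# The refrigerator swap: a 1992 unit rated at 1600 kWh/yr replaced by one
# using about 25 percent as much electricity.
old_kwh = 1600
new_kwh = 0.25 * old_kwh
print(f"Electricity saved: {old_kwh - new_kwh:.0f} kWh per year")  # 1200 kWh
```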
Substantial savings are available by improving the electrical power supply for Chicago. By building renewable energy sources for the city, the task force estimates that emissions from that sector can be reduced by 20 percent, or 3.0 MMTCO2e per annum. Upgrading power plants (specifically, twenty-one located in Illinois) and making other improvements in the efficiency of existing power plants would yield an estimated 3.54 MMTCO2e.
Although utilities claim that they encourage improvements in the energy supply system, some promote policies that discourage or prevent homeowners from using net metering. Net metering allows a home’s electric meter to run backward whenever rooftop systems generate more electricity than the household is consuming at that moment. For example, an Arizona utility, located in the state with the largest potential for capturing solar energy, proposed fees that would make rooftop photovoltaic electricity prohibitively expensive for individuals. Similar threats are arising elsewhere.
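A minimal sketch of how net metering settles a monthly bill, assuming the simplest possible tariff in which exported energy offsets imported energy one for one; the rate is a hypothetical figure, and real tariffs (and the fees utilities propose) vary widely:

```python
def monthly_bill(consumed_kwh: float, generated_kwh: float,
                 rate_per_kwh: float = 0.13) -> float:
    """Net-metered bill under a one-for-one offset tariff (hypothetical rate)."""
    net_kwh = consumed_kwh - generated_kwh   # negative when the meter "runs backward"
    return net_kwh * rate_per_kwh            # a negative result is a credit

print(monthly_bill(900, 650))    # pays only for the net 250 kWh drawn from the grid
print(monthly_bill(900, 1000))   # net exporter: a credit, not a charge
```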
Chicago is currently one of the hubs for rail transportation. It is also the site of congestion that paralyzes the rail industry. A 2012 article in the New York Times reported that a train traveling from Los Angeles to Chicago can make the journey in forty-eight hours, and then take an additional thirty hours to pass through Chicago. The reporter followed a train full of sulfur. It took twenty-seven hours to pass through the city at an average speed of 1.13 miles per hour. This pace is slower, or so the reporter wrote, than that of an electric wheelchair!9 Improving freight movement was estimated to reduce annual emissions by 1.61 MMTCO2e. Investing in more public transit would save 0.83 MMTCO2e. Other transportation-related improvements were identified, including transit-oriented development, making walking and bicycling easier (a health-promoting goal in and of itself), carpooling and car sharing, replacing fuel-inefficient cars and trucks with more fuel-efficient models (already part of the corporate average fuel economy standards agreed to by the motor vehicle industry and the EPA), using cleaner fuels, and other strategies. These strategies would raise the total annual emissions savings from transportation to over 2.5 MMTCO2e. Mitigation strategies involving reduce, reuse, recycle promotions and switching to refrigerants that are not greenhouse gases would add another 2.1 MMTCO2e to the total.
Finally, Chicago will update the city’s heat response plans to focus on assisting vulnerable populations. The plan’s creators hope to make the city’s urban areas greener, to preserve trees and plants, and to pursue innovative approaches to cooling. Improvements in power plant design and operation will reduce the concentration of ground-level ozone, protect air quality, and promote better health. Again, children, the elderly, and those with chronic diseases will benefit the most.
These ambitious goals need support at all levels. Ordinary citizens, city officials, and stakeholders must join forces to combat the opposition that is certain to arise and to be well financed. Preventing climate change will save energy. By the end of the century, a low emissions scenario could limit the number of days in Chicago for which temperatures exceed 100°F to six, compared to the thirty-one days predicted by a business-as-usual scenario. This reduction alone will save enormous amounts of electrical energy, resulting in fewer emissions of greenhouse gases and better health.
This example of a city working to reduce carbon dioxide emissions, save energy, and improve health represents one strategy for moving forward toward a more sustainable energy future and a future that holds the promise of better health for its citizens. There are other paths forward, but they require political leadership, stakeholder involvement, and other elements (as outlined in chapter 1).
A variety of tools are available to combat the threats posed by infectious diseases as the climate warms. They range from the simple to the complex—from hand-painted signs to satellites in orbit. All have a role to play.
Multiple effective strategies are available to prevent mosquito-borne illnesses. It is important to prevent bites by infected mosquitoes. When possible, people at risk should remain indoors during dawn and dusk hours when mosquitoes are on the prowl for their next meal. Wear protective clothing. Use insect repellants containing N,N-diethyl-meta-toluamide. (No wonder this chemical is commonly called DEET!) When appropriate, sleep under insecticide-impregnated nets to keep insects from biting while sleeping. Make certain that there are no open containers around that will store or trap water, providing potential breeding sites for mosquitoes. Although effective, these measures should be supplemented by other, more aggressive and targeted measures designed to keep mosquitoes from reproducing.
Combating malaria was chosen as one of the Millennium Development Goals because the magnitude of the problem is high and the effect on children is disproportionate. In spite of the progress described ahead, malaria must remain a high-priority target in the task of adapting to climate change.
The most recent World Malaria Report, from 2014, makes it clear that the official estimates of the number of malaria cases presented in the report are inadequate.10 The WHO numbers include only individuals with what the organization calls “patent” infections—that is, those with evidence of an infection found on light microscopy of a stained blood smear or by a laboratory diagnostic test. Even with that limitation, the number of infections is staggering. Eighteen sub-Saharan nations account for an estimated 90 percent of the infections in that vulnerable part of the world. Nigerians harbor an estimated thirty-seven million infections and the Democratic Republic of the Congo another fourteen million. The World Malaria Report indicates that the number of Africans who have low-intensity infections is considerably higher. In fact, one might argue that virtually all Nigerians, particularly those who live in rural areas, are infected at some time during their lives.
The 2014 World Malaria Report contains good news and bad news—and some of that news is or ought to be embarrassing.11 The good news is that since 2000 there has been a sharp reduction in the malaria prevalence among children between two and ten years of age. In 2000, 26 percent of children were infected, a number that fell to 14 percent in 2013. Correspondingly, the number of malaria infections dropped from 173 million to 128 million during that same period. Finally, malaria mortality rates fell by 47 percent worldwide and even more, by 54 percent, in the WHO-designated Africa region. Given the circumstances in effect at the time of the report, all indications are that there will continue to be progress in the control of this disease, one of the primary goals set forth in the Millennium Development Goals. The authors of the report project a 55 percent reduction worldwide and a 62 percent reduction in the WHO Africa region if the preceding thirteen-year rates hold through 2015. Additional progress toward the reduction of mortality among children less than five years old is expected, with a 61 percent reduction globally and a 67 percent reduction in the WHO Africa region.
The bad news in the report is that much more could be done. Only $2.7 billion of the $5.1 billion, or just under 53 percent of the funds that would be likely to achieve worldwide control of malaria, were made available from international and domestic sources. Funds are needed to finance steps that are proven to work:
Supplying enough insecticide-treated bed nets (ITNs) for populations at risk, particularly children
Providing additional protection for pregnant women by treating them with what is referred to as intermittent preventive treatment in pregnancy (IPTp)
Increasing the availability of definitive testing for malaria and distributing so-called artemisinin-based combination therapy (ACT)
ITNs are a mainstay in the prevention of malaria. These nets operate by several mechanisms. First, they kill and repel mosquitoes, thereby protecting people from bites of the disease-carrying females while they sleep. Second, there is an aspect of “herd immunity” that occurs when enough people in a house or community are protected by nets. The repellant and insecticidal properties provide an element of protection to those not under an appropriate net. The CDC’s malaria website is an excellent ITN reference.12
The first nets developed were treated with pyrethroids, members of a class of insecticides thought to have relatively few adverse effects on the humans they are designed to protect. This is especially true when compared to other insecticides, particularly the organophosphates, which are close relatives of nerve gases and have many toxic effects. Nets treated with pyrethroids do have a distinct disadvantage: they do not last very long, and exposure to sunlight ruins their ability to kill and repel mosquitoes. Therefore, to remain effective, they must be periodically immersed in a water–pyrethroid solution and allowed to dry in the shade. The typical useful lifetime of these nets is about a year. Nets that last longer are clearly desirable. As of February 2014, the WHO had given full or interim approval to eleven of these more desirable long-lasting insecticide-treated nets, or LLINs.13 These nets typically remain effective for around three years. The CDC website suggests that it would be possible to realize savings of $3.5 billion over ten years by extending the useful lifetime of LLINs from three to five years.
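The order of magnitude of that $3.5 billion figure can be reproduced with a simple replacement-cost sketch; the fleet size and unit cost below are illustrative assumptions chosen to match the cited total, not CDC numbers:

```python
# Hypothetical replacement-cost arithmetic for longer-lived nets.
unit_cost = 3.50            # assumed dollars per delivered net (hypothetical)
nets_in_service = 750e6     # assumed nets that must be kept in service (hypothetical)

def cost_per_decade(lifetime_years: float) -> float:
    replacements = 10 / lifetime_years      # replacement cycles in ten years
    return nets_in_service * unit_cost * replacements

savings = cost_per_decade(3) - cost_per_decade(5)
print(f"Ten-year savings: ${savings / 1e9:.1f} billion")  # ~3.5
```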
The immunity that appears to develop in adults who are routinely and repeatedly infected by the malaria parasite may wane during pregnancy. This partial immunological failure has been attributed to changes in the pregnant woman’s immune system and to the presence of a new organ, the placenta, which is a potential binding site for the malaria protozoans. The CDC and WHO websites both recommend that a curative dose of sulfadoxine/pyrimethamine (SP) be given to all pregnant women as part of IPTp, whether or not an infection is present. Folic acid is also given to pregnant women to prevent neural tube defects, such as spina bifida. However, in high doses folic acid inhibits the action of SP. Therefore, the CDC recommends a folic acid dose of 0.4 mg or less. In places where a 5 mg dose is routine, SP treatment should be suspended for two weeks after the vitamin is administered; unfortunately, this makes compliance less likely. SP therapy has a number of benefits associated with the prevention of a malaria infection, including reducing the risk of premature delivery, preventing intrauterine growth retardation, reducing the probability of delivering low birth-weight babies (less than 5.5 pounds, or 2.5 kg), preventing fetal loss, and reducing the risk of anemia in the mother. IPTp should be given in addition to the routine use of ITNs (or LLINs). Women should be monitored during pregnancy, and effective treatment should be administered if malaria is diagnosed before delivery.
Although fever is a common symptom among individuals with malaria, there are many other infections that are heralded by an increase in temperature. Most tragically, this was seen during the Ebola epidemic that peaked between 2014 and 2015 in West Africa. Ebola overwhelmed many clinics that were then unable to treat other patients. Also, fear of Ebola kept many febrile individuals from receiving proper diagnostic testing. As a result, many patients who had malaria or other treatable diseases went undiagnosed and untreated. That specific issue aside, the universal availability of rapid testing either by microscopic examination of the blood or a rapid serological test for malaria coupled with instant access to treatment would reduce malaria-associated morbidity and mortality substantially. Treatment for uncomplicated malaria due to P. falciparum is thought to reduce the mortality in children between the ages of one and twenty-three months by as much as 99 percent. The results are almost as good in older children less than five years of age.
The availability of rapid diagnostic tests (RDT) for malaria has increased dramatically in the past decade. The World Malaria Report puts the number of tests distributed by malaria control programs in 2013 at 160 million, up from two hundred thousand in 2005.14 In parallel with this increase, the reliability of the tests has also increased. Manufacturers of RDTs reported selling almost 320 million tests in 2013. Of these, around 60 percent were specific for P. falciparum. The remainder were so-called combination tests that are able to detect more than one species of the malaria parasite.15 The goal of producing an effective vaccine for malaria remains elusive, although progress has been made. Producing an effective vaccine against protozoal diseases is difficult for many reasons. The life cycle of the parasite is complex, as discussed in chapter 4. Each stage of the infection poses different challenges for those who are working to develop an effective vaccine.
In the fall of 2011, the first results were published from a large clinical trial in which a malaria vaccine was compared to a nonmalarial vaccine.16 This report described the results from the first six thousand children entered into the study. The group consisted of children who were between the ages of five and seventeen months at the time that the first of three doses was administered and who completed all three of the scheduled immunizations. Researchers also studied another group of younger children between the ages of six and fifteen weeks. In the older group, the incidence of clinically proven malaria was 0.32 episodes per person per year in the vaccinated group compared to 0.55 episodes per person per year in the control group. Efficacy in the combined age group was around 35 percent. Serious adverse events were comparable in all groups, including those treated with the placebo. The investigators noted that during the course of the study they transported participants to study sites so they would not miss an injection. This level of support is much less likely to occur in a real-life situation. Although the results are promising, there is still considerable room for improvement. If the vaccine had been spectacularly successful, it is probable that the safety monitoring committee would have stopped the study, as continuing to offer placebo vaccination would have been deemed unethical. This did not happen. It is likely that the committee decided that the most ethical choice was to continue the study as designed.
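Vaccine efficacy in trials like this one is conventionally computed as one minus the ratio of incidence in the vaccinated group to incidence in the controls. Applying that formula to the older age group’s numbers quoted above gives a figure somewhat higher than the combined-age 35 percent:

```python
# VE = 1 - (incidence in vaccinated) / (incidence in controls)
incidence_vaccinated = 0.32   # clinical episodes per person per year (older group)
incidence_control = 0.55
efficacy = 1 - incidence_vaccinated / incidence_control
print(f"Efficacy, older group: {efficacy:.0%}")  # ~42%
```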
As time passes and as public health officials develop effective partnerships with climatologists who have access to satellite-based data, it is reasonable to expect that the resources needed to prevent malaria can be deployed more effectively. This is illustrated, as described in chapter 4, by the use of climate predictions based on El Niño and its atmospheric pressure correlate, the Southern Oscillation, to predict periods of higher malaria risk in Botswana.17
More robust public health infrastructures would make it much easier to meet the Millennium Development Goals for malaria in developing countries. This is particularly true for Africa. The barriers to achieving that objective are daunting—and poverty may be the biggest. As discussed previously, malaria and poverty are locked in a destructive positive feedback cycle; each makes the other worse. Weak, corrupt central governments make bad situations worse still, and they also make it possible for terrorist organizations such as Boko Haram to proliferate, gain strength, and disrupt existing public health infrastructures.
The quote by Margaret Chan, the director general of the WHO, that introduces chapter 4 points out the precarious nature of the progress made toward the control of malaria. In the 2013 World Malaria Report, she wrote that “the great progress that has been achieved could be undone in some places in a single transmission season.”18
In February 2015, my wife and I visited Siem Reap, Cambodia. This city is well-known for its proximity to Angkor Wat and other spectacular temples from the days of the Khmer Empire. Visiting these temples was the main purpose of our trip. Our tour company advised us that “medical facilities and services in Cambodia do not meet international standards.” We knew that dengue is endemic in Cambodia. This was reinforced by one of our guides, who told us that one of his children had the disease. We carried lots of 100 percent DEET mosquito repellant. Although we were forewarned, we were surprised to see a street sign asking for blood donations in front of a pediatric hospital. Dr. Beat Richner, the founder and primary supporter of the hospital, named it after Jayavarman VII, who reigned between 1181 and 1218 CE. He was perhaps the greatest king of the Khmer era, and he built more than one hundred hospitals. Dr. Richner is an accomplished cellist who performs frequently in Cambodia and his native Switzerland to raise money for the hospital. A photograph of the sign pleading for blood donations to treat children with dengue hemorrhagic fever is shown in figure 10.2. It is in English and clearly aimed at the huge number of tourists who flock to the temples (for good reason).
Figure 10.2 Plea for blood donations to treat children with dengue hemorrhagic fever. This sign is in front of the Jayavarman VII Hospital in Siem Reap, Cambodia. It is in English and clearly aimed at the increasing number of tourists who visit Angkor Wat and other Khmer temples that surround the city.
Developing, testing, and implementing measures designed to adapt to the realities posed by this complicated disease will be difficult. One strategy for controlling dengue depends on the elimination of breeding sites for Aedes aegypti, the mosquito vector for the disease. How rigorous a breeding-site elimination program must be depends on a number of variables, including the fraction of the population already exposed to the virus (the seroprevalence, which ranges from 0 to 100 percent), the air temperature in the region under consideration, and the number of mosquito pupae per person. The authors of a study from 2000 determined that if between 0 and 67 percent of the population had antibodies to the dengue virus in their blood, the threshold level for infection ranged between 0.5 and 1.5 Ae. aegypti pupae per person. The results of the investigations were discouraging. In places like Mayaguez, Puerto Rico, the researchers concluded that seven out of every seventeen standing water containers must be eliminated for the prophylaxis program to succeed. In other places, such as Trinidad, twenty-four of twenty-five breeding sites would need to be eliminated.19
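The logic behind those elimination fractions can be sketched simply: if containers contribute pupae roughly equally, the share that must be removed follows from the current pupae-per-person load and the transmission threshold. The loads below are invented for illustration; only the 0.5 to 1.5 threshold range comes from the study cited above:

```python
def fraction_to_eliminate(pupae_per_person: float, threshold: float) -> float:
    """Share of breeding sites to remove to bring the pupal load below the
    transmission threshold, assuming equal contributions per container."""
    return max(0.0, 1 - threshold / pupae_per_person)

# Hypothetical pupal loads, checked against a 0.5 pupae-per-person threshold.
print(f"{fraction_to_eliminate(1.2, 0.5):.0%}")   # ~58% of sites must go
print(f"{fraction_to_eliminate(12.0, 0.5):.0%}")  # ~96%: nearly every site
```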
In order to determine how best to focus Ae. aegypti eradication efforts, a group of investigators based in Trinidad inspected over 1,500 homes containing over twenty-four thousand containers.20 Of these homes, 223 harbored larvae or pupae of the offending mosquito. The investigators classified forty-one of these as key premises, or key locations, because they persistently tested positive for this dengue vector. An analysis of which container types were most likely to harbor the vector was revealing: water drums topped the list, with 53.5 percent containing larvae or pupae, followed by buckets (22.2 percent), tubs and basins (8 percent), water tanks (5.4 percent), and, last, tires (2 percent). The authors argued that better control of the vector would be achieved if mosquito eradication efforts focused on treating or emptying the high-risk containers in the key premises. To maximize the effectiveness of interventions, eradication efforts should be concentrated in the months with the highest rainfall, when dengue cases are most likely to occur.21
About two decades ago, another study addressed issues that arose as the result of a loss of effectiveness of what was termed ultra-low-volume insecticide spraying.22 This spraying was used to control the population of adult mosquitoes. The study authors concluded that a combination of community-based and centralized approaches, vertically structured so that a central authority controlled most elements of the eradication program, offered the best chance of success. This finding speaks to the overarching importance of developing and sustaining stable governments and a vigorous and effective public health infrastructure.
Because dengue is a viral disease, there is hope that a vaccine will be developed to prevent infections, reduce morbidity, and save lives. The results of a trial among children were published in early 2015.23 Because dengue was endemic in the region where the vaccine was being tested, a subset of the children were tested to determine how many had evidence of a prior infection by any of the four subtypes of the virus (see chapter 4 for additional information about the virus). It was no great surprise that almost 80 percent of the children had evidence of a prior infection. This made it possible, or even likely, that children with evidence for a prior infection might be reinfected with one of the other subtypes of the virus.
Because many children had evidence of a prior dengue infection, it was necessary to enroll a large number of participants in the trial to obtain a statistically valid result. This study included over twenty thousand children between the ages of nine and sixteen who lived in one of five Latin American countries. Because the trial was so important, it merited publication in The New England Journal of Medicine, perhaps the leading US medical journal. The trial was designed to determine whether the vaccine worked; that is, it was an efficacy trial, designed to determine whether the vaccine would protect children from dengue. Because it was not known whether the vaccine actually worked, it was ethical to administer a placebo vaccine to some participants. Two-thirds of the children were randomly assigned to receive the vaccine, and the other third received the placebo, an approach that anticipated benefit but also ensured that the trial would be ethical and provide a definitive answer. The test substance (vaccine or placebo) was given at months zero, six, and twelve, and the children were followed for twenty-five months. To avoid losing participants to follow-up, the study team contacted them weekly to see whether they were well. Since there are many reasons why the children in the study might get sick (as any parent knows), children were seen by a member of the study team within five days of developing a fever. Blood tests were performed at that time and again one or two weeks later to determine whether the child had dengue or some other illness.
In the final analysis, it was shown clearly that the vaccine worked, but it worked better against some types of the virus than others. Overall, the efficacy was just barely under 65 percent for children who received at least one of the three planned doses of the vaccine. Efficacy was about 50 percent for serotype 1, 42 percent for serotype 2, 74 percent for serotype 3, and just under 78 percent for serotype 4. Those who contracted dengue in spite of their vaccination may have had partial protection, as the efficacy against hospitalization was about 80 percent. The vaccine was 99.5 percent effective for preventing dengue hemorrhagic fever, the most severe form of the disease. The prevalence of adverse events was no higher among the vaccinated children than among those who received the placebo—additional good news. Because the children were followed for just twenty-five months after they were vaccinated, the study could not determine how long the vaccine will continue to provide protection. This study confirmed an earlier Asian investigation that was slightly smaller and funded by the same pharmaceutical company.24
Taken together, these two studies illustrate the difficulties encountered when attempting to determine whether a vaccine works and is safe. Neither study specified the cost, but the costs were almost certain to have been very high. However, the benefits almost surely will be enormous. Remember, with the business-as-usual climate scenario, about half of the world’s population and a great deal of the US population will be at risk by the end of the century (for additional details, see chapter 4). An effective vaccine will be most welcome.
More than 70 percent of all agricultural systems depend on the amount of rain that falls on croplands.25 This fact makes food production and food security highly vulnerable to climate change—perhaps more vulnerable than other segments of the economy. If there is too little rain, crops fail because of drought. If there is too much rain, crops will also fail because they will drown or they cannot be planted because fields are inaccessible. Chapter 5 reviewed some of the other factors that contribute to the sensitivity of agriculture to climate change. Temperature, the atmospheric concentrations of CO2 and ozone, the impact of plant diseases, and the proliferation of weeds are all related to the climate and will affect agricultural productivity.
Although climate change is likely to benefit agricultural productivity in some regions, it may be devastating in others. For example, growers in the high latitudes may benefit from longer growing seasons and the possibility of planting and harvesting two crops instead of one. However, in the low latitudes nearer the equator, climate change may lead to severe crop failures or the inability to grow much at all. Unfortunately, it is these low-latitude regions, such as sub-Saharan Africa and India, that already suffer from food insecurity. Substantial numbers of children in these areas are undernourished, and they will suffer the most.
The yields of some crops will increase in a warmer world. IPCC estimates suggest that about 10 percent of yield projections show gains of more than 10 percent, while roughly another 10 percent show losses of more than 25 percent. Overall, losses will be greater than gains. The IPCC authors predict that in the absence of adaptation a temperature increase of 2°C will reduce the yields of major food crops, such as rice, wheat, and corn. After the middle of the century, things are likely to become even worse.
Although it is likely that there is a limit to successful adaptations to climate change, it is essential to apply measures that are known to work as widely as possible. These measures include
changes in planting practices;
alterations in harvesting;
adapting fertilizer and water usage practices—for example, harvesting rainwater for use in agriculture;
increasing crop diversity—because monoculture agriculture risks massive failure if the planted crop fails to resist stresses imposed by changes in rainfall, pest proliferation (including invasive weeds), and so on; and
improving conservation methods to better protect existing resources.
Agricultural practices vary enormously within the United States and even more across the globe. Thus, it is no surprise that the strategies needed to cope with the changes that are already occurring must also vary. Different crops, growing conditions, and forecasts for the future will all require different adaptations if worldwide agriculture is to keep pace with a growing population. Many of the details associated with the methods needed for agriculture to adapt successfully to climate change are beyond the scope of this book. However, reviewing several examples of successful adaptation in sub-Saharan Africa is warranted. The countries of sub-Saharan Africa are largely poor and frequently they are poorly governed. In addition, they are too often very reliant on agriculture to sustain local populations, vulnerable to climate change, and, because of these limitations, badly in need of effective adaptation strategies to forestall mass starvation. Fortunately, many of the most highly effective adaptation strategies are being practiced in this region of Africa.
Evergreen agriculture is defined as an agricultural system in which trees (typically perennials) are integrated into the production of food crops (typically annuals).26 This agricultural practice has had particular success in parts of Africa where undernutrition, drought, and population growth coincide. Faidherbia albida is the tree species at the center of this movement. This nitrogen-fixing acacia is indigenous to Africa. Its main root (taproot) penetrates deeply into the ground, making it quite resistant to drought. Other aspects of its annual cycle make it well suited to evergreen agricultural practices. Its dormant period coincides with the beginning of the rainy season. During dormancy, these trees lose their leaves and require little water. The fallen leaves provide organic matter that enriches the soil even as the bare trees allow sunlight to penetrate to the sprouting crops planted beneath them, enhancing growth. The leaves reappear at the end of the wet season and shade the crops at this critical stage of growth. Combine these features with the acacia’s nitrogen-fixing ability and you have a formula for success.
Other features of evergreen agriculture make it a highly desirable strategy, particularly in parts of Africa where per capita income is so low that farmers cannot afford to purchase fertilizer:27
It maintains a cover of vegetation all year.
Nitrogen fixation stimulates crop growth without inorganic fertilizer.
Pests and weeds are suppressed.
Water infiltration into the soil is enhanced.
Soil structure is improved.
The growth of the trees above and below ground captures carbon dioxide from the atmosphere.
Biodiversity is enhanced.
Best of all, the production of food for the farmer and his family is improved, and there is more fodder for animals and more fuel for cooking. Frequently, evergreen agriculture–based farms produce enough food to sell locally or for export, thereby improving the economic status of these low-income farm families. These evergreen practices are worthwhile now, and they are likely to prove even more useful in the future in protecting against climate change–induced problems.
Several regions of Africa already benefit from this practice. There has been a remarkable turnaround of the ecosystems, and hence of the fortunes of the people, in the Maradi and Zinder regions of Niger.28 Niger lies in the middle of the Sahel region of sub-Saharan Africa, with Libya and Algeria to the north, Nigeria to the south, and Chad and Mali to the east and west, respectively. Beginning in the 1960s, this region of Africa was thought to be in what might have become a state of irreversible decline. The desert was advancing, crops were failing, and firewood was scarce and becoming scarcer. Severe drought gripped the nation in the early 1970s. Livestock herds shrank dramatically, mortality rates climbed, and famine became widespread. The situation began to improve toward the end of that decade, when the knowledge of the indigenous people and of nongovernmental organizations supplanted immediate postcolonial practices. Key among these was the rediscovery of the benefits of something as simple as proper pruning of what were thought to be shrubs. Pruning allowed these presumed shrubs to grow into trees that provided shade to crops, which then began to flourish. Altogether, over five million hectares (more than 12 million acres; one hectare = 10,000 m2, or 2.471 acres) were restored to productivity, in part by planting two hundred million trees, improving the livelihood of around 4.5 million people. Row crops such as millet, sorghum, peanuts, and cassava were planted between the trees and began to provide food and cash. Previously starving people began to export food to their southern neighbors. As an additional benefit, the time spent foraging for firewood fell from three hours to thirty minutes per day. This change empowered the women who had been tasked with collecting wood for cooking fires, enabling them to assume greater responsibilities in their homes and society.
Zambia offers a successful variation on this theme. Maize is the country’s primary crop.29 Before the adoption of evergreen agricultural practices, yields were low, just over a ton per hectare (t/ha). Around 70 percent of farmers could not afford fertilizer, and about the same share failed to produce enough maize to sell at market. Between 2002 and 2008, a third of the maize area under cultivation was abandoned before harvest because of drought, falling soil fertility, and other factors. Enter evergreen agriculture. Planting Faidherbia albida was combined with other improvements in agricultural practices fostered by the Zambian Conservation Farming Unit. Changes included the introduction of minimum-tillage methods, cessation of burning crop residues from prior harvests, crop rotation, planting crops in precise locations to optimize fertilizer applications, and others. In 2008, at one site where Faidherbia albida trees had been planted, the maize yield was 4.1 t/ha under the tree canopy, compared to 1.3 t/ha away from the canopy, a yield more than three times higher. In other regions, increases in yield of 280 percent were recorded.
Excellent results have also been reported in Malawi, where the economy is highly dependent on agriculture. In the pre-evergreen-agriculture era, more than half of farm households fell below subsistence levels, and large numbers of families required food aid. Governmental subsidies to provide fertilizer and the spread of agroforestry practices now have led to improvements. In one area of the country, maize yields under the Faidherbia albida trees were 100 to 400 percent higher than yields of maize not grown under the canopy of the trees. Malawi has a relatively long history of intermixing nitrogen-fixing trees with food crops, and trees other than Faidherbia albida also have been employed with success.
Now is the time to implement these strategies, before climate change further weakens societies that are already fragile and increases human suffering. Such steps will not solve all of the problems associated with climate change, but they are examples of measures that can be enacted to adapt to and minimize its consequences.
As with agriculture, successful adaptation to rising sea level and the concomitant threat posed by storms and storm surges is highly dependent on local conditions. Successful coastline adaptation depends on local, state, and federal governance issues. Because of these complexities, detailed accounts are far beyond the scope of what can be covered in this book. Approaches to this issue that are particularly oriented around developing nations can be found in several sources, such as the United Nations Development Program’s document “Adaptation Policy Frameworks for Climate Change: Developing Strategies, Policies, and Measures,” and the United Nations Framework Convention on Climate Change’s National Adaptation Programs of Action. However, this section offers a look at adaptation practices in two regions in the developed world that face similar challenges but have vastly different responses: South Florida and the Netherlands.
In chapter 1, I presented a general strategy for adaptation, in which the importance of political leadership, stakeholder involvement, the availability of funding, and other factors were discussed. In chapter 6, meanwhile, I presented three basic adaptation strategies: abandonment, in which adaptation measures either will not work or are deemed too expensive to implement; nourishment, in which natural barriers are augmented; and armoring, in which barriers to rising sea level and surges, such as sea walls, are constructed.
Dealing with the sea has been a serious issue in the Netherlands since the ninth century, when it is believed that the first dikes were constructed. According to a 2011 report, nine million Dutch people live in areas that are below sea level, and 70 percent of the country’s GDP is generated in these areas.30 In addition to protecting the population from the sea, other important aspects of the coastal zone of the Netherlands are related to industry, recreation, residences, supplies of drinking water, and other factors. Protecting these resources is a primary function of the Dutch government.
By modern standards, the early dikes were primitive: earthen structures reinforced with logs and seaweed. Sediments tended to accumulate on the seaward side, making it possible to construct another dike in a series of seaward-marching structures. Skip ahead to the twentieth century: in 1918 the Zuiderzee Act was passed, launching plans to construct a huge dike sealing off the Zuiderzee, a large inlet, from the North Sea, coupled with strategies to drain the newly reclaimed area. This project is known as the Zuiderzee Works. The dam is called the Afsluitdijk, and the sealed-off portion is the IJsselmeer. I saw this dike myself in 1963, when a friend and I rode bicycles from one end to the other on a trip through Europe. It is an impressive structure.
The Dutch did not stop there. To ensure the safety of the vulnerable shoreline, three more laws were passed: the Delta Act, the Flood Defense Act, and the Water Act.31 Since 1990, the Dutch have relied heavily on nourishing the coastal zone by building up sandy areas, including dunes. Specific design criteria have been developed and implemented for the shape and size of these sandy shoreline defenses. Some of the sand used for this purpose is dredged from the North Sea bed.
The Delta Works is a system of storm surge barriers and dams constructed in response to the severe floods of 1953. The largest of these is the Eastern Scheldt storm surge barrier (in Dutch, the Oosterscheldekering). This barrier is nine kilometers long and has doors that can be opened and closed. It maintains the mix of sea and fresh water needed to support existing ecosystems while also providing protection from storms and storm surges. The construction of the Maeslant Barrier, shown in figure 10.3, was the final step in the Delta Works project. This movable barrier protects the harbor at Rotterdam. It consists of two semicircular gates that are hinged and rotate to open or close the barrier. It cost €450 million to build and is said to be one of the largest movable structures on Earth. The barrier is controlled by computers linked to storm forecasts. When a storm surge greater than three meters is likely, a sequence of events is initiated that culminates in the closure of the barrier.
Figure 10.3 Maeslant Barrier during a test closure. This storm surge barrier was built to protect the Rotterdam Harbor. It was completed in 1997. When monitoring systems predict a surge of more than three meters, computerized systems flood the dry docks that house each gate. The floating gates are moved to close the 360-meter-wide Rhine River waterway. Once in place, the gates are flooded and sink to prevent the surge from flooding the city and the port. Source: Koninklijk Nederlands Meteorologisch Instituut, http://bit.ly/19JEIiJ, accessed April 1, 2015. Not copyrighted; original in color.
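The decision rule described above and in the caption can be summarized schematically; the forecast values and the wording of the closure sequence are illustrative, not drawn from the barrier’s actual control software:

```python
SURGE_THRESHOLD_M = 3.0  # closure trigger described in the text

def barrier_action(forecast_surge_m: float) -> str:
    """Schematic decision rule for a forecast-driven storm surge barrier."""
    if forecast_surge_m > SURGE_THRESHOLD_M:
        return "close: flood dry docks, float gates into the waterway, sink them"
    return "remain open"

for surge in (1.2, 2.9, 3.4):  # hypothetical surge forecasts in meters
    print(f"forecast {surge} m -> {barrier_action(surge)}")
```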
Sea level is a moving target, and the Dutch move along with it. They continue to seek input from stakeholders and experts in the field, as evidenced by surveys conducted to assist in the ranking of adaptation options. The qualitative portion of a 2009 assessment focused on prioritization and ranking of various options. Results included a need to integrate water and nature management, integrate coastal zone management, and provide more space for water in rivers and regional water systems. The quantitative evaluation identified costs and benefits of the various options for adaptation.32
The Dutch take evaluating the risks posed by rising sea level seriously and have spent and are prepared to spend additional large sums to provide protection for the citizens of their nation. They are determined to prevent potentially catastrophic social and economic upheavals caused by climate change.
This is in stark contrast to South Florida and the Florida Keys. According to an article published in the Miami Herald in March 2015, state officials have worked actively to discourage any reference to climate change.33 This effort is apparently led by Governor Rick Scott, who took office in 2011. The article provides detailed accounts of directives, all transmitted verbally with no written policy, that discouraged multiple state agencies from referring to climate change and related topics.
The geology of South Florida places limits on adapting to rising sea level. The coral limestone found throughout the region is porous; that is, water moves through it quickly. This is an advantage during the heavy rains that are common in the region: rainfall of up to a foot over a few hours soaks into the ground and disappears within minutes—a good thing. However, sea walls or other physical barriers built as armor against rising sea level will fail because the ocean’s water seeps under the barrier through pores in the rock. Infiltration of sea water into fresh water supplies is already a problem.
In many areas, such as South Miami Beach and Miami Beach, large amounts of sand are dredged up from the ocean floor to nourish the beaches that draw the tourists. For many areas in the Florida Keys, placing buildings on stilts is the only practical solution.
Local governments serving South Florida and the Florida Keys are not constrained by gag orders, but they lag far behind the Dutch in planning for rising sea level and storm surges. The Florida Keys are a curved chain of around 1,700 sandy islands perched on coral reefs and rocks, extending around 220 miles from just south of Miami to Key West. Their average elevation is 1.5 meters above sea level, in a part of the world that is highly exposed to hurricanes. In spite of this vulnerability, the islands support a hugely popular, multibillion-dollar economy built largely on tourism. In 2011, two professors from Florida International University in Miami, along with a Massachusetts colleague, published the results of a survey of stakeholders who serve the Florida Keys.34 Their anonymous survey included federal, state, and local officials, along with representatives of nongovernmental organizations such as the Audubon Society. A total of 845 potential participants were contacted; 26.6 percent completed the questionnaire, a low response rate that limits the conclusions that can be drawn. Of the respondents, 9.6 percent were federal officials and 17.6 percent were state officials. The authors of the 2011 report found a "deep concern among federal, state, and local decision makers and experts." The most discouraging result of the survey was that although 85.7 percent of the respondents supported preparing now for the events most likely to occur as a result of climate change, only 5 percent indicated that their agency or organization had a climate change adaptation or action plan.
A three-county coalition in Southeast Florida (Miami-Dade, Broward, and Monroe Counties, the last home to the Florida Keys) reported on its joint efforts to begin to address climate change in an October 2012 report titled "A Region Responds to a Changing Climate."35 Ironically, the report's cover art features a woman in a chic dress riding a motor scooter along a flooded Miami-Dade County street, as though she were on vacation in Rome rather than facing an environmental disaster. The goal of the coalition was to "unite, organize and assess [its] region through the lens of climate change in setting the stage for action." The Compact, as members refer to the group, calls for "urging Congress to pass legislation that recognizes the unique vulnerabilities of Southeast Florida to climate change impacts, especially sea level rise," along with other objectives that depend largely on federal action. Of its thirteen public policy objectives, three begin with "Urge Congress," another with "Encourage federal support for research," and four with either "Advocate" or "Support and advocate."
The contrasts are stark. The people of the Netherlands have been building defenses against the sea for over one thousand years; the people of South Florida are giving serious thought to the process, or at least some are, but so far they have done little. Whether Floridians meet the challenge is yet to be determined.
The terms clean coal and war on coal are often recast in partisan language, such as Obama's war on coal or Obama's war on jobs, phrases that emphasize the political split in the United States rather than any sincere effort to deal with coal-derived pollution.
Most of the time, clean coal refers to a series of processes known collectively as carbon capture and storage (CCS). However, some use the term to describe other aspects of coal use. Some cleaning typically occurs at the mine itself, where the coal is washed. Coal emerges from the mine contaminated with dirt, rock, sulfur, and other noncoal debris. Washing exploits differences in the physical properties of coal and this waste to divert the waste to a repository, making the final product more economical to ship. The downside of washing is that it creates yet another waste stream that must be dealt with. The residents of Buffalo Creek Hollow, West Virginia, learned this the hard way when a series of dams built to impound coal-washing slurry failed after heavy rains.36 The resulting flood killed 125 people, injured 1,100, and left four thousand homeless. It was one of the worst flood disasters in US history.
There is substantial corporate interest in so-called clean coal. One company, Clean Coal Technologies, headquartered in New York City, has a website that features photographs of children, their mothers, and dogs. The company's technology is aimed at overseas coal markets: it has developed a process that removes water and hydrocarbons from coal, making the coal cleaner, lighter, and cheaper to ship. The company also leads people to believe that its product is less likely to undergo spontaneous combustion, a significant problem for low-grade coals. I saw many coal fires at a huge lignite mine in Kosovo, all the result of spontaneous combustion of poor-quality coal, virtually the only natural resource in that impoverished nation.
Now the term clean coal is used almost exclusively to refer to efforts to prevent carbon dioxide formed by burning coal from entering the atmosphere. With the exception of a few demonstration projects, CCS technology exists only in the realm of the future. We will return to the topic of CCS in the section of this chapter describing climate interventions.
On June 2, 2014, the EPA published what it called a commonsense rule designed to limit carbon dioxide emissions from existing power plants.37 In a position supported by the US Supreme Court, the EPA claimed that its authority to regulate carbon dioxide emissions derives from the Clean Air Act. Those who believe that this gas drives climate change and poses an enormous threat hailed the plan as a long overdue step, whereas climate change deniers lined up in opposition to what they viewed as yet another governmental attempt to control our lives.
The plan is designed to reduce carbon dioxide emissions from existing power plants by 32 percent by the year 2030, measured against a 2005 baseline. Like most EPA rules, it is filled with specific numbers, but in essence the proposed rule would limit emissions from coal-fired plants to no more than those expected from power plants fueled by natural gas.
Each state will be required to evaluate its strengths and weaknesses and design a state-specific approach to fulfilling its obligation. In general, the plan encourages states to design plans that convert coal plants to natural gas; increase the amount of electricity generated from renewable sources, such as wind, water, and solar; and improve the efficiency with which power is used. The EPA notes that as a nation we are well on our way toward meeting the 2030 goal.
In its analysis of the impact of the plan, the EPA claims that compliance will cost between $7.3 billion and $8.8 billion per year but will save between $55 billion and $93 billion annually. The savings come largely from better health, the result of lower emissions of criteria pollutants, and from the prevention of climate change. Mean estimates for emission reductions are around 450,000 tons of sulfur dioxide, just under 420,000 tons of nitrogen dioxide, and 55,000 tons of small particles. These translate into between 2,700 and 6,600 fewer deaths, from 140,000 to 150,000 fewer attacks of asthma in children, between 340 and 3,300 fewer heart attacks, from 2,700 to 2,800 fewer hospital admissions, and between 470,000 and 490,000 fewer missed days at school and work.
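Taken at face value, the EPA's figures imply a strikingly favorable benefit-cost ratio. Here is a minimal back-of-the-envelope check in Python, using only the numbers quoted above; the calculation is illustrative and is not part of the EPA's own analysis:

```python
# Benefit-cost check using the EPA's published ranges (dollars per year).
cost_low, cost_high = 7.3e9, 8.8e9        # annual compliance costs
benefit_low, benefit_high = 55e9, 93e9    # annual savings

# Most conservative pairing: lowest benefit against highest cost.
print(f"worst case: {benefit_low / cost_high:.1f} to 1")   # roughly 6 to 1
# Most optimistic pairing: highest benefit against lowest cost.
print(f"best case: {benefit_high / cost_low:.1f} to 1")    # roughly 13 to 1
```

Even the most pessimistic pairing of these estimates returns several dollars in benefits for each dollar of compliance cost.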
The Clean Power Plan will have two benefits: it will improve the health of individual Americans, and it will begin to slow the pace of climate change. It will not prevent all of the effects of climate change, however; at best, it may move us off the trajectory predicted by the IPCC business-as-usual scenario. If the plan succeeds, and if other nations follow through with effective measures to reduce greenhouse gas emissions, the political leadership needed to combat climate change may emerge, along with support from stakeholders. The 2015 UN Conference on Climate Change could be an important turning point in efforts to mitigate climate change.
In February 2016, the Supreme Court unexpectedly stayed implementation of the Clean Power Plan in a 5-to-4 vote. Within days, Justice Antonin Scalia died, and Senate Republicans appeared poised to block any appointment by President Obama. Thus the future of the plan is in doubt. If the Court ultimately strikes the plan down, the climate accord reached at the 2015 UN Conference could be thrown into disarray.
In 2013, professors from Stanford and Cornell published a detailed roadmap showing how virtually all of New York's energy needs could be met with readily available, off-the-shelf wind, water, and solar (WWS) technology.38 Electricity would become the dominant source of power. It would be generated by onshore and offshore wind turbines (just over sixteen thousand five-megawatt [MW] units), solar-photovoltaic (PV) plants (just over 820, generating 50 MW each), residential rooftop systems (five million at five kilowatts [kW] each), and PV systems placed on commercial and governmental buildings (around five hundred thousand 100 kW systems). The plan also envisions concentrated solar plants generating a combined 38,700 MW. These sources would be supplemented by geothermal systems supplying around 5 percent of the state's energy demand, tidal systems supplying 1 percent, and hydroelectric systems supplying 5.5 percent (almost all of which exist today). The transportation sector, with the exception of air traffic, would be powered by hydrogen fuel cells equipped with regenerative energy capture systems, such as those already in use in hybrid automobiles, buses, locomotives, and trucks. Residential, commercial, and governmental buildings would be heated by electrical resistive units. At night and when there is no wind, the grid would be powered by stored energy: batteries (some of them in vehicles), heat and cold banked in purpose-built sinks, and hydrogen produced by splitting water molecules.
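A rough tally of the plan's electrical nameplate capacity can be reconstructed from the rounded unit counts quoted above. This is a sketch for orientation only; the published roadmap's exact figures differ slightly, and nameplate capacity says nothing about how often each source actually runs:

```python
# Nameplate-capacity tally for the New York WWS plan, using the rounded
# unit counts quoted in the text (illustrative, not the study's own sums).
MW_PER_KW = 1e-3

wind        = 16_000 * 5                  # 5-MW turbines, on- and offshore
pv_plants   = 820 * 50                    # 50-MW utility-scale PV plants
residential = 5_000_000 * 5 * MW_PER_KW   # 5-kW rooftop systems
commercial  = 500_000 * 100 * MW_PER_KW   # 100-kW commercial/gov systems
csp         = 38_700                      # concentrated solar, total MW

total_mw = wind + pv_plants + residential + commercial + csp
print(f"{total_mw / 1000:.0f} GW nameplate, excluding geothermal, tidal, hydro")
# -> roughly 235 GW
```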
Gains in efficiency due to grid modernization and the inherent efficiency of electricity relative to fossil fuels would yield a projected 37 percent overall power saving. Additional improvements would come from the replacement or de novo installation of efficient appliances, the use of LEDs for lighting, and similar steps designed to decrease the use of electricity.
The authors of the New York proposal provide an accounting of the costs associated with migrating to sustainable WWS power. True, there would be capital and maintenance costs for installing turbines and other equipment. Once installed, however, there would be large savings, because the energy that powers WWS generators is free. Between 2020 and 2030, electricity under the proposed plan is estimated to cost between $0.04 and $0.11 per kilowatt-hour (kWh), including transmission and distribution. This is substantially less than the estimated $0.178 to $0.207 per kWh for electricity generated by burning fossil fuels. These estimates include not only the cost of electricity itself but also the so-called externalities, such as the health costs of the mortality and morbidity caused by air pollution from fossil fuel combustion. These costs are real, but they are rarely included in reports of the "real" cost of power.
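To put the per-kilowatt-hour figures in household terms, consider a home using 10,000 kWh per year; that consumption figure is an assumption chosen for illustration, not a number from the study:

```python
# Illustrative annual household savings under the WWS cost estimates.
annual_kwh = 10_000                 # assumed household usage (illustrative)

wws = (0.04, 0.11)                  # $/kWh, projected WWS range, 2020-2030
fossil = (0.178, 0.207)             # $/kWh, fossil power incl. externalities

low = (fossil[0] - wws[1]) * annual_kwh    # smallest gap: ~$680/yr
high = (fossil[1] - wws[0]) * annual_kwh   # largest gap: ~$1,670/yr
print(f"${low:,.0f} to ${high:,.0f} saved per household per year")
```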
New York produces only small amounts of fossil fuels, so few jobs would be lost in this sector of the economy. This job loss would be more than compensated for by the large number of new jobs that would be created to construct and maintain the new electricity-generating units. Overall, the plan is a net job creator.
These sources of electrical power do not produce hazardous air pollutants, such as oxides of sulfur and nitrogen, small particles, and carbon monoxide, that harm health. Pollution-related morbidity and mortality would therefore fall, along with the corresponding health care costs and the economic opportunities lost to pollution-related illness. The median estimate of these savings is $33 billion per year, around 3 percent of the state's gross domestic product. By the year 2050, climate change costs to the nation would fall by an estimated $3.3 billion per year.
When something sounds too good to be true, it probably is. In this case, it is almost too much to hope for; political will among our leaders, who have become followers, is the missing ingredient.
Thus far, I have largely presented relatively conventional measures that fit under the umbrella of health, as defined broadly by the World Health Organization. In the final portion of this chapter, I will turn to more extreme measures that are advocated by some as reasonable, technologically based strategies to combat climate change. This calls to mind one of the aphorisms of Hippocrates: “For extreme diseases, extreme methods of cure ... are most suitable.”
No matter what decisions we make as a civilization on our disorganized and occasionally contradictory paths toward the future, we must curtail emissions of greenhouse gases. Failure to do so will almost certainly create a climate legacy that none of us want to leave for our children, grandchildren, and those who will follow.
On one hand, there are proponents of rapid movement toward sustainable energy sources using available, off-the-shelf technology; on the other, there are proponents of solutions that require substantial research and development. The first position is exemplified by the plan for New York discussed earlier in this chapter. The second relies on technologies still under development, the focus of the remainder of this chapter.
In the spring of 2015, the National Academy of Sciences (NAS) issued the findings of its Committee on Geoengineering Climate, which conducted a critical evaluation of the science, risks, and potential benefits of selected strategies designed to cope with climate change. The NAS prefers the term climate intervention to the more familiar geoengineering; the rationale is that engineering implies a higher level of precision and certainty than the evidence warrants. The committee presented its findings in two reports, one on solar radiation management and the other on carbon dioxide removal technologies.39 The studies were supported by the NAS, NASA, NOAA, and, somewhat surprisingly, the "US Intelligence Community." The reports can be downloaded at no cost from the NAS website (http://www.nationalacademies.org).
The NAS report on managing carbon adds the word reliable to the standard terminology, carbon capture and storage. This emphasis on the reliability of the storage is entirely appropriate. It does the world little good if the carbon dioxide that is stored escapes into the atmosphere. I also find the use of the word carbon as shorthand for carbon dioxide to be somewhat confusing and even misleading. Perhaps a linguistics consultant decided that the word carbon is less likely to trigger opposition than the full name of the greenhouse gas that we must deal with.
Some approaches to managing carbon dioxide require little, if any, technology. They are based on improving land-management practices. As shown in figure 2.3, which illustrates the simplified carbon budget for the earth, between 1.7 and 2.6 trillion tons of carbon (not carbon dioxide) are sequestered in soils, and between 495 and 715 billion tons are trapped by vegetation—a good thing and an explanation for the annual dip in atmospheric carbon dioxide shown in the Keeling Curve (see chapter 2). This movement of carbon from the atmosphere to the land amounts to around three billion tons per year. Land-management strategies could be redesigned to minimize loss of carbon from the soil and vegetation and maximize the movement from the atmosphere to the earth stores.
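Because the budget is tallied in tons of carbon rather than carbon dioxide, a standard conversion is worth making explicit: each ton of carbon corresponds to 44/12, or about 3.67, tons of carbon dioxide (the ratio of the molecular mass of CO2 to the atomic mass of carbon). The annual land uptake of roughly three billion tons of carbon therefore corresponds to about eleven billion tons of carbon dioxide:

$$3\ \text{Gt C} \times \frac{44}{12} \approx 11\ \text{Gt CO}_2$$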
The NAS report indicates that deforestation released approximately three gigatons of carbon dioxide annually between 2002 and 2011. Most of this deforestation occurs in the tropics as land is cleared and trees are burned to make way for crops and grazing. Deforestation accounts for around 10 percent of all anthropogenic greenhouse gas emissions. An examination of the seasonal dips in the atmospheric carbon dioxide concentration shown in the Keeling Curve (see figure 2.4) demonstrates the potential for removal of carbon dioxide from the atmosphere by forests. Each spring and early summer, the forests in the northern hemisphere begin to grow new leaves and branches. This is reflected in the transient dip in the atmospheric carbon dioxide concentration seen at that time of the year.
The growth of cover crops on land that is not producing food will also remove carbon dioxide from the atmosphere. Plowing these crops into the soil will sequester additional amounts of carbon dioxide in the soil.
Another way to capture carbon dioxide is to react it with limestone, which consists largely of calcium carbonate. When this mineral is exposed to carbon dioxide and water, it dissolves to form calcium and bicarbonate ions, as shown in the following equation:
CO2 + CaCO3 + H2O → Ca2+ + 2 HCO3−
The dissolved bicarbonate ions enter seawater, making it more alkaline. This could be a good thing, because carbon dioxide dissolving in our oceans has made them more acidic, threatening marine ecosystems. Eventually, the bicarbonate is trapped as carbonate in the shells of marine animals, storing the carbon. Similar reactions between carbon dioxide and carbonate-containing minerals may prevent at least some carbon dioxide from traveling back to the surface after deep well injection. Some envision this process as a means to capture and sequester the carbon dioxide formed by burning fossil fuels, as discussed below.
Planktonic algae and other plants that live in the ocean remove carbon dioxide from the surface layers of the ocean and convert it to sugars and other organic molecules by photosynthesis. This trapped carbon either will be eaten by marine animals or will settle to the bottom of the ocean. This natural process has attracted the attention of some who have proposed stimulating these reactions by fertilizing the ocean with iron. Both controlled and poorly controlled experiments have been performed to test this strategy; one of the first of these is referred to as the IronEx II experiment and was conducted in an equatorial region of the Pacific Ocean.40 The success of the trial is reflected in the title of the paper describing it: “A Massive Phytoplankton Bloom Induced by an Ecosystem-Scale Iron Fertilization Experiment in the Equatorial Pacific Ocean.” Other so-called experiments appear to have been conducted by rogue individuals. One such study was alleged to have been conducted by an American businessman who claimed to have dumped one hundred tons of iron-rich, dirt-like material into the ocean in 2012.41 Herein lies one of the problems associated with techniques of this type: so-called experiments can be performed by virtually anyone, without scientific, technical, or ethical oversight or international cooperation.
The most conventional process for the removal of carbon dioxide produced by burning fossil fuels is generally referred to as carbon capture and sequestration or storage. In broad terms, a fuel is burned and the resulting carbon dioxide is captured, liquefied, transported to a secure repository, and sequestered for periods measured on a geological scale. The fuel source may be biomass or fossil in origin, or some combination of the two.
A recent analysis of the use of biomass reveals some of the limits of this strategy.42 Large amounts of land are required to grow, for example, switchgrass, which must be fertilized with both nitrogen- and phosphorus-containing fertilizers. Water is also required, a problem for areas already stricken by drought or where droughts may develop. Finally, the process is not likely to be very energy efficient: the example provided in the study predicts an overall energy efficiency of just over 47 percent. The greatest losses occur during the processing needed at the electrical generating unit to prepare the grass for burning (62 percent efficient) and during the capture step itself (89 percent efficient).
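Chained processes lose efficiency multiplicatively, which is why the overall figure is so much lower than either step alone suggests. A minimal sketch, using the two stage efficiencies quoted above; the implied efficiency of the remaining steps is my inference, not a number from the study:

```python
# Overall efficiency is the product of the stage efficiencies.
prep = 0.62      # preparing the grass for burning at the generating unit
capture = 0.89   # the CO2 capture step

print(f"prep and capture alone: {prep * capture:.2f}")  # ~0.55

overall = 0.47                       # the study's overall figure
other = overall / (prep * capture)   # implied efficiency of all other steps
print(f"implied efficiency of remaining steps: {other:.2f}")  # ~0.85
```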
The more typical consideration, and the one promoted most heavily by the coal industry, involves burning coal.43 There are several different processes in the combustion step, ranging from more or less straight combustion to more sophisticated strategies that involve gasification. Some use ordinary air, whereas others use highly enriched oxygen. The goal of the latter strategy is to produce a post-combustion gas that is highly enriched in carbon dioxide.
The use of the term capture is misleading and may cause confusion. In the carbon (dioxide) capture and storage (CCS) literature, capture means producing a flue gas with a very high percentage of carbon dioxide; the higher the percentage, the better. When the police capture a suspect, one envisions circumstances that preclude escape; not so with CCS. The carbon dioxide will readily enter the atmosphere unless energy-intensive processes are used to keep it contained. The next, all-important step is compressing the captured carbon dioxide until it becomes a liquid.
The liquefied carbon dioxide must then be transported to a site where it can be sequestered from the atmosphere. Various studies have shown that pipelines are likely to be the cheapest way to transport liquid carbon dioxide. There are dangers inherent in transporting any product by pipeline; for carbon dioxide, the greatest risks are rupture and leakage. Although carbon dioxide does not burn (indeed, it is widely used in fire extinguishers), it is toxic and can cause asphyxiation and death at concentrations around 17 percent by volume.44 Because carbon dioxide is heavier than air, gas escaping from a pipeline is likely to settle in low-lying areas or basements, where it may go undetected by anyone who enters. A disaster at Lake Nyos in Cameroon illustrates the danger.45 Lake Nyos sits above volcanic magma and trapped carbon dioxide. On August 21, 1986, a natural release of carbon dioxide from beneath the lake killed around 1,700 people as the gas flowed downhill into surrounding villages.
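The statement that carbon dioxide is heavier than air follows directly from molar masses, a standard chemistry fact rather than a figure from the cited reports:

$$\frac{M_{\mathrm{CO_2}}}{M_{\mathrm{air}}} \approx \frac{44\ \mathrm{g/mol}}{29\ \mathrm{g/mol}} \approx 1.5$$

At the same temperature and pressure, escaping carbon dioxide is thus about 50 percent denser than air, which is why it pools in hollows, valleys, and basements.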
The final step in the process, and perhaps the most difficult, is permanent storage in a manner that keeps the CO2 separated from the atmosphere forever. The strategy proposed most frequently relies on injection into deep wells specially designed and constructed for this purpose. Supporters note that injection has been used widely to enhance the recovery of oil from wells that were running dry. Earthquakes attributed to the injection of wastewater from hydraulic fracturing have been reported in Ohio; proponents of injection have minimized this problem, even though the practice has been halted in that state. A report issued jointly by the Oklahoma and US Geological Surveys in the spring of 2015 is likely to alter perceptions of earthquake safety substantially.46 Between 1978 and 1999, Oklahoma averaged about 1.6 earthquakes per year measuring 3.0 or higher on the Richter scale. The Richter scale is logarithmic: each unit increase represents a tenfold increase in the amplitude of the recorded ground motion and roughly a thirtyfold increase in the energy released. Beginning in 2009, about the time that deep well injection of fracking waste began in earnest, the number of earthquakes started to rise. There were twenty that year, 109 in 2013, and 584 in 2014, of which nineteen exceeded magnitude 4.0. The USGS estimates that there will be 941 earthquakes of magnitude 3.0 or greater by the end of 2015 if the current pace continues. These data are shown in figure 10.4.
Figure 10.4 The earthquake history of Oklahoma. The USGS estimates that there will be 941 earthquakes with a magnitude of 3.0 or greater for the year 2015 if the frequency continues to accelerate at the rate observed in the spring of that year. In the interval between 1978 and 1999, the state averaged 1.6 earthquakes of that magnitude annually. Modified from a color graph published by the United States Geological Survey. United States Geological Survey, “Oklahoma Earthquake Information,” last updated April 18, 2014, http://earthquake.usgs.gov/earthquakes/states/?region=Oklahoma, accessed December 29, 2015.
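The practical meaning of the logarithmic magnitude scale can be made concrete with the standard relation in which radiated seismic energy grows as 10 raised to 1.5 times the magnitude; this is textbook seismology, not a calculation from the USGS report:

```python
# Energy comparison between earthquake magnitudes, using the standard
# relation: radiated energy scales as 10 ** (1.5 * magnitude).
def energy_ratio(m1: float, m2: float) -> float:
    """Ratio of seismic energy released by a magnitude-m2 quake vs m1."""
    return 10 ** (1.5 * (m2 - m1))

print(f"M4.0 vs M3.0: {energy_ratio(3.0, 4.0):.1f}x more energy")  # ~31.6x
print(f"M5.0 vs M3.0: {energy_ratio(3.0, 5.0):.0f}x more energy")  # ~1000x
```

A magnitude 4.0 event thus releases about thirty times the energy of a magnitude 3.0 event, which is why the nineteen quakes above 4.0 in 2014 matter more than the raw counts alone suggest.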
In its report, the NAS states clearly that there is no substitute for reducing carbon dioxide emissions; without this critical commitment, all other strategies have serious drawbacks. For plans designed to remove carbon dioxide from the atmosphere, the committee makes these additional points:
Removing carbon dioxide addresses the most important cause of climate change—high greenhouse gas concentrations.
Unlike other proposed solutions, these plans do not pose risks on a global scale (earthquakes are local, not global, in their nature).
The plans have a high cost and are likely to be judged solely on cost.
The effects will be modest at best because of the long-lived nature of atmospheric carbon dioxide (a lifetime of thirty to thirty-five thousand years). Large-scale implementation by major carbon dioxide emitters is a prerequisite to success.
These plans do not require new international agreements before implementation.
Incremental implementation is possible.
Unlike some interventions designed to mitigate climate change, abrupt termination would have only a small effect (see below).
Coal companies are among the chief proponents of CCS. Is this a legitimate effort to mitigate climate change? Or, as cynics claim, is it merely a device to promote mining and sell more coal at a time when natural gas is replacing coal in many power plants? The answer might be found by following the money, an issue detailed in an August 2015 New York Times report titled "King Coal, Long Besieged, Is Deposed by Market."47 At least four large coal producers have declared bankruptcy, and the stock prices of others have fallen drastically. Finally, we should ask who is paying for the public relations campaign that promotes CCS.
A final question remains. Even if CCS were to operate at the highest projected level of efficiency, would it prevent enough carbon dioxide from entering the atmosphere to make a difference?
Albedo describes the degree to which solar energy is reflected back into space by the earth. High-albedo regions reflect a great deal of energy; low-albedo regions reflect little or none. Measures that increase the earth's albedo lead to cooling, and the reverse is also true. The loss of highly reflective snow and ice around the Arctic, for example, has decreased the earth's albedo in a way that favors warming. Conversely, huge volcanic eruptions that inject sulfates and dust into the atmosphere increase the earth's albedo and produce transient cooling. This effect was confirmed by direct measurements made after the June 1991 eruption of Mount Pinatubo in the Philippines.48 Many believe that the 1815 eruption of Mount Tambora led to what has been called the year without a summer. That cataclysmic event is the centerpiece of Gillen D'Arcy Wood's book, Tambora: The Eruption That Changed the World.
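The leverage albedo exerts on temperature can be illustrated with the textbook zero-dimensional energy balance, an idealization that is not drawn from the NAS report. Setting absorbed sunlight equal to emitted thermal radiation gives the planet's effective temperature:

$$T_e = \left[\frac{S(1-\alpha)}{4\sigma}\right]^{1/4}$$

where $S \approx 1361\ \mathrm{W/m^2}$ is the solar constant, $\alpha \approx 0.3$ is the planetary albedo, and $\sigma$ is the Stefan-Boltzmann constant. These values give $T_e \approx 255\ \mathrm{K}$, and raising the albedo by just 0.01 lowers $T_e$ by roughly 1°C, which is why modest changes in reflectivity, whether from lost sea ice or injected sulfates, matter so much.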
Volcanic eruptions of the type exemplified by Pinatubo and Tambora have complex effects on the climate, related to the injection of massive amounts of sulfates and dust into the stratosphere, where they reflect solar energy back into space. In technical terms, the products of the eruptions alter the atmospheric aerosol, the fine suspension of solid particles and liquid droplets in the air. The portion of the NAS report that concentrates on albedo modification focuses on strategies designed to modify the stratospheric aerosol or, alternatively, to change the albedo of the clouds over the oceans. Note, however, that important regulations promulgated under the authority of the Clean Air Act are designed to reduce sulfates and particles in the air, a tension to which we will return.
Although sufficiently large volcanic eruptions are known to affect the climate, incorporating this strategy into a realistic attempt to cool the earth cannot be justified on the data at hand. As the NAS Climate Interventions report puts it: "No well-documented field experiments involving controlled emissions of stratospheric aerosols have yet been conducted."49 An enormous amount of research would have to be performed before any evidence-based decision could be made.
There also seems to be little to gain by injecting sulfur dioxide into the atmosphere, and a lot to lose. Although doing so would lead to cooling, it would move the earth in exactly the opposite direction from the intent of the Clean Air Act. Sulfur dioxide is one of the criteria pollutants listed by the EPA: a highly reactive chemical that irritates the lungs and other tissues. It also reacts with other components of the atmospheric aerosol to form fine particles, which are criteria pollutants as well. Particulate matter and sulfur dioxide both have large detrimental effects on health. One might think of sulfate injection as jumping out of the frying pan and into the fire.
The case is only marginally better for altering the clouds over the oceans to reflect sunlight. Low-altitude clouds covering large portions of the earth's oceans scatter sunlight back into space, reflecting energy that would otherwise be absorbed by the darker water beneath. This is the basis for considering cloud modification as an earth-cooling measure.
The NAS report reviews several experiments that provide some proof-of-concept data. Under very specific and limited conditions, diesel exhaust particles emitted by ships at sea may serve as a nidus for the condensation of water vapor to form clouds, somewhat analogous to the contrails produced by high-altitude aircraft. However, these ships consume large amounts of fuel, around one hundred thousand gallons per day, which would almost surely create an unacceptably large carbon dioxide burden. There is no such thing as a free lunch.
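The fuel figure alone suggests the scale of the problem. Using a standard emission factor of roughly 10 kilograms of carbon dioxide per gallon of diesel burned (a general figure, not one from the NAS report):

```python
# Rough CO2 burden of one cloud-brightening ship.
gallons_per_day = 100_000      # fuel consumption quoted in the text
kg_co2_per_gallon = 10.1       # ~2.68 kg CO2 per liter of diesel

tons_per_day = gallons_per_day * kg_co2_per_gallon / 1000
print(f"~{tons_per_day:,.0f} metric tons of CO2 per ship per day")
# -> roughly 1,000 tons/day
```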
Other, more fanciful strategies include blocking solar radiation with mirrors or other devices launched by rockets, manipulating the albedo of the earth's surface, and manipulating cirrus clouds. Cirrus clouds are composed of ice crystals that both trap outgoing heat (the dominant effect) and reflect incoming solar energy. Decreasing their opacity, extent, and lifetime might therefore affect the climate in the short term.
Aside from the all-important fact that little is known about these potential technological fixes, the NAS report lists other potential complicating factors:
These measures do nothing to address the underlying cause of climate change: uncontrolled emissions of greenhouse gases.
They pose significant risks that are global, novel, and poorly understood.
There are no international bodies that could exercise oversight.
Unilateral, even rogue implementation creates the potential for unforeseeable threats.
Not every place on earth would be affected evenly. Thus, unilateral actions could benefit some at the expense of others. Could this create the potential for climate warfare?
Abrupt cessation of these “fixes” would lead to rapid warming, because greenhouse gases would continue to accumulate during periods of active interventions.
Because carbon dioxide would continue to accumulate in the atmosphere, other effects of the gas would continue to increase, including ocean acidification, effects on agriculture, and so on.
These strategies would likely be much less expensive than methods that capture and sequester carbon dioxide, and they could act rapidly after a perceived climate emergency; some are capable of acting on time scales measured in years rather than decades or centuries. At this point, climate engineering appears to be a mixed bag of potential benefits and losses.
In their plan to convert New York's energy sources to wind, water, and solar, Jacobson and his colleagues envisioned storing electricity in batteries, presumably in hybrid vehicles or similar battery arrays.50 Owners of these vehicles would plug them into stations that would draw on the stored energy when needed, notably when the sun was not shining or the wind was not blowing. Shortly after the publication of their report, additional and possibly more practical strategies for battery storage began to emerge. At least two large firms are developing batteries for this purpose. Perhaps the most prominent and highly publicized is manufactured by Tesla, Elon Musk's firm, which makes lithium-ion batteries for its automobiles.51 Musk received substantial news coverage in February and April 2015 when he announced that a version of this battery, produced in a $5 billion factory, would be marketed for home use. Earlier coverage suggested that the factory would be able to produce five gigawatts of battery storage capacity by 2020. UniEnergy Technologies also plans to market liquid flow batteries capable of delivering a megawatt of power over three to four hours.52 That is enough electricity to power around five hundred typical homes. These batteries are about the size of the containers used for transoceanic shipping, and the company's website suggests they are designed to be moved easily, perhaps a huge advantage during an emergency such as a hurricane or flood. This is clearly a rapidly emerging field.
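The flow-battery figures are easy to sanity-check. A unit that delivers one megawatt for three to four hours stores three to four megawatt-hours, and spreading one megawatt across five hundred homes allots two kilowatts per home, a plausible average draw:

```python
# Sanity check on the flow-battery figures quoted above.
power_mw = 1.0
hours_low, hours_high = 3, 4
homes = 500

storage_low = power_mw * hours_low      # 3.0 MWh
storage_high = power_mw * hours_high    # 4.0 MWh
kw_per_home = power_mw * 1000 / homes   # 2.0 kW per home

print(f"storage: {storage_low:.0f}-{storage_high:.0f} MWh; "
      f"{kw_per_home:.1f} kW per home")
```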
Energy can also be stored in the form of hydrogen, the most abundant element in the universe. A group of Chinese investigators appears to have made significant progress toward the goal of splitting water molecules into hydrogen and oxygen.53 In a paper with the enticing but daunting title "Metal-Free Efficient Photocatalyst for Stable Visible Water Splitting via a Two-Electron Pathway," the group opens the door to producing large amounts of hydrogen without exotic metal catalysts. The researchers used carbon to make nanodots, quasi-spherical carbon particles less than ten nanometers in diameter, and achieved an efficiency of 2 percent, which, with further refinement, could make it possible to produce hydrogen gas on a commercial scale in a cost-effective manner, using sunlight as the energy source. The hydrogen could be stored and used later in fuel cells or other energy-producing devices.
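The underlying reaction and its energy requirement are standard thermochemistry rather than figures from the cited paper:

$$2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}, \qquad \Delta G^{\circ} \approx 237\ \mathrm{kJ\ per\ mole\ of\ H_2}$$

In photocatalytic splitting, sunlight supplies this free energy; a 2 percent efficiency means that about one-fiftieth of the incident solar energy ends up stored as chemical energy in the hydrogen.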
In an April 2015 op-ed in the New York Times, former Republican Speaker of the House Newt Gingrich called for doubling the NIH budget.54 He cited the appropriate data concerning the impacts of diseases on the American public, concluding with the statement that supporting research to facilitate medical progress was an appropriate role for the federal government. We should not stop with the NIH; this plea should extend to all federal and nonfederal research programs. President Obama used different words in his January 25, 2011, State of the Union speech when he called on us all to realize the huge American potential to “out-innovate, out-educate, and out-build” the rest of the world.55
Experience has shown clearly what must be done to minimize the effects of climate change. Although there is no substitute for reducing greenhouse gas emissions, vigorous adaptive measures are also needed. We must be better prepared to cope with more heat waves. The public health infrastructure must be strengthened worldwide to cope with the threats posed by infectious diseases. Agricultural methods must be improved so that we can feed everyone. Solutions to the multiple dilemmas posed by rising sea level must be developed and acted on; we need to be more like the Dutch and less like Floridians. The root causes of violence require attention.
A number of targets have been set to minimize climate change. The group 350.org was established with the goal of keeping the atmospheric carbon dioxide concentration below 350 ppm; that target was passed long ago and was probably never realistic. The World Bank called for keeping the global temperature increase below 4°C.56 A lower target, limiting the increase to 2°C, emerged from the Copenhagen conference.57 Although that meeting failed to yield an enforceable agreement, it gave the 2°C goal additional visibility. A recent report suggests that even this goal, which seems elusive, may not be low enough to prevent so-called tipping points, defined by the IPCC Fifth Assessment Report as "a large-scale change in the climate system that takes place over a few decades or less, persists (or is anticipated to persist) for at least a few decades and causes substantial disruptions in human and natural systems."58 A report published in October 2015 identified thirty-seven such events affecting oceans, sea ice, snow cover, permafrost, and terrestrial ecosystems; eighteen of them were anticipated to occur at or below the 2°C target. The earth's climate system may not be as resilient as we had hoped.
We must join our partners across the world to accept and meet the challenge of climate change. There are huge barriers that must be overcome—and quickly. We lack the worldwide political leadership and the institutional organizations that are needed to identify and execute climate change policies. More stakeholder involvement is needed. At every stage, more research, development, and funding are needed. We must succeed. The cost of failure is too high to bear.