Augescunt aliae gentes, aliae minuuntur, inque brevi spatio mutantur saecla animantum et quasi cursores vitai lampada tradunt.
(Some nations rise, others diminish, the generations of living creatures are changed in short time, and like runners carry on the torch of life.)
—Titus Lucretius Carus, De Rerum Natura, II, 77-79
Fundamental changes in human affairs come both as unpredictable discontinuities and as gradually unfolding trends. Discontinuities are more common than is generally realized, and chapter 2 addressed those natural and anthropogenic catastrophes that have the greatest potential to affect the course of global civilization. But another category of discontinuities deserves at least a brief acknowledgment, that of epoch-making technical developments. Incremental engineering progress (improvements in efficiency and reliability, reduction of unit costs) and gradual diffusion of new techniques (usually following fairly predictable logistic curves) are very much in evidence, but they are punctuated by surprising, sometimes stunning, discontinuities.
By far the most important concatenation of these fundamental advances took place between 1867 and 1914, when electricity generation, steam and water turbines, internal combustion engines, inexpensive steel, aluminum, explosives, synthetic fertilizers, and electronic components created the technical foundations of the twentieth century (Smil 2005a). A second remarkable saltation took place during the 1930s and 1940s with the introduction of gas turbines, nuclear fission, electronic computing, semiconductors, key plastics, insecticides, and herbicides (Smil 2006). The history of jet flight is a perfect illustration of the inherently unpredictable nature of these rapid technical shifts. In 1955 it did not require extraordinary imagination to see that intercontinental jet travel would become a large-scale enterprise, but no one could have predicted (a year after the third fatal crash of the pioneering British Comet) that by 1970 there would be an intercontinental plane capable of carrying more than 400 people. Introduction of the Boeing 747 was an unpredictable step, a result of Juan Trippe’s (PanAm’s chairman) vision and William Allen’s (Boeing’s CEO) daring, not an inevitable outcome of a technical trend (Kuter 1973).
Some political discontinuities of the second half of the twentieth century were equally stunning. The year 1955 was just six years after the Communist victory in China’s protracted civil war, two years after China’s troops made it impossible for the West to win the Korean War and forced a standoff along the 38th parallel, and three years before the beginning of the worst (Mao-made) famine in history (Smil 1999). At that time China, the legitimacy of its regime unrecognized by the United States, was an impoverished, subsistence agrarian economy, glad to receive a few crumbs of Stalinist industrial plant, and its annual per capita GDP was less than 4% of the U.S. average.
Fifty years later China, still very much controlled by the Communist party, had become a workshop for the world, an indispensable supplier of goods ranging from pliers to cell phones. In 2005 its per capita GDP (expressed in PPP values) was at about $4,100 (comparable to that of Syria or Namibia), and the country was providing key support for the United States’ excessive spending through its purchases of U.S. Treasury bills. How could anyone have anticipated all these developments in 1955, or for that matter in September 1976, right after Mao’s death, or in 1989, after the Tian’anmen killings?
Demographic discontinuities offer perhaps the best illustration of a continuum between abrupt changes and gradually unfolding trends. Clear directional changes (rather than mere fluctuations) in fertility, mortality, or marriage rates become obvious only after one or two decades of reliable records. A major change accomplished in a decade is an abrupt shift on the demographic time scale (where one generation, 15-25 years, is the basic yardstick) but a rapidly unfolding trend by any other temporal metric. Declines in fertility (leaving aside China’s one-child policy) have best exemplified these relatively rapid shifts/unfolding trends.
Two generations ago Europe’s Roman Catholic South had total fertility rates close to or above 3.0, but by the year 2000 Spain and Italy shared the continent’s record lows with Czechs, Hungarians, and Bulgarians (Billari and Kohler 2002; Rydell 2003). In Spain’s case, most of this decline took place in just a single decade, between the late 1970s and the late 1980s (fig. 3.1). In Canada a similar phenomenon was seen not only in Roman Catholic Quebec, whose fertility during the mid-1920s was nearly 60% higher than Ontario’s, but also among the francophones outside the province, whose fertility fell from nearly 5.0 during the late 1950s to below 1.6 by the mid-1990s (Rao 1974; O’Keefe 2001).
Energy transitions provide another illustration of a continuum between abrupt changes and gradually unfolding trends. Substitution of a dominant source of primary energy by another fuel or by primary electricity (hydro and nuclear) has been slow. It usually takes decades before a new source captures the largest share of the total supply. But in some countries these primary energy transitions have been relatively fast. An outstanding example is France’s successful strategy to produce most of its electricity from nuclear fission; the country quintupled its nuclear generation during a single decade, the 1980s (Dorget 1984).
The probabilities of unfolding trends capable of changing the fortunes of nations and reshaping world history are not any easier to assess than those of catastrophic discontinuities. There are three reasons for this.
First, many long-term trends are not explicitly identified as they unfold and are recognized only ex post once they end in discontinuities. Recent history offers no better example than the demise of the Soviet Union. In retrospect it is clear that during the second half of the twentieth century the Soviet regime was falling steadily behind in its self-proclaimed race with the United States. Yet the West—spooked by Nikita Khrushchev’s famous threat, “We will bury you,” awed by a tiny Sputnik, and willing to believe in a huge missile gap—operated for two generations on the premise of growing Soviet might.
In reality, any Soviet domestic and foreign gains were outweighed by internal and external failures, and improvements in the country’s economic and technical abilities were not sufficient to prevent its average standard of living and its military capability from falling further behind those of its great rival. I became convinced of the inevitability of the Soviet Union’s demise in Prague during the spring of 1963, when, as yet another sign of glacially progressing de-Stalinization, Time, Newsweek, and The Atlantic Monthly as well as many technical publications became available in the Carolinum University’s library. Comparing this sudden flood of facts, figures, and images with the realities of the crumbling society around me, I could draw only one conclusion: contrary to Khrushchev’s boasts, the gap between the East and the West was widening and the Communist regimes would never catch up. But I, like others, was not bold enough to imagine the collapse coming within a single generation, and thought that the Soviet Union would endure into the next century.
Second, we cannot foresee which trends will become so embedded as to seem immune to any external forces and which ones will suddenly veer away from a predictable course or never come to pass. The history of nuclear fusion is an example of an unrealized trend. Ever since the late 1940s nuclear physicists have been anticipating that the first breakthroughs in controlled fusion were 30-40 years away and would be followed by relatively rapid commercialization of fusion-powered electricity generation. Skeptics should be forgiven for concluding that this ever-receding horizon will not be reached soon, perhaps not even by 2100 (Parkins 2006).
Third, what follows afterwards is often equally unpredictable: a new long-lasting trend or a prolonged oscillation, a further intensification or an irreversible weakening? This challenge can be illustrated by concerns about the rapidly approaching end of the oil era. Are we facing only more unpredictable price fluctuations that will eventually be moderated by transition to natural gas and renewable energies, or is this the real beginning of the end of cheap oil as the most important energy source of modern civilization? In turn, the latter could be seen as fairly catastrophic (we do not now have an equally flexible and affordable alternative) or as a tremendous opportunity for technical innovations and social adjustments that would actually improve the world’s economic fortunes and environmental quality (Smil 2003; 2006).
My assessment of globally important demographic, economic, political, and strategic trends is done in three ways. First, I address the most fundamental future shift in the global economy. It is not, as one might think, further globalization but rather the coming epochal energy transition. Second, I look at long-term trends as they affect the major protagonists on the world scene, the United States, the European Union, China, Japan, Russia, and the militant political Islam. Given the number and complexity of these factors, my approach is to address only those key trends that will most likely shape the fortunes of the world’s leading economies. Third, I close the chapter with some musings about who is on top. That section also includes some conclusions about the equivocal effects of continuing globalization of the world’s economy, above all, rising inequality.
Energy flows and conversions sustain and delimit the lives of all organisms, and hence also of such superorganisms as societies and civilizations. No human action can take place without harnessing and transforming energies. From a fundamental biophysical (thermodynamic) perspective, the fortunes of nations are not determined primarily by strategic designs or economic performance but by the magnitude and efficiency of their energy conversions (Smil 2008). Global consumption of coal (supplemented by small amounts of crude oil and natural gas) surpassed that of phytomass (wood, charcoal, crop residues) during the 1890s (Smil 1994). Coal’s share in the world’s total primary energy supply (TPES), excluding the phytomass, stood at 95% in 1900; it slid below 50% during the early 1960s, but in 2005 it was still at about 28%. Crude oil accounted for 4% of global TPES in 1900, 27% in 1950, and about 46% in 1975. By 2005 it was about 36%, while the importance of natural gas kept rising; natural gas supplied nearly 24% of global TPES in 2005 (fig. 3.2).
Given the need to put in place new infrastructures and the difficulty of rapidly discarding existing capital investments, the substitution process is slow, with a new energy source taking about a century after its initial introduction to capture half of the market share. Until the early 1970s the process was highly regular, leading Marchetti and Nakićenović (1979) to conclude that the entire energy system operates as if driven by a hidden clock. In reality, the pace and timing of the transitions are far less predictable. OPEC’s oil price hikes (1973-1974 and 1979-1981) clearly disrupted the previously orderly energy substitutions. Coal has remained important (holding the same 28% share in 2005 as it did in 1975); oil’s share fell from 46% to 36%, and natural gas gained only half the expected share (25% by 2005). Altogether, fossil fuels supplied about 88% of global TPES in 2005, down just five percentage points from 1975, whereas new nonfossil alternatives (other than well-established hydro and nuclear electricity generation) are still only marginal contributors.
Although coal has lost its traditionally large transportation, household, and industrial markets, it generates about 40% of the world’s electricity (WCI 2006). It is the principal fuel for the rapidly rising production of cement and coke, and it remains the key energizer and reducing agent of iron smelting. The energy density of natural gas is typically around 34 MJ/m3 compared to 34 GJ/m3 for oil, or a 1,000-fold difference. This limits the use of gas as transportation fuel; gas is also more costly to transport and store. Consequently, modern economies will do their utmost to stay with the more convenient liquid fuels, and contrary to alarmist claims about the end of oil production, the transition to renewable energies will be a protracted affair.
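The 1,000-fold gap between the volumetric energy densities of gas and oil follows directly from the units (MJ vs. GJ per cubic metre); a minimal arithmetic check, using the representative values quoted above:

```python
# Volumetric energy densities as quoted above (representative values).
GAS_MJ_PER_M3 = 34        # natural gas: ~34 MJ per cubic metre
OIL_GJ_PER_M3 = 34        # crude oil:   ~34 GJ per cubic metre

ratio = (OIL_GJ_PER_M3 * 1_000) / GAS_MJ_PER_M3   # convert GJ to MJ
print(ratio)  # 1000.0 -- oil carries ~1,000x more energy per unit volume
```

This three-orders-of-magnitude disparity is what makes gas so much costlier to move and store than liquid fuels.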
Many comments on energy futures (ranging from catastrophic predictions of an imminent oil drought to unrealistic forecasts of future biomass energy uses) betray the fundamental lack of understanding of the nature and dynamics of the global energy system. The three key facts for this understanding are these: We are an overwhelmingly fossil-fueled civilization; given the slow pace of major resource substitutions, there are no practical ways to change this reality for decades to come; high prices have concentrated worldwide attention on the availability and security of the oil supply, but coal and natural gas combined provide more energy than do liquid fuels.
As coal’s relative importance declined, its absolute production grew roughly sevenfold between 1900 and 2005, to more than 4 billion t of bituminous coal and nearly 900 million t of lignites. Yet this prodigious production was only about 0.6% of that year’s fuel reserves, the share of mineral resources in the Earth’s crust that can be recovered profitably with existing extraction techniques. This means that coal’s global reserves/production (R/P) ratio is nearly 160 years, and new extraction techniques could expand the reserves. Coal production is thus constrained not by resources but by demand, and by old (particulate matter and sulfur and nitrogen oxides) and new (emissions of CO2, the leading anthropogenic greenhouse gas) environmental considerations.
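The R/P figure can be recovered from the numbers above (a rough check; the text’s “nearly 160 years” reflects rounding of the ~0.6% share):

```python
# Reserves/production (R/P) ratio implied by the production figures above.
production_gt = 4.0 + 0.9     # 2005 output: bituminous coal + lignites, Gt
share_of_reserves = 0.006     # that output was ~0.6% of coal reserves
reserves_gt = production_gt / share_of_reserves
rp_years = reserves_gt / production_gt
print(round(rp_years))  # ~167 years at the 2005 extraction rate
```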
Natural gas produces the least amount of CO2 per unit of energy (less than 14 kg C/GJ, compared to about 25 kg C/GJ for bituminous coal and 19 kg C/GJ for refined fuels); hence it is the most desirable fossil fuel in a world concerned about global warming. But until recently long-distance pipeline exports were limited to North America; Russian supplies of Siberian gas to Europe; and pipelines from Algeria and Libya to Spain and Italy. Rising prices have led to new plans for long-distance pipelines and to renewed interest in liquefied natural gas shipments. As with coal, the fuel’s resources are abundant, and in 2005 the reserves (at 180 Tm3) were twice as large as in 1985.
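A quick comparison using the carbon intensities quoted above shows why gas is the preferred fossil fuel in a warming-conscious world:

```python
# Carbon intensities (kg C per GJ) as given in the text.
carbon_intensity = {"natural gas": 14, "refined oil": 19, "bituminous coal": 25}

# Fraction of carbon avoided by burning gas instead of coal for the same heat:
saving = 1 - carbon_intensity["natural gas"] / carbon_intensity["bituminous coal"]
print(f"{saving:.0%}")  # 44% less carbon per unit of energy
```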
A small army of experts has disseminated an alarmist notion of imminent global oil exhaustion followed by economic implosion, massive unemployment, breadlines, homelessness, and the catastrophic end of industrial civilization (Ivanhoe 1995; Campbell 1997; Laherrère 1997; Deffeyes 2001). Their alarmist arguments mix incontestable facts with caricatures of complex realities, and they exclude anything that does not fit preconceived conclusions in order to issue obituaries of modern civilization.
Their conclusions are based on a lack of nuanced understanding of the human quest for energy. They disregard the role of prices, historical perspectives, and human inventiveness and adaptability. Their interpretations are anathema to any critical, balanced scientific evaluation, but, precisely for that reason, they attract mass media attention. These predictions are just the latest installments in a long history of failed forecasts but their advocates argue that this time the circumstances are really different and the forecasts will not fail. In order to believe that, one has to ignore a multitude of facts and possibilities that readily counteract their claims. And, most important, there is no reason that even an early peak to global oil production should trigger any catastrophic events.
The modern tradition of concerns about an impending decline in resource extraction began in 1865 with William Stanley Jevons, a leading economist of the Victorian era, who concluded that falling coal output must spell the end of Britain’s national greatness because it is “of course . . . useless to think of substituting any other kind of fuel for coal” (Jevons 1865, 183). Substitute oil for coal in that sentence, and you have the erroneous foundations of the present doomsday sentiments about oil. There is no need to elaborate on how wrong Jevons was. The Jevonsian view was reintroduced by Hubbert (1969) with his “correct timing” of U.S. oil production, leading those who foresaw an early end to oil reserves to consider Hubbert’s Gaussian exhaustion curve with the reverence reserved by Biblical fundamentalists for Genesis.
In reality, the Hubbert model is simplistic, based on rigidly predetermined reserves, and ignoring any innovative advances or price shifts. Not surprisingly, it has repeatedly failed (fig. 3.3). Hubbert himself put the peak of global oil extraction between 1993 and 2000. The Workshop on Alternative Energy Strategies (WAES 1977) forecast the peak as early as 1990 and most likely between 1994 and 1997; the CIA (1979) believed that global output must fall within a decade; BP (1979) predicted world production would peak in 1985 and total output in the year 2000 would be nearly 25% below that maximum (global oil output in 2000 was actually nearly 25% above the 1985 level).
Some peak-of-oil proponents have already seen their forecasts fail. Campbell’s first peak was to be in 1989, Ivanhoe’s in 2000, Deffeyes’s in 2003 and then, with ridiculous specificity, on Thanksgiving Day 2005. But the authors of these failed predictions would argue that this makes no difference because oil reserves will inevitably be exhausted in a matter of years. They are convinced that exploratory drilling has already discovered some 95% of oil that was originally present in the Earth’s crust and that nothing can be done to avoid a bidding war for the remaining oil.
True, there is an unfortunate absence of rigorous international standards in reporting oil reserves, and many official totals have been politically motivated, with national figures that either do not change at all from year to year or take sudden suspicious jumps. But this uncertainty leaves room for both under- and overestimates. Until the sedimentary basins of the entire world (including deep offshore regions) are explored with an intensity matching that of North America and the U.S. sector of the Gulf of Mexico, I see no reason to prefer the most conservative estimate of ultimately recoverable conventional oil (no more than 1.8 trillion barrels) over the substantially higher totals favored by other geologists, though not necessarily the highest values estimated by the U.S. Geological Survey, whose latest maximum is in excess of 4 trillion barrels.
Even if the amount of the world’s ultimately recoverable oil resources were perfectly known, the global oil production curve could not be determined without knowing future oil demand. We have no such understanding because that demand will be shaped, as in the past, by shifting prices and unpredictable technical advances. Who would have predicted in 1930 a new huge market for kerosene, created by commercial jets by 1960, or in 1970 that the performance of an average U.S. car would double by 1985? As Adelman (1992, 7-8), who spent most of his career as a mineral economist at MIT, put it, “Finite resources is an empty slogan; only marginal cost matters.”
Steeply rising oil prices would not lead to unchecked bidding for the remaining oil but would accelerate a shift to other energy sources. This lesson was learned painfully by OPEC after oil prices rose to nearly $40/bbl in 1981, and it led Sheikh Ahmed Zaki Yamani (2000), the Saudi oil minister from 1962 to 1986, to conclude that high prices would only hasten the day when the organization would be left with untouched fuel reserves because new efficient techniques would reduce the demand for transport fuels and leave much of the Middle East’s oil in the ground forever.
And yet, as noted, price feedbacks are inexplicably missing from all accounts of coming oil depletion and its supposedly catastrophic consequences. Instead, there is an assumption of demand immune to any external factors. In reality, rising prices do trigger powerful adjustments. Between 1973 and 1985 the U.S. CAFE (Corporate Average Fuel Economy) standard was doubled to 27.5 mpg, but further improvements were not pursued, largely because of falling oil prices. A mere resumption of that rate of improvement (technically easy to do) would have automobiles averaging 40 mpg by 2015, and a more aggressive adoption of hybrids could bring the rate to 50 mpg, more than halving the current U.S. need for automotive fuel and sending oil prices into a tailspin.
And although oil prices are still relatively low (adjusted for inflation and lower oil intensity of modern economies, even $100/bbl is at least 25% below the 1981 peak), they have already reinvigorated the quest for tapping massive deposits of nonconventional oil as well as the development of new gas fields aimed at converting the previously “stranded” reserves into a massively traded global commodity (liquefied natural gas). Technical advances will also make possible the conversion of that gas (and coal) into liquids, and increasing recoveries of coalbed methane and extraction of methane from hydrates will supply more hydrocarbons. But even if the global extraction of conventional crude oil were to peak within the next two decades, this would not mean any inevitable peak of overall global oil production, and even less so the end of the oil era, because very large volumes of the fuel from traditional and nonconventional sources would remain on the world market during the first half of the twenty-first century.
As oil becomes dearer, we will use it more selectively and efficiently and intensify the shift from oil to natural gas and to renewable and nuclear alternatives. Finally, it must be stressed that fossil fuels will retreat only slowly because the dominant energy converters depend on their supply. The evolution of modern energy systems has shown a great deal of inertia following the epochal commercial introduction of new prime movers. All those overenthusiastic, uncritical promoters of new energy techniques would do well to consider five fundamental realities.
First, the steam turbine, the most important continuously working high-load prime mover of the modern world, was invented by Charles Parsons 120 years ago, and it remains fundamentally unchanged; gradual advances in metallurgy have simply made it larger and more efficient. These large (up to 1.5 GW) machines now generate more than 70% of our electricity in fossil-fueled and nuclear stations; the rest comes from gas and water turbines and from diesels.
Second, the gasoline-fueled internal combustion engine, the most important transportation prime mover of the modern world, was first deployed (based on older stationary models) during the same decade as the Parsons machine, and it reached a remarkable maturity in a single generation after its introduction.
Third, Diesel’s inherently more efficient machine followed shortly after the Benz-Daimler-Maybach design, and it matured almost as rapidly. As I explain later, it is entirely unrealistic to expect that we could substitute most of the gasoline or diesel fuel by fuels derived from biomass within a few decades.
Fourth, the gas turbine, the most important prime mover of modern flight, is now entering the fourth generation of service after a remarkably fast progression from Frank Whittle’s and Pabst von Ohain’s conceptual designs to high-bypass turbofans (Smil 2006). Again, conversion of biomass could not supply an alternative aircraft fuel at the requisite scale for decades (even if that conversion were profitable).
Fifth, Nikola Tesla’s induction electric motor, commercialized during the late 1880s, diffused rapidly to become the dominant prime mover of industrial production as well as of domestic comfort and entertainment. Renewable conversions should eventually be capable of supplying the needed electricity for these motors by distributed generation.
There are five major reasons that the transition from fossil to nonfossil supply will be much more difficult than is commonly realized: scale of the shift; lower energy density of replacement fuels; substantially lower power density of renewable energy extraction; intermittence of renewable flows; and uneven distribution of renewable energy resources.
The scale of the transition is perhaps best illustrated by comparing it to the epochal shift from biomass to fossil fuels. By the late 1890s, when phytomass slipped below 50% of the world’s total primary energy supply (TPES), coal (and a small amount of oil) was consumed at the rate of 600 GW, whereas in 2005 the world used fossil energies at a rate of 12 TW, a 20-fold difference. Of course, phytomass was never totally displaced. During the twentieth century its use (now mainly in poor countries) roughly doubled, but it now provides only about 10% of global TPES.
The magnitude of the needed substitution also runs into some important resource restrictions. At 122 PW, the enormous flux of solar radiation reaching the Earth’s surface is nearly 4 orders of magnitude (OM) greater than the world’s TPES of nearly 13 TW in 2005 (fig. 3.4). But this is the only renewable flux convertible to electricity that is considerably larger than the current TPES; no other renewable energy resource can provide more than 10 TW. Generous estimates of technically feasible maxima are less than 10 TW for wind, less than 5 TW for ocean waves, less than 2 TW for hydroelectricity, and less than 1 TW for geothermal and tidal energy and for ocean currents. Moreover, the actual economically and environmentally acceptable rates may be only small fractions of these technically feasible totals.
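The gap between the solar flux and global energy use, expressed in orders of magnitude, can be verified in one line (122 PW and ~13 TW are the figures in the text):

```python
import math

SOLAR_FLUX_W = 122e15   # solar radiation reaching the Earth's surface: 122 PW
TPES_2005_W = 13e12     # world total primary energy supply in 2005: ~13 TW

om_gap = math.log10(SOLAR_FLUX_W / TPES_2005_W)
print(round(om_gap, 2))  # ~3.97 -- nearly 4 orders of magnitude
```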
The same disparity applies to the production of phytomass, which yields solid fuels directly and can be converted to liquids and gases. The Earth’s net primary productivity (NPP) of 55-60 TW is more than four times the global TPES of 2005. Its mass is dominated by the production of woody tissues (boles, branches, bark, roots) in tropical and temperate forests, and recent proposals of massive biomass energy schemes are among the most regrettable examples of wishful thinking and ignorance of ecosystemic realities and necessities. Their proponents are either unaware of, or deliberately ignore, some fundamental findings of modern biospheric studies.
First, as the Millennium Ecosystem Assessment (2005) demonstrated, essential ecosystemic services (without which there can be no viable economies) have already been modified, reduced, and compromised to a worrisome degree, and any massive, intensive monocultural plantings of energy crops could only accelerate their decline. Second, humans already appropriate 30%-40% of all NPP as food, feed, fiber, and fuel, with wood and crop residues supplying about 10% of TPES (Rojstaczer, Sterling, and Moore 2001). Moreover, the highly unequal distribution of human NPP use means that phytomass appropriation ratios are more than 60% in East Asia and more than 70% in Western Europe (Imhoff et al. 2004).
Claims that simple and cost-effective biomass could provide 50% of the world’s TPES by 2050 or that 1-2 Gt of crop residues can be burned every year (Breeze 2004) would put the human appropriation of phytomass close to or above 50% of terrestrial photosynthesis. This would further reduce the phytomass available for microbes and wild heterotrophs, eliminate or irreparably weaken many ecosystemic services, and reduce the recycling of organic matter in agriculture. Finally, nitrogen is almost always the critical growth-limiting macronutrient in intensively cultivated agroecosystems and in silviculture. Mass production of phytomass for conversion to liquid fuels, gases, or electricity would necessitate a substantial increase in continuous applications of this element (Smil 2001). Proponents of massive bioenergy schemes appear to be unaware of the fact that human interference in the global nitrogen cycle has already vastly surpassed anthropogenic changes in the carbon cycle (see chapter 4).
The transition to fossil fuels introduced fuels with superior energy densities, but the coming shift will move us in the opposite, less desirable direction. Ordinary bituminous coal (20-23 GJ/t) contains 30%-50% more energy than air-dried wood (15-17 GJ/t); the best hard coals (29-30 GJ/t) are nearly twice as energy-dense as wood; and liquid fuels refined from crude oil (42-44 GJ/t) have nearly three times the energy density of wood. Now we face the reverse challenge: replacing crude oil-derived fuels with less energy-dense biofuels. Moreover, this transition would also require 1,000-fold and often 10,000-fold larger areas under crops than the land claimed by oil field infrastructures, and shifting from coal-fired to wind-generated electricity would require at best 10 times and often 100 times more space (fig. 3.5) (Smil 2008). In order to energize the existing residential, industrial, and transportation infrastructures inherited from the fossil-fueled era, a solar-based society would have to concentrate diffuse flows to bridge power density gaps of 2-3 OM (Smil 2003).
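Taking midpoints of the density ranges above (an illustrative simplification), the stated ratios check out:

```python
# Gravimetric energy densities (GJ/t); midpoints of the ranges in the text.
wood = 16.0         # air-dried wood: 15-17 GJ/t
bituminous = 21.5   # ordinary bituminous coal: 20-23 GJ/t
hard_coal = 29.5    # best hard coals: 29-30 GJ/t
liquids = 43.0      # refined liquid fuels: 42-44 GJ/t

print(bituminous / wood)  # ~1.34: coal holds 30%-50% more energy than wood
print(hard_coal / wood)   # ~1.84: nearly twice as energy-dense as wood
print(liquids / wood)     # ~2.69: nearly three times the density of wood
```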
The mismatch between the inherently low power densities of renewable energy flows and the relatively high power densities of modern final energy uses means that a solar-based system would require profound spatial restructuring, with major environmental and socioeconomic consequences. Direct solar radiation is the only renewable energy flux available with power densities on the order of 10^2 W/m2 (global mean ~170 W/m2), which means that increasing conversion efficiencies (above all, better photovoltaics) could harness it at effective densities of several times 10^1 W/m2 (the best all-day rates in 2005 were on the order of 30 W/m2). All other renewables have low (<10 W/m2) or very low (<1 W/m2) production power densities. Low extraction power densities would be the greatest challenge in producing liquid fuels from phytomass. Even the most productive fuel crops or tree plantations have gross yields of less than 1 W/m2, and subsequent conversions to electricity and liquid fuels prorate to less than 0.5 W/m2.
During the first years of the twenty-first century, global consumption of gasoline and diesel fuel in land and marine transport and kerosene in flying was about 75 EJ. Even if the most productive solar alternative (Brazilian ethanol from sugar cane at 0.45 W/m2) could be replicated throughout the tropics, the aggregate land requirements for producing transportation ethanol would reach about 550 Mha, slightly more than one-third of the world’s cultivated land, or nearly all the agricultural land in the tropics. There is no need to comment on what this would mean for global food production. Consequently, global transportation fuel demand cannot be met by even the most productive alcohol fermentation. Corn ethanol’s power density of 0.22 W/m2 means that satisfying the U.S. demand for liquid transportation fuel would require about 390 Mha, slightly more than twice the country’s entire cultivated area.
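The sugar-cane arithmetic can be reproduced directly; the 75 EJ annual demand and the 0.45 W/m2 yield are the figures above, and the slight shortfall against the quoted "about 550 Mha" reflects rounding:

```python
SECONDS_PER_YEAR = 3.156e7            # ~365.25 days
DEMAND_J = 75e18                      # transport fuels: ~75 EJ per year
CANE_ETHANOL_W_M2 = 0.45              # Brazilian sugar-cane ethanol yield

mean_power_w = DEMAND_J / SECONDS_PER_YEAR          # ~2.4 TW of continuous demand
area_mha = mean_power_w / CANE_ETHANOL_W_M2 / 1e10  # 1 Mha = 1e10 m2
print(round(area_mha))  # ~530 Mha of cane plantations
```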
The prospect does not change radically by using crop residues to produce cellulosic ethanol (from cereal straw, crop stalks, and leaves). Only a part of these residues could be removed from fields so as to maintain the key ecosystemic services of recycling organic matter and nitrogen, retaining moisture, and preventing soil erosion (Smil 1999). Moreover, even large efficiency improvements in alcohol fermentation or car performance would not make up for the inherently low power densities of cropping. The U.S. transportation sector, three times more efficient than it was in 2000, would still claim some 75% of the country’s farmland if it were to run solely on ethanol produced at rates prevailing in 2005.
The intermittence of renewable energy flows poses a particularly great challenge to any conversion system aimed at the steady, reliable supply of energy required by modern industrial, commercial, and residential infrastructures. Solar radiation, wind, waves, and plant harvests fluctuate daily or seasonally, yet the base load’s share of the maximum power load has been increasing in all affluent societies. Easily storable, high-energy-density fossil fuels and thermal electricity generating stations capable of operating with high load factors (>75% for coal-fired stations, >90% for nuclear plants) meet this need. By contrast, the two most important renewable flows seen as the key future generators of electricity, wind and direct solar radiation, are intermittent and far from perfectly predictable. Photovoltaic generation is still so negligible that it is impossible to offer any meaningful averages, but annual load factors of wind generation in countries with relatively large capacities (Denmark, Germany, Spain) are just 20%-25%.
Much is made of the uneven distribution of fossil fuels, particularly of the hydrocarbon anomaly of the Persian Gulf Zagros Basin, which contains nearly two-thirds of the world’s known oil reserves. But renewable flows are also highly unevenly distributed. The equatorial zone has much reduced direct solar radiation: the peak midday flux in Jakarta (6°S) is no different from the summer fluxes in Canada’s sub-Arctic Edmonton (nearly 55°N). Large areas of all continents are not sufficiently windy or are seasonally too windy: think of large turbines on top of tall towers in any areas with strong cyclonic flows, or what another Katrina hurricane would do to the Gulf of Mexico wind farms. Sites with the best potential for geothermal, tidal, or ocean energy conversions are even less common. Some densely populated regions have low potential for both solar and wind conversions, for instance, Sichuan, China’s most populous province. Also, many windy or sunny sites are far from major load centers, and their exploitation will require entirely new extra-high-voltage lines (from southern Algeria to Europe or from North Dakota to New York).
Another myth concerning renewable conversions is the expectation of a near-automatic decline in their cost with increasing volume. This trend is common for most techniques undergoing commercialization, but it is not inevitable; if it were, we would already have had inexpensive electric cars for decades. The costs of many renewable techniques have actually been increasing. Photovoltaic silicon prices have more than doubled, and the costs of structural steel, aluminum, and plastics for wind turbines and of ethanol fermentation from corn have risen because all these techniques depend on large inputs of more costly fossil energies.
For all these reasons (and because of other nontrivial technical problems) the global transition to a nonfossil fuel world will be gradual and uneven, driven as much by external political and strategic considerations as by autonomous technical advances. The entire process could be speeded up by aggressively pushing nuclear electricity generation. A general agreement among energy experts is that nuclear electricity must have a significant place in a nonfossil fuel world. This conclusion is based on the high capacities of nuclear power plants and on the high load factor of fission-powered generation.
Nuclear plants have been among the largest electricity generation facilities for some three decades. The largest stations surpass 5 GW, the largest turbogenerators are about 1.5 GW, yet new reactor designs would make units as small as a few hundred megawatts commercially profitable. By contrast, typical large wind turbines are rated below 5 MW, and by 2005 the largest PV assemblies were on the order of 4-6 MW of peak power. Well-run nuclear power plants can operate 95% of the time, compared to typical rates of 65%-75% for coal-fired stations, 40%-60% for hydrogeneration, and 25% for wind turbines. A reliable, predictable, high-capacity, high-load mode of electricity generation would be a perfect complement to various renewable conversions that share the attributes of low-capacity, moderate-load, and often unpredictable intermittent operation.
Concerns about global warming have made the pronuclear sentiment much more widespread than it was a generation ago. Some new advocates now include prominent former adversaries, including Patrick Moore, the co-founder of Greenpeace, and James Lovelock, the originator of Gaia theory (Moore 2006; Lovelock 2006). Both now believe that nuclear generation may be the best choice to save the Earth from a catastrophic climate change. Nevertheless, the realities of large-scale nuclear expansion are otherwise. The industry still suffers because of its rushed post-1945 development, and public acceptance of nuclear generation and the final disposal of radioactive wastes remain the key obstacles to massive expansion. No other mode of primary electricity production was commercialized as rapidly as the first generation of water reactors (most of them operating with a pressurized water loop, hence pressurized water reactors, PWRs). Only some 25 years elapsed between the first sustained chain reaction, which took place at the University of Chicago on December 2, 1942, and the exponential rise in orders for new nuclear power plants after 1965.
Consequently, it was widely expected that by the year 2000 worldwide electricity generation would be dominated by inexpensive fission. Instead, the industry experienced stagnation and retreat. In retrospect, it is clear that the industry was far too rushed and that too little weight was given to public acceptance of fission (Cowan 1990). The economics of fission generation has been always arguable because the accounts have excluded both the enormous government subsidies for nuclear R&D and the costs of decommissioning the plants and safely storing radioactive wastes permanently. Accidental core meltdown and the release of radioactivity during the Chernobyl disaster in Ukraine in May 1986 made matters even worse (Hawkes 1987). Although the Western PWRs with their containment vessels and tighter operating procedures could not experience such a massive release of radiation, the Ukrainian accident only reinforced the common public perception that all nuclear power plants are inherently unsafe.
Another problem to overcome is the final disposal of a small volume of highly radioactive wastes that must be sequestered for thousands of years. Ancient stable rock formations provide such repositories, but public distrust of these plans, objections to chosen sites, and bureaucratic procrastination have prevented the activation of any of these sites. To this must be added the serial failure of fast breeder reactors that were designed to use limited supplies of fissionable fuel more efficiently. During the 1970s it was widely predicted that by the year 2000 they would dominate global electricity generation. Instead, the three major national programs were soon abruptly shut down, first in the United States (1983), then in France (1990), and finally in Japan (1995).
If there is widespread expert agreement regarding the desirability of a major nuclear contribution to the energy picture, there is also clear consensus that any new major wave of reactor building must be based on new designs. There is no shortage of these new, more efficient, more reliable, and safer designs, including reactors that could be buried entirely underground and that would not have to be refueled during their entire operating life. Edward Teller, one of the great pioneers of the nuclear era, detailed the technical parameters of this ingenious solution (Teller et al. 1996). But the likelihood of their early large-scale commercialization is very low, and longer-term prospects remain highly uncertain.
And it is also extremely unlikely that nuclear fusion can be a part of an early (before 2050) or indeed any solution. The engineering challenges of a viable plant design (heat removal, size and radiation damage to the containment vessel, maintenance of vacuum integrity) mean that the technique has virtually no chance to make any substantial contribution to the global primary energy supply of the next 50 years (Parkins 2006). Yet this fata morgana of energy techniques keeps receiving enormous amounts of taxpayer money; U.S. spending on fusion has averaged about a quarter billion dollars per year for the past 50 years.
A miracle of a new generation of inexpensive, safe, and reliable fission reactors (or significant breakthroughs in efficiency and cost of photovoltaics) would provide an essential foundation for a transition to a hydrogen-based energy system, but even then its realization would be a protracted affair. Undeniably, energy transitions have been steadily decarbonizing the global supply as average atomic H/C ratios rose from 0.1 for wood, to 1 for coal, 2 for crude oil, and 4 for methane. As a result, the logistic growth process points to a methane-dominated global economy after 2030, but a hydrogen-dominated economy, requiring production of large volumes of the gas without fossil energy, could take shape only during the closing decades of the twenty-first century (fig. 3.6) (Ausubel 1996).
I agree with those who say that hopes for an early reliance on hydrogen are just hopes (Mazza and Hammerschlag 2004). There is no inexpensive way to produce this high energy density carrier and no realistic prospect for the hydrogen economy to materialize for decades (Service 2004). In any case, a methanol economy may be a better, although also very uncertain, alternative (Olah et al. 2006). And there will be no rapid massive adoption of fuel cell vehicles because they do not offer a major efficiency advantage over hybrid cars in city driving (Demirdöven and Deutsch 2004).
Three key factors drove the transition to fossil fuels: declining resource availability (deforestation), the higher quality of fossil fuels (higher energy density, easier storage, greater flexibility), and the lower cost of coals and hydrocarbons. The coming transition will be entirely different. There is no urgency for an accelerated shift to a nonfossil fuel world: the supply of fossil fuels is adequate for generations to come; new energies are not qualitatively superior; and their production will not be substantially cheaper. The plea for an accelerated transition to nonfossil fuels results almost entirely from concerns about global climate change, but we still cannot quantify its magnitude and impact with high confidence.
Given the complications outlined in this section, the coming energy transition will be even more challenging than were past shifts. Wishful thinking is no substitute for recognizing the extraordinary difficulty of the task. A nonfossil fuel world may be highly desirable, and determination, commitment, and persistence could accelerate its arrival, but the transition would be difficult and prolonged even if it were not complicated by specific national conditions and trends creating a new constellation of world power.
Using the term new world order, judging if rise or retreat best describes a nation’s (or a continent’s or a religion’s) recent past and imminent future, and assessing a global power ranking to conclude who is “on top” (as I do here)—such analysis is both useful and problematic. Useful because it offers effective shortcuts for appraising the dynamics of the global power system. Unfilled niches are as rare in ecosystems as they are in the now undoubtedly global quest for influence, affluence, and power. Problematic in that any such ranking must be suspect because all the “data” have complex, multiattribute dimensions and eventual outcomes do not always conform to the expectations of a zero-sum game. Rising global interdependence on resources (mineral or intellectual) and a shared dependence on the biosphere’s habitability preclude that.
Consequently, the following appraisals respect the multifaceted nature of rising or falling national fortunes, do not attempt quantitative international comparisons, do not aim at definite rankings, and do not suggest time frames. Multifactorial reviews of complex realities may inform us clearly enough about a nation’s relative trajectory, imminent prospects, or comparative position vis-à-vis its principal competitors, but they do not suffice to predict its overall standing after decades of dynamic global contest. As for relative trajectories, how can there be any doubt about China’s post-1980 rise?
The post-WW II history of the world’s four largest economies provides many illustrations of these complex and elusive realities and profoundly changing trends. I have already noted the curiously taken-for-granted demise of the Soviet Union. As for China, less than four years after Mao Zedong’s death in September 1976, Deng Xiaoping, his old revisionist comrade, launched the modern world’s most far-reaching national reversal. He has transmuted the country, stranded for two generations in the role of an autarkic Stalinist underperformer capable of providing only basic subsistence to its people, into a global manufacturing superpower closely integrated into the international economy.
At the same time, Japan, the world’s most dynamic large economy of the 1960s, 1970s, and 1980s, widely predicted to become the world’s leading economy by the beginning of the twenty-first century, has suddenly lost its momentum and has spent nearly a generation in retreat and stagnation.
U.S. economic and strategic fortunes seemed rather bleak during most of the 1980s, but during the 1990s the country recovered in a number of remarkable ways. This rebound was short-lived, however, and was followed by worrisome fiscal and structural reversals accompanied by unprecedented strategic, military, and political challenges.
I argue that two key trends, China’s rise and U.S. retreat, will continue during the coming generation, but I also caution against overestimating the eventual magnitude of the Chinese ascent or the speed of the U.S. decline. At the same time, I do not see how the united Europe or a reenergized Japan can regain their former dynamism. Russia, despite its many problems, is reemerging as a more influential power than it appeared to be after the unraveling of the USSR. Certainly the most difficult challenge is to appraise the likely influence of Islam on world history in the next 50 years: feelings of despair, fear of violence, and expectations of continued failure abound, but a number of factors suggest possibilities of better trends.
I am sure I will be criticized for not devoting a separate account to India in this brief examination of the “new world order,” but I defend this decision and comment on the country’s key strengths and weaknesses in the closing section of this chapter, where I survey the modern history of global leadership. Who is on top matters in many tangible and intangible ways, and the next two generations will almost surely see an epochal shift, the end of the pattern that has dominated world history since WW II. India will definitely be a part of this rearrangement. I also comment on increasing intra- and international inequality, which I see as the most important long-term destabilizing consequence of globalization.
Several recent publications have been quite euphoric about Europe’s prospects, leaving little room for doubts about the continent’s future trajectory. The director of foreign policy at the Centre for European Reform predicts, astonishingly, that Europe will economically dominate the twenty-first century (Leonard 2004). The former London bureau chief of the Washington Post maintains that the rise of the United States of Europe will end U.S. supremacy (Reid 2004). And Rifkin (2004) is impressed by the continent’s high economic productivity, the grand visions of its leaders, their risk-sensitive policies and reassuring secularism, and the ample leisure and high quality of life provided by caring social democracies.
Such writings make me wonder whether the authors ever perused the continent’s statistical yearbooks, read the letters to editors in more than one language, checked public opinion polls, walked through the postindustrial wastelands and ghettos of Birmingham, Rotterdam, or Milan, or simply tried to live as ordinary Europeans do. A perspective offered by the author, a skeptical European who understands the continent’s major languages, who has lived and earned money on other continents, and who has studied other societies should provide a more realistic appraisal. Of course, this does not give me any automatic advantage in appraising Europe’s place, but it makes me less susceptible to Euro-hubris and gives me the necessary Abstand to offer more realistic judgments.
Russia, too, is part of my Europe. Arguments about Russia’s place in (or outside of) Europe have been going on for centuries (Whittaker 2003; McCaffray and Melancon 2005); I have never understood the Western reluctance or the Russian hesitancy to place the country unequivocally in Europe. Of course, Russia has an unmistakable Asian overlay—there must be a transition zone in such a large land mass, and centuries of occupation by and dealing with the expansive eastern nomads had to leave their mark—but its history, music, literature, engineering, and science make it quintessentially European. On the other hand, its size, resources, and past strategic posture make it a unique national entity, and there is a very low probability that Russia will be integrated into Europe’s still expanding union during the coming generation. For these reasons, I deal with Russia’s prospects in a separate section.
The argument about Europe being the leading economy of the twenty-first century is inexplicably far off the mark. The reality, illustrated by Maddison’s (2001) millennial reconstruction of Western Europe’s GDP and population shares, shows an unmistakable post-1500 ascent that culminates during the nineteenth century and is followed by a gradual descent that is likely to accelerate during the coming decades (fig. 3.7). In 1900, Europe (excluding Russia) accounted for roughly 40% of global economic product; 100 years later it produced less than 25% of global output, and by 2050, depending above all on growth in the GDPs of China and India, its share of global economic product may be as low as 10%. By 2050, Europe’s share of global economic product may be lower than it was before the onset of industrialization, hardly a trend leading toward global economic dominance. In addition, the continent has no coherent foreign policy or effective military capability. As Zielonka (2006) argues, the current European Union is simply too large and unwieldy to ever act like a state; rather than a coherent actor on the international scene, he sees a “maze Europe.”
The European Union’s member states do not see eye to eye even on a major issue whose excesses and burdens simply cannot continue: the bloated, trade-distorting agricultural subsidies that have been swallowing about 40% of the EU’s annual budget. Europeans, so eager to sermonize about their superior economic and social policies and higher moral standards, and so ready to voice anti-Americanism, could do with some introspection in all of these respects. Europe’s labor productivity and ample leisure time have been bought by mass (and in some countries, persistent) unemployment, roughly twice the U.S. rate for the entire work force (~10% vs. ~5%). In some parts of the continent more than 25% of people younger than 24 years are jobless, and in 2005 the peaks were above 50% in three regions in Italy, two in France, and Poland (Eurostat 2005b).
Just two months after the self-congratulatory comments on the United States’ ineptitude in dealing with Hurricane Katrina (“something like that could never happen here”), banlieues were burning all across France, the flames of thousands of torched cars and properties illuminating segregation, deprivation, and neglect no less deplorable than the reality of New Orleans’ underclass exposed by Katrina’s breached levees. (The torchings of about 100 cars a day have continued ever since.) As for the EU’s morally superior risk-sensitive policies (as opposed to what Europeans see as unsophisticated, brutal, and blundering U.S. ways), they allowed EU member states to watch the slaughter of tens of thousands of people and the displacement of millions of refugees in the Balkan wars of the 1990s. Only the U.S. interventions, on behalf of Muslim Bosniaks and Kosovars, prevented more deaths (Cushman and Meštrovic 1996; Wayne 1997).
As for the grand visions of professional politicians, they have been rejected in a referendum even by France, the European Union’s pivotal founding nation. Many managers as well as ordinary citizens would characterize the EU’s modus operandi as bureaucratic paralysis instead of caring social democracy. Additionally, and sadly, the continent’s ever-present anti-Semitism is undeniably resurgent (requiring the repeated assurances of political leaders that it is not); it surfaced in some stunningly direct comments during the Israeli-Hizbullah war of August 2006. Finally, a multitude of national problems with European integration will not go away.
The presence of a supranational entity like the EU has the effect of weakening ancient national entities. Spain has its Euskadi (Basque) and Catalunyan (and Galician) aspirations. Combined challenges from devolution (Scotland, Wales, Northern Ireland), European integration, multiculturalism in general, and large Muslim populations add up to a trend that may see the end of Britain (Kim 2005).
Germany, France, and Italy, the continent’s three largest nations whose unity is not threatened by serious separatist movements (episodic Corsican violence or Lega Nord are not going to dismember France or Italy), have their own deep-seated challenges. The German and French economies underperformed during the last two decades of the twentieth century. In the German case there was the partial excuse (or terrible miscalculation) of the economic cost of absorbing the former East Germany (DDR) (Bentele and Rosner 1997; Zimmer 1997; Larres 2001; Bollmann 2002). The French situation is a textbook illustration of failures arising from an overreaching dirigisme, economic planning and control by the state. Neither country offers a model for a dynamic reinvention of Europe during the coming two generations.
As for Italy, the EU’s third-largest economy, I can do no better than to quote an acute observer of his patria (Severgnini 2005, 77-78): “Life in Italy is so pleasant it becomes narcotic. . . . Italy, it would seem, suffers from a ‘squirrel syndrome’: every-body finds a comfortable hole, and hunkers down. The problem is that there are so many holes in the national tree that it may topple unless something is done soon.”
But this comfort cannot last. Italy’s economy suffers due to the combined impact of rapid aging (rivaled only by Japan), precipitous destruction of traditional small- and mid-scale artisanal manufacturing by Chinese imports, and growing numbers of immigrants. The deep economic and cultural breach between the Nord and Centro on one hand, and Mezzogiorno on the other, has not diminished. Decades of massive (and largely wasteful) investment have not lowered chronic unemployment. And the Mafioso culture of violence and corruption still operates (Gambetta 1992; Cottino 1998 and 1999; P. Schneider and J. Schneider 2003).
Even if one were inclined to see the post-WW II path of Western Europe as an undisputed success story and assumed that recent economic problems could be addressed fairly rapidly by suitable reforms, there is one inescapable factor that will determine the future of Europe’s most affluent western and central parts and poorer eastern regions: a shrinking and aging population. After many generations of very slow demographic transition (Gillis, Tilly, and Levine 1992), Western Europe’s total fertility rates slid below replacement level (2.1 children per woman) by the mid-1970s. A generation later, by the mid-1990s, the total fertility level of EU-12 was below 1.5; the new members did nothing to lift it: by 2005 the average fertility of EU-25 was 1.5 (Eurostat 2005b). Europe’s population implosion (Douglass 2005) now appears unstoppable.
Naturally, the reliability of long-term population projections declines as the projection’s final date advances, and a new trend of increased fertility cannot be categorically excluded. However, it is unlikely that it would last very long. The last notable regional rebound lifted the Nordic countries from an average of about 1.7 in 1985 to almost 2.0 by 1990, but by the end of the decade fertility was back to the mid-1980s level. More important, it is unlikely that a meaningful rebound can even begin once the rate has slipped below 1.5. That is the main argument advanced by Lutz, Skirbekk, and Testa (2005).
Once the total fertility rate reaches very low levels, three self-reinforcing mechanisms can take over and result in a downward spiral of future births that may be impossible to reverse. First, delayed childbirths and decades of low fertility shrink the base of the population pyramid and produce sequentially fewer and fewer children. Second, as fertility plummets, its norms will be even lower during subsequent generations, and even the perception of ideal family size (the number of children wished for under ideal conditions) shifts below the replacement level (already the case in Germany and Austria). Third, low fertilities, aging population, and shrinking labor force lead to cuts in social benefits, higher taxes, and lower expected income, working against higher fertility.
The economic consequences of population aging have been examined in considerable detail, and none of them can be contemplated with equanimity (Bosworth and Burtless 1998; England 2002; McMorrow 2004). Many of them are self-evident. Older populations reduce the tax base, and hence they lower average per capita state revenues and increase the average tax burden. Falling numbers of employed people push up the average dependence (pensioners/workers). Europe’s already high pensioner/worker ratios (Britain being the only exception among the continent’s largest economies) mean that old-age dependence ratios will typically double by 2050 (Bongaarts 2004). And in some countries most of this rise will happen during the next generation; for example, in 2001, Germany had 44 retirees (60+ years old) per 100 persons of working age, but the Federal Statistical Office (2003) predicts that the rate will jump to 71 by 2030.
As most countries finance the current retirement costs of their workers by current contributions from the existing labor force (pay-as-you-go arrangement), increasing retiree/worker ratios will bankrupt the entire system unless current contributions are sharply raised, pensions substantially cut, or both. Older workers may be more knowledgeable, but they still tend to be less productive because of physical or mental restrictions, higher disease morbidity, and a greater tendency toward workplace injuries and hence more frequent absenteeism. Higher dependence ratios and a higher share of very old people (>80 years old)—for example, every eighth person in Germany will be that old by 2050—will put unprecedented stresses on the cost and delivery of health care.
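The pay-as-you-go arithmetic described above can be made concrete with a minimal sketch. In a pure pay-as-you-go system, the contribution rate levied on workers equals the retiree/worker ratio multiplied by the replacement rate (the pension as a share of the average wage); the 50% replacement rate used here is a hypothetical round figure, not a number from the text.

```python
# Minimal sketch of pay-as-you-go pension arithmetic, using Germany's quoted
# old-age ratios: 44 retirees per 100 persons of working age in 2001,
# projected to reach 71 per 100 by 2030.
# Assumption (not from the text): pensions replace 50% of the average wage,
# and the working-age population approximates the contributing work force.

def contribution_rate(retirees_per_100_workers, replacement_rate=0.50):
    """Share of the average wage each worker must contribute to fund pensions."""
    return retirees_per_100_workers / 100 * replacement_rate

rate_2001 = contribution_rate(44)  # 0.22  -> 22% of wages
rate_2030 = contribution_rate(71)  # 0.355 -> 35.5% of wages

print(f"{rate_2001:.1%} -> {rate_2030:.1%}")
```

The point of the sketch is only that the required contribution rate scales linearly with the dependency ratio, so Germany’s projected rise from 44 to 71 retirees per 100 workers forces either a roughly 60% increase in contributions, a proportional cut in pensions, or some combination of the two.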
But as health care and pension expenditures rise, the average savings rates of the aging population will fall. This will affect capital formation, change the nature of the real estate market, and shift retail preferences for commodities ranging from food to cars. Despite the tightening labor market, many younger people may find their choice of jobs limited as some companies prefer to relocate their principal operations to areas with plentiful and cheap labor. Most new companies are started by individuals 25-44 years of age, and the shrinking share of this cohort will also mean less entrepreneurship and reduced innovation.
In addition to these consequences, which aging Europe will share with low-fertility societies in other parts of the world, the continent faces a specific problem whose resolution may crucially determine its economic and political future. As Demeny (2003) has noted, the process of moving toward a smaller and older population could be contemplated with equanimity only if Europe were an island, but instead “it has neighbors that follow their own peculiar demographic logic” (4). This neighborhood—Demeny calls it the European Union’s southern hinterland—includes 29 states (counting Palestine and Western Sahara as separate entities) between India’s western border and the Atlantic Ocean, all exclusively or predominantly Muslim (fig. 3.8).
By 2050, EU-25 is projected to have 449 million people (after losing some 10 million from the present level and an assumed net immigration of more than 35 million, 2005-2050), half of them older than 50 years. The population of its southern hinterland is projected to reach about 1.25 billion by 2050. Immigration to the continent from this hinterland is already the greatest in more than 1,000 years. During the previous period of mass incursions, intruders such as Goths, Huns, Vikings, Bulgars, and Magyars destroyed the antique order and reshaped Europe’s population. So far, the modern migration has been notable not for its absolute magnitude but for five special characteristics.
First, as is true for immigrants in general, the Muslim migrants are much younger than the recipient populations. Second, the migrants’ birth rate is appreciably (approximately three times) higher than the continent’s mean. Third, the immigrants are disproportionately concentrated in segregated neighborhoods in large cities: Rotterdam is nearly 50% Muslim; London’s Muslim population has surpassed 1 million, and Berlin has nearly 250,000 Muslims. Fourth, significant shares of these immigrants show little or no sign of second-generation assimilation into their host societies. A tragically emblematic illustration of this reality is that three of the four suicide bombers responsible for the July 7, 2005, attacks in London’s underground were British-born Pakistani Muslims. Fifth, whereas Christianity has become irrelevant to most Europeans, Islam is very relevant to millions of these immigrants.
Europe’s traditional ostracism has undoubtedly contributed to the lack of assimilation, but more important has been the active resistance by many of the Muslim immigrants—whose demands for transferring their norms to host countries range from segregated schooling and veiling of women to the recognition of sharī’a law. What would happen if this influx of largely Muslim immigrants were to increase to a level that would prevent declines in Europe’s working-age population? In many European countries, including Germany and Italy, these new Muslim immigrants and their descendants would then make up more than one-third of the total population by 2050 (United Nations 2000).
Given the continent’s record, such an influx would doom any chances for effective assimilation. The only way to avoid both massive Muslim immigration and the collapse of European welfare states would be to raise the retirement age—now as low as 56 (women) and 58 years in Italy and 60 years in France—to 75 years and to create impenetrable borders. The second action is impossible; the first one is (as yet) politically unthinkable. But even if a later retirement age were gradually adopted, mass immigration, legal and illegal, is unavoidable. The demographic push from the southern hinterland and the European Union’s economic pull produce an irresistible force.
Two dominant scenarios implied by this reality are mutually exclusive: either full integration of Muslim immigrants into European societies, or a continuing incompatibility of the two traditions that through demographic imperatives will lead to an eventual triumph of the Muslim one, if not continentwide, then at least in Spain, Italy, and France. I do not think that the possibility of a great hybridization, akin to the Islamo-Christian syncretism that prevailed during the earliest period of the Ottoman state (Lowry 2003), is at all likely. The continent’s Christians are now overwhelmingly too secular-minded to be partners in creating such a spiritual blend, and for too many Muslims, any dialogue with “nonbelievers” is heretical.
Other fundamental problems will prevent Europe from continuing to act as a global leader. Europe cannot act as a cohesive force as long as its internal divisions and disagreements remain as acute as they have been for the past three decades despite the continent’s advances toward economic and political unification. Yet the ruinous agricultural subsidies, national electorates alienated from remote bureaucracies, Brussels’s rule by directive, and inability to formulate common foreign policy and military strategy are, in the long run, secondary matters compared with the eventual course of the EU’s enlargement. Even an arbitrarily permanent exclusion of Russia from the EU leaves the challenge of dealing with the Balkans, Ukraine, and Turkey. The EU’s conflicting attitudes toward Turkey—among some leaders an eager or welcoming, economics-based embrace, among others a fearful, largely culture-based, rejection—capture the complexity of the challenge.
Turkey’s exclusion would signal an unwillingness to come to terms with the realities of the southern hinterland. And, as the Turkish Prime Minister said, Turkey’s achieving membership in the EU “will demonstrate to the world at large that a civilizational fault-line exists not among religions or cultures but between democracy, modernity, and reformism on the one side and totalitarianism, radicalism, and lethargy on the other” (Erdoğan 2005, 83). Admirable sentiments, but only if one forgets a number of realities. The wearing of hijāb has become a common act in Turkey, overtly demonstrating the rejection of Turkey’s European destiny (even Erdoğan’s wife, Emine, would not appear in public without it and hence cannot, thanks to Atatürk’s separation of Islam from state power, take part in official functions in Ankara or Istanbul). The Turkish police and courts habitually persecute writers and intellectuals who raise the taboo topic of the Armenian genocide and question the unassailability of “Turkishness.” The Kurds, some 15% of Turkey’s population, are still second-class citizens. So much for “democracy, modernity, and reformism.”
And how could one posit a rapid cultural harmonization (integration would be the wrong word here) of what would be the EU’s largest nation with the rest of the Union when Turkish immigrants have remained segregated within Islamic islands in all of Europe’s major cities? Perhaps the only quality that might endear the Turkish public to Europeans is the fact that the former’s share of very or somewhat favorable opinion of the United States is even lower (<20%) than in Pakistan (Pew Research Center 2006). Europe’s anti-U.S. elites would thus have a multitude of new allies. But if the EU admits Turkey, why not then the neighboring ancient Christian kingdoms of Georgia and Armenia?
And if the EU, as Erdoğan says, is not a Christian club, why not admit Iraq, one of the three largest successor states of the Ottoman Empire, ancient Mesopotamia, a province of the Imperium Romanum? And, to codify the inevitable, why not make the EU’s southern borders coincide with those of the Roman Empire? Why not embrace all the countries of the Arab maghrib and mashriq, that is, North Africa from the Atlantic Morocco (Roman Mauretania Tingitana) to the easternmost Libya (Cyrenaica), and the Middle East from Egypt (Aegyptus) to Iraq? Their populations will be providing tens of millions of new immigrants in any case.
No matter how far the EU expands, what lies ahead is highly uncertain except for one obvious conclusion. An entity so preoccupied with its own makeup, so unclear about its eventual mission, and so imperiled in terms of its population foundations cannot be a candidate for global leadership. But it already is the planet’s foremost destination of tens of millions of tourists. And many more are poised to come. When one sees the endless procession of travelers in today’s Rome, Prague, Paris, or Madrid, one can imagine a not-too-distant future (2020?) when the intensity of Chinese Europe-bound travel will surpass today’s U.S. rate (12.5 million visitors in 2005), and Europe will see every year more than 20 million Chinese tourists. This is perhaps the likeliest prospect: Europe as the museum of the world.
Japan’s rise, more phenomenal than Europe’s recovery after WW II, lasted less than two generations, between 1955, when the country finally surpassed its prewar GDP, and the late 1980s, when it was widely seen as an unbeatable economic Titan. This surge was all the more remarkable considering its near-total dependence on imported energy and the OPEC-driven oil price shocks of 1973-1974 and 1979-1980. At that time its admirable dynamism and enviable economic performance earned it widespread admiration and generated apprehension and outright fear regarding its future reach. Its rise was not derailed even by the Plaza Accord of September 22, 1985, by the G-5, the group of five nations with leading economies, which eventually led to near halving of the yen/dollar exchange rate (from 254 by the end of 1984 to 134 by the end of 1986) and to a spree of foreign acquisitions by Japanese companies and record purchases by the country’s art collectors (Funabashi 1988).
As Japan’s high-quality exports kept rising, Ezra Vogel, Harvard University’s leading expert on Japan, published Japan as Number One (1979). Japan’s expansive trend defied the revaluation of the yen in 1985 and accelerated during the next four years; the Nikkei index was at just over 13,000 by the end of 1985 and peaked at nearly 39,000 in December 1989 (fig. 3.9). But right after that, Japan’s bubble economy burst in spectacular fashion (Wood 1992; Baumgartner 1995). History has no other example of a country whose standing switched so rapidly from that of a globally admired technical and manufacturing superpower to that of a deeply ailing economy.
As just about everything began to unravel, the critics of Japan’s bubble, ridiculed during the 1980s, became the new prophets. By the end of 1990, as the Nikkei index fell to less than 24,000, many experts still foresaw an imminent recovery. But the decline continued, and by 2002 the index fell to less than 8,600. Then it rose to about 11,400 by the end of 2004, and by the beginning of 2008, it was at 15,308, still 60% below its record level. In contrast, the Dow Jones index reached 13,624 by the beginning of 2008, 16% above its pre-9/11 peak of 11,722. Because so much of Japan’s inflated stock market was propped up by a real estate price bubble, the slipping Nikkei index devastated property values. By 1995 the index of urban land prices in Japan’s six largest cities fell to half of its peak 1990 level, and then it continued to decline; by 2005 it was 25% of its top bubble value (see fig. 3.9; JREI 2006). More important, Japan, previously the paragon of value-added manufacturing, began losing jobs to other East Asian countries. In 1989, Japan derived more than 27% of its GDP from manufacturing; by 2005 that share fell below 20% (Statistics Bureau 2006). Complaints about the hollowing-out of the economy, first heard strongly in the United States during the 1980s because of that country’s huge trade deficits with Japan, became common in Japan. And every passing year has failed to arrest, much less reverse, Japan’s profound and long-lasting retreat from its place as a top economy and a leading technical innovator (Yoda 2001; Callen 2003; Hutchison and Westermann 2006).
Japan’s stagnation has given rise to many unaccustomed sights, such as homeless men living in cardboard boxes in railway stations, parks, and side streets, and dismal statistical indicators. The unemployment rate, which was mostly 2%-2.5% during the 1980s, rose to 5.5% by the end of 2001; the suicide rate, traditionally higher in Japan than in Europe or North America, increased from 16.4/100,000 in 1990 to 25.5/100,000 by 2003 (Statistics Bureau 2006). And in 2007 the first large cohort of elderly baby boomers launched the country’s mass retirement wave (typically at age 60). At the same time, increasing numbers of young people (more than 1 million) have opted out of the labor market. This NEET generation (not in employment, education, or training), which prefers just hanging out in strange clothes and hairdos, can be seen as a sign both of Japan’s national decline and its continuing personal affluence.
But some things have not changed. Japanese women still live longer than women elsewhere (their average life expectancy at birth surpassed 85 years in 2003, compared to 83 in France and 82 in Canada), and the mean per capita GDP is (in terms of PPP) only marginally behind the French or Canadian level. There have even been some welcome gains. After two generations of very high savings, people began spending more freely, on air conditioners, bathrooms, cars, and flights to Thailand or Europe. To be sure, the savings rate plummeted, but more Japanese enjoy life in greater comfort at home and more of them spend their vacations (still short, even when compared to U.S. vacations) abroad. In 2005 more than 17 million Japanese tourists (or nearly every seventh person) left the archipelago.
Japan’s prospects are discouraging. Despite the prolonged economic shock, the country still has not made sufficient adjustment to its peculiar banking, management, and decision-making systems, which are generally considered the preconditions of a new beginning (Carlile and Tilton 1998; Lincoln 2001; Grimond 2002; Tandon 2005). Prolonged recovery has become much harder because of a combination of economic and political factors: the relentless rise of China and its confrontational style of foreign policy, increasingly precarious dependence on the grossly overextended United States, and the perennial danger of irrational North Korea. By 2005 there were many signs of a real turnaround, and a key question seemed to be: If Japan’s rise during the 1980s was uncritically hyped by most of the country’s admirers, are not the country’s detractors now repeating the same mistake in reverse by degrading Japan to a category of lasting underperformer?
The editor of The Economist argued that this is precisely the case and concluded that the country “is at last ready to surprise the world how well it does, not how badly” (Emmott 2005, 3). There has been no shortage of statistics to buttress an optimistic case. By 2003 Japan’s annual GDP growth rose again to more than 2%, and many large companies became profitable again (some because of their links with manufacturing in China, others thanks to growing worldwide demand for Japan’s well-known brands of manufactures). By 2005 newly available jobs nearly matched the number of applicants (the ratio was below 0.5 in 1998), and in July 2006 seven years of deflation (as high as −0.9% of the consumer price index in 2002) came to an end as the Bank of Japan raised its interest rate from 0 to 0.25% (the rate was 6% in 1990).
There are at least three major reasons that I do not foresee Japan’s regaining a status comparable to its position during the 1980s. The first reason is perfectly captured in Donald Richie’s perceptive Japan diaries, in his entry for February 12, 1999. When Karel van Wolferen, who authored a book on the enigma of Japanese power (Wolferen 1990), remarked that the only way out of Japan’s dilemma was some kind of revolt that he could not imagine, Richie (2004, 429) told him “that Nagisa Oshima had said that this occurred only three times in Japan’s history; the Tempo Reforms, the beginning of Meiji, and in 1945. And each time the structure recrystallized, and petrified.” This may be dismissed as too deterministic, but no diligent student of history can deny the existence of national predilections.
The second reason is that the signs of Japan’s domestic renaissance have been accompanied by unsettled relations with its three western neighbors, by their deepening distrust and dislike of Japan, whose manifestations include mass demonstrations in China’s cities, South Korea’s frequent diplomatic protests, and the undisguised hostility of North Korea that has provoked the government to contemplate the future possibility of a preventive defensive strike. These seemingly intractable external factors are the main reason that even if a widely discussed change in its constitution removed the restrictions on Japan’s military actions (Nippon Keidanren 2005), the country would remain no less dependent on its strategic ties with the United States.
The third reason is Japan’s declining population. By far the most fundamental obstacle to Japan’s reincarnation as a great power of the twenty-first century is the fact that the country’s partial economic recovery came so late that it merged with the onset of Japan’s depopulation and with a globally unprecedented aging of its people. Two generations of decreasing total fertility rate—from a post-WW II peak of 2.75 in the early 1950s to about 1.3 (well below the replacement level of 2.1) by the early 2000s—have made it inevitable that Japan’s total population will decline. Only a massive, Canadian- or Australian-style immigration that would admit at least half a million people per year could reverse this, but such a policy change is most unlikely. Consequently, the only uncertainty concerning Japan’s aging is the pace. Its many socioeconomic consequences should be similar to those that will be affecting other countries (England 2002; McMorrow 2004; MacKellar et al. 2004).
The medium variant of the best Japanese projections of the early 2000s expected peak population in 2006, at 127.74 million (NIPSSR 2002), but the preliminary counts of the 2005 census (held on October 1) showed that the total population of 127.76 million was about 19,000 below the estimate for October 2004. Apparently, Japan has already entered a long period of depopulation. If there are no dramatic changes in Japan’s fertility rate, the country’s population would decline slowly at first, to about 121 million by 2025, then to only about 100 million people by the year 2050 (NIPSSR 2002). For comparison, the latest United Nations (2005) forecast sees only a marginal decline by 2025 (nearly 125 million) and the total of about 112 million by 2050. But these differences matter less than what the absolutes hide: it is virtually certain that by the middle of this century Japan will become the most aged of all aging high-income societies.
According to the medium variant of the latest Japanese projections (NIPSSR 2002), the country’s median age will reach 50 years by 2025. Whereas in 2005 one out of five people was 65 years or older (the highest share worldwide), by 2025 the share will be nearly 30%, and it will reach 35% by 2050. Japan’s age-gender population structure will assume a cudgel-like profile, in contrast to today’s barrel shape and the classical pyramid of the early 1950s (fig. 3.10). The share of adults of economically active age will drop from 66% in 2005 to 53% by 2050, when astonishingly, about one out of seven people will be 80 years or older. This would mean that there would be more very old people (80+ years) than children (0-14 years; their 2050 share is projected to be less than 11%), a situation that would create the world’s first truly geriatric society (United Nations 2005).
Japan’s depopulation is already starkly evident in many of the country’s rural areas, where entire villages are abandoned and shrinking municipalities are merging to boost their tax base. This trend is evident not only in remote areas. A few years ago I was offered (if I’d move in) a large, well-built house overlooking a lovely valley in a small village only a little more than one hour north of Kyōto that had already lost its post office, its school, and all but a few of its inhabitants. The implications of this continuing depopulation and the aging trend would be far-reaching, and some are difficult to imagine. There has never been a society, much less a major nation, where octogenarians outnumbered children.
The absolute drop would push Japan from being the world’s tenth most populous nation in 2005 (after Nigeria and ahead of Mexico) to thirteenth place by 2025 and to seventeenth place by 2050, behind Vietnam and ahead of Turkey (United Nations 2005). And because most Japanese companies still have a minimum mandatory retirement age of 60, there will be a wave of retirees between 2010 and 2020 as the high-fertility cohort of the 1950s quits working. A new law passed in 2004 raised the minimum mandatory retirement age to 65 by 2013. While many people will want to work past 60, none of these adjustments will provide more workers for occupations that require more demanding physical exertions, which will be in greater demand as both Japan’s infrastructure and its population age rapidly.
Reconstruction of crumbling highways (air hammers, concrete pouring, laying down reinforcing steel bars), repairs of buildings damaged by earthquakes, or the care of bedridden patients cannot be done by octogenarians. Japan’s much admired robotization has often been offered as a partial solution of the aging challenge: instead of importing foreign labor Japan leads the world in using industrial robots. By 2005 the country had about 356,000 robots, more than 40% of the worldwide total and nearly 90% of the combined stock of these machines installed in Europe and North America (IFR 2005). The country’s many makers of robots include such leading producers as FANUC, Fujitsu, Kawasaki, Mitsubishi, Muratec, Panasonic, and Yaskawa. But the actual gap between Japan and the rest of the industrialized world is not that large because Japanese statistics also include data on simple manipulators that are controlled by mechanical stops, and these machines would not pass a stricter definition of industrial robots used in the United States and the EU.
Moreover, Joseph F. Engelberger (who in 1956, with George Devol, established the world’s first robot company, Unimation) has been very critical of the direction taken by Japan’s leading robot researchers: “Nothing serious. Just stunts. There are dogs, dolls, faces that contort . . .” (cited in Cameron 2005). Instead, he advocates, as he did for the first time in the late 1980s (Engelberger 1989), an intensive development of household service robots to help the elderly and infirm, an advance that would particularly benefit the world’s soon-to-be most geriatric nation.
I do not see robots saving Japan, and I do not see, as Pyle (2007) clearly does, the 15 years since the end of the Cold War as a time of transition, with the country now standing on the verge of new triumphs as it reasserts not only its economic power but also its military capacity. Of course, Japan will not—as so dramatically portrayed in a recent sci-fi bestseller (Komatsu and Tani 2006)—become a victim of mighty geotectonic forces and sink into the ocean, and the Japanese will not become refugees in Papua New Guinea, the Amazon, and Kazakhstan, but even if there were a newly assertive Japan, its immediate impact and its long-term global influence would not be anything like its achievements up to the 1980s.
If the odds do not favor either Europe or Japan filling the roles of strong, dynamic, nimbly adaptive great powers of the first half of the twenty-first century, we are left with four choices. Can resurgent Islam realize an oft-stated goal of some of its militant leaders and establish a caliphate even more extensive than that of the Umayyad dynasty (661-750), whose power reached from the Iberian Peninsula to Afghanistan and from Armenia to Yemen? Can Russia, strategically downsized by the collapse of the USSR, regain its superpower status? Can China translate its economic power into a wider strategic dominance? And is it possible that the United States can successfully cope with many signs of advanced decay and retain its place on top for at least another two generations?
The question regarding the establishment of a new centrally governed Muslim caliphate extending from Spain to Pakistan that I posed in the preceding section is easy to answer. Because the Muslim world is too heterogeneous (in sectarian, economic, cultural, and political ways), the chances of seeing such an extensive, coherent, and globally powerful political and economic entity before 2050 are vanishingly small. That leaves us with questions that are much harder to answer. Why is the current Muslim world (with a few exceptions)—taken as the entire population of believers, al-’umma al-islāmīya or al-dār al-islām (fig. 3.11)—so poorly governed and undemocratic? Why does it lack an acceptable human rights record, why is it so stridently intolerant, why does it relegate women to second-class status? Why is it so prone to internal violence, and why is it the leading global perpetrator and exporter of suicide terrorism?
Reasoned answers to these questions should tell us what we should really be worried about in the future, and what is the outlook for defusing and reducing threats. The Muslim faith itself offers at best a limited explanation. The now common Western perception of Islam as a zealous, monolithic faith with a near-hypnotic power over populations imbued with a uniform hatred of the West is a figment of historical ignorance and unhelpful generalization. Islam’s comprising so many populations and different traditions makes the faith fairly heterogeneous; its practitioners span a wide continuum of commitments and devotions (B. Lewis 1992; Esposito 1999). More important, a large part of the Muslim world is no different from the Christian world in its adherence to (some or most) major tenets of a traditional religion, observation of principal holidays and ceremonies, participation (by the more devout) in pilgrimages, and pride in the associated literary and artistic heritage.
These Muslims—be they in China, Indonesia, India, Iran, Russia, Bosnia, Egypt, Tunisia, Mali, Niger, or Tanzania—are not going to upend the history of the next 50 years any more than will the hundreds of millions of their (lukewarm to earnest) Christian counterparts in Europe, the Americas, and Africa. Nor should the term Islamic fundamentalism generate reflexive fears. The term has entirely different connotations when understood as a strictly religious identifier or as a modern call to arms against “infidels” (Sidahmed and Ehteshami 1996; Davidson and Miller 1998; Moaddel and Talattof 2002; Khaled 2005; Margulies 2006). On one hand, it may mean only exemplary piety, but it could also describe a strident insistence that only the Koran and sunna, the traditions based on the sayings, actions, and approvals of the Prophet Muhammad incorporated in the books of hadīth, are the exclusive guides of the faith. Or it could be a label for the sunnī intolerance of their shī‘ī co-religionists, who do not accept sunna as binding.
On the other hand, Islamic fundamentalism has come to designate religiously inspired violence perpetrated by Muslims. Most Muslim fundamentalists (sensu lato) will never commit any acts of violence, and the actions of Muslim terrorists have been condemned (albeit belatedly) by a fatwā authored by the Fiqh Council of North America (2005) as well as by a Saudi fatwā (Saudi Committee 2004) issued by the most fundamentalist (sensu stricto) body of all, Saudi Arabia’s Permanent Committee of Religious Research and Ifta, which is chaired by the kingdom’s leading religious authority, Grand Mufti Shaikh Abdulazīz Al-Shaikh. The Saudi fatwā, released a day after the Riyādh bombings of April 21, 2004, not only condemned them as “a forbidden and sinful act” but also forbade any attempt “to justify the acts of these criminals.” Tellingly, no such fatwā followed the destruction of the Twin Towers on September 11, 2001.
Islam’s guiding texts do not give us, contrary to what is claimed by ill-informed commentators, any unequivocal guidance regarding aggression and compassion. Much like the contradictory instructions and enjoinments of the Torah, and the stark dichotomy between vengeful passages in the Old Testament and the submissive and compassionate preaching of Jesus Christ, the Koran and hadīth offer many contradictory messages that a determined exegesis can use to reach opposite conclusions. I juxtapose just two sets of quotes in order to exemplify this contrast between militant and compassionate Islam. The third sūra of the Koran (verse 151) puts it unequivocally: “We will put terror into the hearts of the unbelievers. . . . Hell shall be their home.” But Abū Dāwūd (hadīth 4923) states with equal clarity: “If you show mercy to those who are on Earth, he who is in the Heaven will show mercy to you.”
The fifth sūra (verse 32) says, “We laid it down for the Israelites that whoever killed a human being, except as a punishment for murder or other wicked crimes, should be looked upon as though he had killed all mankind; and whoever saved a human life should be regarded as though he had saved all mankind.” This eloquent passage was cited in the fatwā against terrorism issued in July 2005 by the Fiqh Council of North America, albeit leaving out the preamble “We laid it down for the Israelites.” But the very next verse (33) of the same sūra carries a very different message, one that could easily be used to justify violence against infidels: “Those that make war against Allah and His apostle and spread disorders in the land shall be put to death or crucified or have their hands and feet cut off on alternate sides, or be banished from the country.” Notice that “spreading disorders,” a conveniently elastic category, is enough to qualify for such treatment.
The term jihād is now a common synonym for holy war (a concept for which there is actually no word in classical Arabic), but the noun, derived from the verb jahada, to exert, struggle, or strive, originally signified any sort of exertion to the utmost, striving toward an exalted goal. Muhammad himself called the moral struggle against one’s self al-jihād al-akbar, the greater jihād, in contrast to the lesser jihād of the sword. Contrary to the common Western perception that jihād is the guiding star of Islam, hadīth 48 orders a different sequence:
I asked Allah’s Messenger (may peace be upon him) which deed was the best. He (the Holy Prophet) replied: The Prayer at its appointed hour. I (again) asked: Then what? He (the Holy Prophet) replied: Kindness to the parents. I (again) asked: Then what? He replied: Earnest struggle (jihād) in the cause of Allah.
First piety, then compassion. This sequence could have come straight from the sermons of Jesus.
And there is more. How can any jihād be compulsory when the 256th verse of the second sūra says quite explicitly that “there shall be no compulsion in religion”? And when in hadīth 1009 the Prophet explicitly warns: “O you men, do not wish for an encounter with the enemy. Pray to Allah to grant you security,” much as the 190th verse of the second sūra admonishes the faithful “to fight for the sake of Allah those that fight against you but do not attack them first. Allah does not love the aggressors.” Perhaps I have already belabored the point enough. The problem is not Islam, a religion with tenets as contradictory, as open to diverse interpretation, and as confusing in its totality as are its two great monotheistic inspirations, Judaism and Christianity. The problem is political or politicized Islam, Islam tendentiously interpreted, or not interpreted at all and hence stubbornly anchored in its medieval origins.
But if arguments have no place in dealing with the Koran (because it is God’s revealed word), it must either be accepted in its entirety by the faithful or rejected by unbelievers. Doubting parts of it amounts to rejection, as does any adaptation of the text to modern needs. The challenge to literalists is not how to reconcile the revelation with modernity but how to adapt life in the modern society to the revealed teaching. This approach leaves no room for any interfaith dialogue that so many Westerners call for. More important, it leaves little hope for any true modernization of the Muslim world because it prevents any adaptation to challenges ranging from open discussion of key problems facing the Muslim societies to environmental change. It rejects functional fusion of an ancient religious tradition with the needs of the modern world. And it rules out the broad-minded tolerance that is required for international cooperation in a globalized world. In sum, this approach immures the faith.
Not surprisingly, many Muslims feel that their religion has been hijacked by extremists (be they madrasa-bound rigid traditionalists or modern descendants of Ibn al-Sabbah’s al-hashashīn who are bent on murder by suicide) and that it must be reclaimed (Khaled 2005). Perhaps none has outlined this great Muslim dilemma better than Muhammad Shahrūr, by profession a civil engineer, by vocation a passionate advocate of modernized, tolerant, enlightened Islam. In two path-breaking books (Shahrūr 1990; 1994; neither of them in the Library of Congress, a telling commentary on Americans’ understanding of Islam) and in many presentations and interviews, he argues boldly for a fundamental reinterpretation of the faith: “We cannot go on without radical, religious reform . . . , like the one initiated by Martin Luther. We have reached a dead end; we are stuck in a dark tunnel. . . . If we don’t, there is no hope for us, because we will continue living in the past” (Shahrūr 2004).
His most programmatic statement is a proposal for an Islamic covenant based on “interpreting the Book in a contemporary way” that uses the Koran to justify such fundamental tenets of modern, tolerant faith as a full exercise of all personal freedoms (“freedom is the only form in which man’s worship of God can be embodied”), no religious coercion (“The fact of having no compulsion in religion is too fundamental to be subject to elections or debate”), no persecution of other religions (“Therefore this distinction does not justify enmity, hatred, and killing”), the need for democracy (“democracy is so far the best relative standard man has achieved”), and rejection of religiously motivated violence (Shahrūr 2000).
Islam thus faces a fundamental, long-lasting, contentious, even outright dangerous internal fight over its modern identity, similar to what Christianity experienced during the centuries between the impassioned preachings of Jan Hus (1369-1415; a reformist Czech priest who preceded Martin Luther (1483-1546) by a century) and the separation of church and state, which in some nations was legally completed only during the twentieth century. I do not claim that what the non-Muslim world does (being considerate, delicately critical, disdainful, indifferent) is irrelevant, but its actions or inactions are not the key determinant of the outcome of Islam’s internal conflict.
The contrast between Islam’s two choices is as profound as in any previous clash of incompatible ideologies. On the one hand is the extremism of al-wahhābīya (a Saudi sect often presented under the label of Salafism, al-salafīya al-jihādīya), which accuses other Muslims of apostasy (takfīr) to justify such killings as the sunnī massacres of shī‘ī (Schwartz 2002). Its means of exalting terrorism include widespread distribution of religious cassettes that advocate violent jihād by emphasizing sexual rewards for suicide “martyrs” (MEMRI 2005a). On the other side, with Shahrūr, are spokesmen who call for modernization, including Muhammad bin ‘Abd Al-Latīf Aal Al-Shaikh, a Saudi writer, who sees the ideology of terror groups as very similar to Nazism (both based on hatred and physical elimination) and asks why, given this similarity, Muslims are not fighting against the foundations of this religiously inspired terror, against “its religious scholars, its theoreticians, and its preachers—just as we deal with criminals, murderers, and robbers?” (MEMRI 2005b).
Of course, not in all countries with Muslim majorities do people feel that this contest is a critical part of their societies or a key determinant of their futures. But certain aspects of this conflict are present in almost all countries, and the outcome, all too uncertain, will determine Islam’s global impact during the next 50 years.
Although this internal contest over the meaning and the mission of the faith is important, it is not the only Muslim challenge. There are at least three intertwined problems and trends that are influenced by this contest and whose evolution will, in turn, help to determine its outcome: the lack of secular sources of state legitimacy, Muslim countries’ modernization deficit, and a tardy demographic transition.
The lack of secular sources of state legitimacy leads to what Shahrūr (2005, 39) calls “the bizarre combination of ruling regimes” with their default reliance on traditional religious sources and the emergence of archaic despotic authorities. Saudi Arabia and Iran (and Afghanistan under the Taliban) are the most extreme examples of this reality. But its persistence is clearly evident even in Turkey eight decades after Kemal Atatürk secularized the country in 1924 (fig. 3.12)—a nation that seeks entry into the EU but a large part of whose population still imbues a piece of cloth covering a woman’s head with transcendent qualities and sees it as a sacred banner of non-negotiable identity. Pamuk’s (2004) book dealing with these matters should be required reading.
This default reliance on religious sources of state legitimacy explains the prominence of sharī‘a (literally, a path, a way), the Islamic code of law and living whose precepts deal with personal affairs, as well as crime, commerce, finance, and education, and thus preempt adoption of laws that form the foundation of modern states. Sharī‘a reinforces the persistence of an archaic worldview dependent on rigid interpretations of religious texts, and its reach is not weakening under the pressure of modernity. Just the opposite is true, owing to demands to make it a law for Muslim minorities in Western countries (Marshall 2005).
The modernization deficit in the Muslim world is in many ways the result of the situation just outlined. It ranges from repression of individual freedoms to a parlous state of scientific research to a relatively poor quality of life experienced by the majority of people in Muslim countries. In 2006 only 3 out of 46 countries with Muslim majorities had the highest ranking (1 or 2) in a global comparison of political rights and civil liberties (Freedom House 2006). The famous Syrian poet Ali Ahmad Sa‘īd (living in exile in Paris) noted that in today’s Arab world “words are treated as crime” (MEMRI 2006b, 1). This situation is common even in Turkey, where ultranationalist tendencies lead the authorities to repeatedly accuse and prosecute writers who are seen as disrespecting or denigrating “Turkishness.”
Nothing spoke as eloquently of the vengeful intolerance of retrograde Islam’s usurpation of modern state authority as did the most famous modern fatwā, issued on February 14, 1989, by Ayatollah Khomeini. He asked the faithful to kill Salman Rushdie for writing The Satanic Verses, a book that neither he nor any but a few score among his tens of millions of zealous supporters ever read. (Contrary to widespread reports, this fatwā was never formally repealed; indeed it was reaffirmed by Khomeini’s successor, Ali Khamenei, in 2005.)
How can any modern state, how can any self-respecting individual engage in a dialogue with those who see death as a fitting verdict for a fictional tale, a beheading for a work of art? Unfortunately, this intolerance is deeper, aimed at all non-Muslims and fully institutionalized. While new mosques (including some large and magnificent buildings constructed with Saudi money) rise in every major city in Europe, North America, and Australia, in July 2000 the Saudi Permanent Council for Scholarly Research and Religious Legal Judgment reaffirmed the ban on construction of churches in any Muslim country because, in its view, all religions other than Islam are heresy and error (MEMRI 2006a).
The modernization deficit is also obvious in the dismal state of learning and research in the Muslim world. Reiterating the glories of medieval Muslim science and engineering (Al-Hassani 2006) will not change the fact that at the beginning of the twenty-first century researchers in Muslim nations (with more than 12% of the world’s population) authored only 2% of all scientific publications, and 60% of that minuscule share came from a single country, Turkey (Kagitçibasi 2003; Butler 2006). No Muslim country ranked among the 20 states with the highest overall scientific output (a group that contains such small and resource-poor nations as Belgium, Denmark, and Israel), and the same is true for health-related publications (Paraje, Sadana, and Karam 2005). Countries of North Africa and the Middle East also rank near the bottom of the worldwide record for patents and trademarks.
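The scale of this under-representation follows directly from the shares just cited; a quick arithmetic sketch, using only the figures quoted above:

```python
# All inputs are the shares quoted in the text (Kagitçibasi 2003; Butler 2006).
pop_share = 0.12        # Muslim nations' share of world population
pub_share = 0.02        # their share of world scientific publications
turkey_within = 0.60    # Turkey's share of that publication output

parity_ratio = pub_share / pop_share          # output relative to population parity
turkey_world = pub_share * turkey_within      # Turkey's share of world publications
rest_world = pub_share - turkey_world         # all other Muslim nations combined

print(f"output is {parity_ratio:.0%} of population parity")
print(f"Turkey alone: {turkey_world:.1%} of world publications")
print(f"all other Muslim nations combined: {rest_world:.1%}")
```

That is, the Muslim world publishes at about one-sixth of population parity, and outside Turkey the combined share is below 1% of world output.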
These realities led Sa‘īd (MEMRI 2006b, 2) to compare the achievements of Arabs, given their resources and great capacities, with what others have done over the past century and to conclude, “I would have to say that we Arabs are in the phase of extinction, in the sense that we have no creative presence in the world.”
The modernization deficit is also responsible for a belief in bizarre conspiracy theories that pervades the Muslim world. Even many Western-educated Arab intellectuals believe in the authenticity of The Protocols of the Elders of Zion, an anti-Semitic fabrication of the early twentieth century. In 2003, Nigeria’s northern Muslim states suspended polio vaccination because of a rumor of deliberate contamination of vaccines with hormones aimed at making young Muslim girls infertile to depopulate the region. And an opinion survey of 13 countries (Pew Research Center 2006) found that large majorities of Muslims do not believe that their coreligionists carried out the 9/11 attacks. In Indonesia, the world’s largest Muslim country, as well as in Turkey, only 16% of people thought so, hardly a basis for rational dialogue on responsibility and the need for modernization.
As far as ordinary Muslims are concerned, the modernization gap is most obvious with respect to their quality of life. As a group, Muslim countries, including those fabulously rich, small Persian Gulf states, do not enjoy the high quality of life that could be expected based on their high per capita GDPs. Even among the richest Muslim countries of the Middle East, only the Gulf states have infant mortalities at modern levels (near or below 10/1,000 live births); in the early 2000s the Saudi rate was above 20, and the Iranian and Turkish rates were above 30 (UNDP 2006). And, perhaps most tellingly, in 20 out of 24 Muslim countries of Europe’s southern hinterland, the difference between GDP per capita and Human Development Index rank is negative (fig. 3.13).
The third great challenge facing the Muslim world is its tardiness in completing the demographic transition, a reality that becomes particularly important given the just-described lack of secular state authority and adequate scientific advances, two necessary mainsprings of modern economic growth. These deficits will continue to hamper any progress in coping with the aspirations of expanding populations, in particular of assertive young men. At the beginning of the twenty-first century the only countries with Muslim majorities whose total fertility was near replacement level were Iran, Indonesia, and Malaysia. In all populous Muslim countries of North Africa and the Middle East, as well as in Pakistan and Bangladesh, total fertility was 50%-100% above replacement.
Arab countries of the Middle East (from Egypt to Iraq, and from Syria to Yemen, excluding North Africa) had about 190 million people in 2005, but the UN’s medium forecast expects the doubling of this population to about 380 million by 2050 (United Nations 2005). Populations of those Muslim societies that have historically been most prone to various forms of fundamentalism (al-wahhābīya, militant shī‘a) and that harbor deep anti-Western sentiments (Pakistan, Afghanistan, Iran, Saudi Arabia, Yemen) are projected to rise from nearly 300 million in 2005 to about 610 million by 2050, when they could be nearly as large as Europe’s declining population (the UN’s medium forecast puts it at about 650 million by 2050).
Most important, by 2025 nearly every seventh person in that region will be a man in his late teens or twenties (in Europe it will be every sixteenth person). Young (usually unmarried) men have always been the most common perpetrators of deadly violence. This is not only an impression gathered by watching the images of Somali gangs in Toyota pickups or Taliban attacks in Kandahar. Analyses of demographic and war casualty data show that this variable consistently accounts for more than one-third of the variance in severity of conflicts (Mesquida and Wiener 1996; 1999). Large numbers of young unattached males will thus be a major source of internal tension, instability, and violence in dar al-islām that can only be accentuated by politically organized terrorism.
The combination of these trends guarantees decades of worrisome social and political convulsions. The Muslim world would present a considerable global security challenge even if it did not harbor any radical movements engaged in external terrorism. The global repercussions of this internal instability will be compounded by the fact that five Persian Gulf countries (Saudi Arabia, Iran, Iraq, Kuwait, and the United Arab Emirates) control about 65% of the world’s remaining reserves of inexpensive liquid oil (BP 2006) and that they will strengthen their dominance of global oil production by 2030. In 2005 they produced roughly one-third of all oil; by 2030 this share will rise to at least 40%. At the same time, another rapid slide of oil prices could have even more negative impacts on these major oil producers than the post-1985 slump in revenues.
It is highly unlikely that either of the two dominant oil producers (Saudi Arabia and Iran) or Pakistan will be able to maintain their present regimes, but it is impossible to predict the timing and the mode of coming power shifts. (Assorted Middle East experts have been predicting a violent collapse of the House of Sa‘ūd for decades.) Even if the new regimes were to be more democratic and secular, they might be no less assertive and no easier to deal with. Beyond that, it is unfortunately all too easy to sketch some very scary scenarios, ranging from the emergence of a new militant shī‘a state in al-Sharqīya, the Eastern province of Saudi Arabia (it would control one-quarter of the world’s oil reserves) to deliberate nuclear proliferation by Pakistan and (perhaps soon) Iran. For sleepless nights, think of a future nuclear Sudan or Somalia.
Counting Russia out would be historic amnesia. There is nothing new about Russia’s leaving the great power game and then reentering it vigorously decades, even generations, later. The country was out of the contest for great power status in 1805 as Napoleon was installing his relatives as rulers of Europe. But less than a decade later, as Czar Alexander I rode on his light-grey thoroughbred horse through defeated Paris, followed by thousands of his Imperial Guard, Bashkir, Cossack, and Tatar troops, the country was very much an arbiter of a new Europe (Podmazo 2003). Russia was sidelined again in 1905, convulsed by its first bloody revolution, but 15 years later a victorious revolutionary regime regained control over most of the former Czarist territory and inaugurated seven decades of Communist rule. Then, as the first waves of revolutionary fervor subsided, Russia turned inward, and by 1935, with Stalin plotting murderous purges of his comrades, his army, and millions of innocent peasants, the country’s international role seemed once again marginalized.
Fourteen years later Russia was not only a victorious superpower with troops stationed from Korea to Berlin but also the second nuclear power with a successfully tested fission bomb (August 29, 1949). This last episode of Russia’s militant role on the global stage lasted nearly 50 years, between 1945 and 1991, cost the West many trillions of dollars in armaments and dubious alliances, prolonged many violent proxy conflicts in far-flung places from Angola to Afghanistan, and brought the world to the brink of thermonuclear war (Freedman 2001). Moreover, it contributed massively to environmental degradation with hundreds of highly contaminated nuclear weapons sites in both the United States and the USSR, and with particularly high levels of radioactive wastes detectable in Russian rivers (including Siberia’s main artery, the Yenisei) and in the Arctic Ocean (USDOE 1996).
But any suggestion of Russia’s reemergence as a great power seemed far-fetched in the years immediately following the demise of the Soviet Union (officially on December 8, 1991, at a meeting of Russian, Ukrainian, and Belorussian leaders in a hunting lodge at Belovezhskaya Pushcha, the virgin forest reserve near the Polish border). Russia seemed to be assaulted by too many intractable problems. The sudden demise of the command economy that had governed the country since the Stalinist five-year plans of the 1930s and a messy creation of new (often criminalized) business structures were accompanied by years of declining GDP. This shrank by 5% in 1991, 14% in 1992, 9% in 1993, and 13% in 1994; by the time it bottomed out in 1998, its per capita rate (in constant monies) was only 60% of the 1991 level, and real income had fallen to about 55% of the peak Soviet rate.
The country also faced a new, determined challenge from resurgent Islam. The first war in Chechnya (1994-1996) turned the republic’s capital into ruins and ended in a fragile truce, and the second war, which began in 1999, continues at a low level, with intermittent large-scale brutalities and terrorist attacks. To this must be added widespread social ills, such as the parlous health of Russian men resulting from widespread alcoholism, smoking, poor diet, low physical fitness, and mental problems (World Bank 2005). From a long-term perspective the most worrisome challenge is a steep decline in Russia’s fertility rates and the prospect of considerable depopulation.
But Russia rebounded once again. By 2005 its economic performance had improved to such an extent that it actually became fashionable to speculate about its renewed superpower status. Its economic recovery was driven largely by the exports of mineral riches, in particular oil and gas, and this accelerated with the post-2004 rise of crude oil prices. In addition to ranking among the world’s leading producers of such strategically important metals as nickel (used in specialty steels), titanium (for alloys and pigments), and palladium (essential in catalytic converters), Russia is an unmatched treasure house of energy resources (Leijonhielm and Larsson 2004).
The country has the world’s second largest coal reserves (after the United States), second-largest potential for hydroelectricity generation (after China), seventh-largest crude oil reserves, and by far the world’s largest natural gas reserves (BP 2006; Merenkov 1999). Russia’s reserves of natural gas amount to about 27% of the global total, nearly as much as the combined reserves of Iran and Qatar and almost ten times larger than those of the United States.
Russia’s oil production peaked at nearly 570 Mt/year in 1987-1988, fell to 303 Mt/year in 1996, and rose to 470 Mt by 2005, accounting for about 12% of the world’s total (BP 2006). Concurrently, Russia’s crude oil exports rose from about 100 Mt/year to nearly 250 Mt/year, and forecasts envisage sales of more than 350 Mt/year by 2010 (Lee 2004). Russia is Europe’s largest supplier of natural gas, delivered through the world’s largest and longest pipelines from western Siberia (fig. 3.14). In 2005 its exports of about 151 Gm3 were the largest in the world, with nearly one-quarter destined for Germany and 15% for Italy. Even larger projects are planned to carry Siberian natural gas to Japan and China and to sell liquefied natural gas around the world (Smil 2003; USDOE 2005). The expected boom in natural gas exports is based on the fact that the fuel can substitute for oil in many of today’s uses and on its inherent environmental advantage: its combustion generates about 30% less CO2 per unit of liberated energy than oil.
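The carbon advantage of natural gas can be roughly checked with standard combustion emission factors; the values below are approximate IPCC-style defaults assumed here for illustration, and the computed reduction lands in the 24%-28% range depending on the oil product, broadly consistent with the figure cited above:

```python
# Approximate CO2 emission factors in kg CO2 per GJ of released energy
# (assumed IPCC-style default values, for illustration only).
EF_NATURAL_GAS = 56.1   # methane-dominated natural gas
EF_FUEL_OIL = 77.4      # residual fuel oil
EF_DIESEL = 74.1        # gas/diesel oil

for name, ef_oil in [("residual fuel oil", EF_FUEL_OIL), ("diesel oil", EF_DIESEL)]:
    reduction = 1 - EF_NATURAL_GAS / ef_oil
    print(f"vs {name}: {reduction:.0%} less CO2 per unit of energy")
```

The advantage stems from methane's high hydrogen-to-carbon ratio: more of the heat of combustion comes from forming H2O rather than CO2.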
Russia has another strength in its intellectual capacity. The country has always had many highly creative scientists and engineers, whose fundamental contributions are generally unknown to the Western public. How many people watching a scanner toting up their groceries know that Russian physicists, together with their U.S. colleagues, pioneered masers and lasers? (Nobel prizes in physics were awarded to Nikolai Gennadievich Basov and Aleksandr Mikhailovich Prokhorov in 1964 and to Zhores Ivanovich Alferov in 2000.) How many people seeing the images of the U.S. Air Force stealth planes know that this class of aircraft began with Piotr Iakovlevich Ufimtsev’s (1962) equations for predicting the reflections of electromagnetic waves from surfaces? How many patients appreciate that Russian surgeons pioneered such remarkably unorthodox methods as extending leg bones shortened by bone deficiencies, deformities, and fractures (Gavril Abramovich Ilizarov) and treating nearsightedness by radial keratotomy (Svyatoslav Nikolaevich Fyodorov)?
The Russian economy is reenergized by Siberian gas and oil, profiting from rising exports of hydrocarbons and strategic metals, and drawing on admirable scientific and engineering expertise. The Russian military is being rearmed with a new generation of weapons, and Russian diplomacy is vigorously reengaging in the world’s major conflict zones. These factors add up to an undeniable reassertion of the country’s global presence. But are these natural, economic, and intellectual advantages enough to outweigh the country’s many problems and elevate it once again to superpower status? A closer look indicates that the prospects for a strong and lasting comeback are not encouraging.
True, the economy has turned around. Between 1998 and 2005, Russia’s GDP rose nearly 60% (inflation-adjusted), but it was still some 10% behind the peak Soviet level in 1990. Personal consumption, which reached its low point in 1992, did somewhat better, but in 2005 it was no higher than in 1990. When converted using the most likely PPP, Russia’s per capita GDP of about $10,000 in 2005 was comparable to that of Malaysia or Costa Rica and lower than the Spanish level. The country’s reliance on its natural resources is both a strength and a great potential weakness. A sudden fall in cyclical commodity prices would deprive the government of nearly two-thirds of its revenue. And there is plenty of evidence that the so-called resource economies, particularly the oil-dependent ones, are neither as efficient nor as politically stable as countries that have to rely on imports of basic resources (Friedman 2006).
Politically, Russia has a long way to go to become a stable democracy. Doubts about its progress and hopes for gradual improvement are reflected in contradictory characterizations of the country’s president, Vladimir Vladimirovich Putin, as a traditional Russian autocrat with a high-level KGB pedigree or a sui generis democrat (Astrogor 2001; Blanc 2004; Zdorovov 2004; Herspring 2006). But these evaluations might seem quaint in the future if Putin were to be supplanted by a much more inward-looking, xenophobic leader. Strategically, Russian leaders know (despite the reflexive protests) that NATO at its borders is not a mortal threat, but their post-1991 worries about resurgent Islam are likely only to intensify.
The Chechen conflict keeps spilling into neighboring Dagestan, Ingushetia, and Georgia. Oil-rich Azerbaijan and Kazakhstan are Russia’s only relatively stable southern neighbors, whereas the former Soviet Central Asian republics with Muslim populations (Uzbekistan, Tajikistan, Kirgizstan) and their neighbor, Afghanistan, have a much more uncertain future. The centuries of migration and the expansion of Czarist Russia brought relatively large resident Muslim nationalities into the Slavic state. They now amount to nearly 15% of its population and are concentrated in such oil-, gas-, and mineral-rich regions as Tatarstan, between the Volga and the Kama, and in neighboring Bashkortostan in the southern Urals (Crews 2006; Bukharaev 2000). War in Chechnya, instability in the North Caucasus regions, the post-Soviet Islamic rebirth, and the nationalist aspirations of Muslim minorities worsened the relations between Russians and Muslims both on the state and individual levels, but because Muslims inside Russia are divided along cultural, ethnic, and religious lines, they are not a monolithic power confronting the state (Gorenburg 2006).
Above all, Russia’s renewed economic growth and rising resource exports have done little to alleviate the country’s social ills and population challenges that became so prominent during the 1990s and that do not appear to have any effective near-term solutions. Russia’s decreasing population and poor health statistics have attracted a great deal of attention at home and abroad (Notzon et al. 1998; DaVanzo 2001; Powell 2002; World Bank 2005). The country shares the problem of aging population with the European Union and Japan, but the rapid rate of its population decline and the principal reasons for it have no counterpart elsewhere in the industrialized world.
The country has a peculiar aging profile. The decline in its share of children (0-14 years of age) is expected to be similar to that of Japan. By 2025 children will make up less than 16% of the population in Russia and less than 13% in Japan. But in 2025, Russia’s share of people 65+ years of age will be much smaller than Japan’s: less than 18% compared to about 29%. Unlike in Japan or some European countries, Russia’s aging population will not include large numbers of old and very old people because people are simply dying too young. Hence population is shrinking not only because of falling fertility but also because of excessive premature mortality.
Russia’s total fertility rate dropped (with a temporary uptick during the 1980s) from about 2.8 in the early 1950s to about 1.2 in the first half of 2000s (fig. 3.15). When the USSR disintegrated, Russia had about 140 million people; by 2000 that total had declined by 5 million, and by 2005 the population had shrunk by another 3 million. The medium variant of the United Nations’ (2005) long-term forecast puts the number at about 129 million by 2025 and less than 112 million by 2050, when the high and low variant totals are expected to be, respectively, 134 million and 92 million. No other modern industrialized country is expected to experience such a population decline: in absolute terms, ~35 million people, or a Canada-sized population loss, in just two generations; in relative terms, a decline of ~25% between 2000 and 2050. And none share a major reason for it, Russia’s falling life expectancy.
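The magnitude of the projected loss of about 35 million people, roughly a quarter of the population, can be checked with simple arithmetic; the 2000 population of about 147 million is an assumed round value, while the 2050 figure is the UN medium variant quoted above:

```python
pop_2000 = 147e6     # approximate Russian population in 2000 (assumed round value)
pop_2050 = 112e6     # UN medium-variant forecast for 2050 (United Nations 2005)
canada_2005 = 32e6   # Canada's population in the mid-2000s (assumed for comparison)

loss = pop_2000 - pop_2050
print(f"absolute loss: {loss / 1e6:.0f} million people")
print(f"relative loss: {loss / pop_2000:.0%}")
print(f"comparable to Canada's ~{canada_2005 / 1e6:.0f} million")
```

The arithmetic confirms a decline of about 35 million, or close to a quarter of the 2000 total, within a Canada-sized margin.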
This reality runs against a long-standing trend of rising life expectancies in the richest nations to more than 75 years for men and more than 80 years for women. By the late 1960s, Russia’s post-World War II health and nutrition gains brought its combined (male and female) life expectancy to within about 4 years of the EU average, but while the EU average kept improving, Russia’s came to a standstill, rose briefly during the late 1980s, and then plunged. By the year 2000 the gap between Russia and the EU was more than 12 years: less than 66 years versus slightly more than 78 years (United Nations 2005).
Several factors account for this large gap. Russia’s infant mortality has been falling, but by 2000 it was still 15 deaths/1,000 live births, roughly twice the mean of Western nations. At about 40 per 100,000 live births, Russia’s maternal mortality is more than four times as high as in the West. But the main reason for the gap is an extraordinarily high adult mortality, in particular, male mortality. A life expectancy gender gap is normal in every society (~5 years in the UK to ~7 years in Japan), but in Russia it was 13 years by 2002: 72 years for women, 59 years for men (see fig. 3.15). This means that by the early 2000s Russian male life expectancy at birth was lower than the Indian mean of about 63 years and far behind the Chinese average of about 70 years (United Nations 2005).
A number of factors contribute to this deficit. Russian cardiovascular disease mortality is about three times higher than in the West; premature cancer mortality is also considerably higher. In the early 2000s the Russian death rate due to road accidents was nearly 0.4/1,000, compared to about 0.15/1,000 in the United States (WHO 2004a). During the late 1990s the Russian suicide rate peaked at more than 50/100,000, and since 2000 it has been about 40/100,000, four times the rate in the EU and the United States (World Bank 2005). Russia is unequaled in its toll due to alcohol poisoning, over 30/100,000.
Poor nutrition (saturated fat), high rates of smoking (60% of adult Russian men smoke, more than twice as many as in the United States), spreading drug use (a total addict population estimated at ~4 million in 2005), inadequate preventive health care, and mental stress contribute to Russia’s high premature mortality due to noncommunicable diseases. All these factors have one common denominator, the chronic epidemic of alcoholism. The overall level of officially reported average adult per capita alcohol intake in Russia has been only 10%-25% above the Western mean. This difference alone cannot explain the extraordinary impact of drinking on Russian society, but an explanation can be found in the actual consumption of alcohol and the kind of drinks consumed. The unrecorded consumption of samogon (home brew) and bootleg liquor may as much as double the official total, which means that an average annual intake per adult Russian amounts to at least 15 liters of pure alcohol, roughly 50% more than in Western countries (McKee 1999; WHO 2004). A 2002 survey found that about 70% of Russian men and 47% of Russian women were drinkers, and whereas 50%-70% of Western alcohol intake is in the form of beer or wine, in Russia spirits, especially vodka, account for 80% of all alcohol consumed. The history of Russia’s attempts to deal with its epidemic of vodka alcoholism shows how intractable it is (Herlihy 2002; Tartakovsky 2006; Vodka Museum 2006).
The first temporary prohibition was introduced in 1904 during the time of the Russo-Japanese War, the second one at the beginning of World War I, on August 2, 1914. Because these cut state revenues by more than one-quarter, they helped to speed up the demise of the Czarist empire, but they did nothing to change the population’s attitude toward “little water.” Bolsheviks reintroduced the ban in 1917, but it was reversed by 1925. Stalin, Khrushchev, and Brezhnev enforced the state monopoly on vodka but did not dare to proscribe the drink. That was done again only by Mikhail Gorbachev in May 1985, but his proclamation (“On the Improved Measures Against Drunkenness and Alcoholism”) once again cut state revenues and increased the consumption of dubious (and often deadly) substitutes, and it was reversed five years later.
Yeltsin’s new market economy included the abolition of the state monopoly on vodka (on June 7, 1992), a decision that led to a large increase in hazardous drinking and alcohol poisoning as well as to significant revenue loss. The latter was the main reason why the monopoly was reestablished just a year later. Not surprisingly, with the average Russian adult drinking some 20 liters of vodka per year (~10 liters of pure alcohol), a large share of adult nonaccident mortality (male cardiovascular disease, liver cirrhosis), at least half of all automobile accidents, and many suicides, homicides, and fatal workplace and home injuries are related to this intractable vodka abuse. And the latest concern aggravated by alcohol is the rapidly worsening HIV/AIDS problem. This addiction to vodka has no effective short-term solutions. The resulting population trends cannot be readily reversed, a fact well understood by Russian leaders.
In his first State of the Nation address in July 2000, Putin acknowledged that Russia is “facing the serious threat of turning into a decaying nation.” Six years later he called the nation’s demographics—a loss of almost 700,000 people per year—“the most acute problem facing our country today.” He singled out new programs for improving road safety, preventing the import and production of bootleg alcohol (in his 2005 address he noted that some 40,000 people died each year from alcohol poisoning), and the early detection, prevention, and treatment of cardiovascular diseases. He also called for encouraging immigration by attracting skilled compatriots from abroad. Even if these measures are successful, the positive impacts will not be seen for many years.
That is perhaps why Putin emphasized ways to increase the birth rate by introducing a number of pro-natal policies, including doubling the benefits for a first child (to 12,500 rubles/month), doubling that amount for a second child, giving maternal leave of 18 months at 40% of pay, and more generously subsidizing preschool child care. Altogether, these financial incentives should total at least 250,000 rubles (about $9,300) per family, a very large sum in a country where the average 2005 gross national income (in terms of PPP) amounted to about $10,000 and the average monthly wage was 8,530 rubles (CBRF 2006). But pro-natal programs have a decidedly unimpressive record, and it is unlikely that Russia’s population decline can be stopped or reversed.
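How large these incentives are in local terms can be seen by relating the sums just quoted; all inputs come from the text, and the implied market exchange rate falls out of the ruble and dollar figures:

```python
incentive_rub = 250_000   # total pro-natal incentives per family (text)
avg_wage_rub = 8_530      # average monthly wage in 2005 rubles (CBRF 2006)
usd_value = 9_300         # approximate dollar value quoted in the text

months_of_wages = incentive_rub / avg_wage_rub
implied_rate = incentive_rub / usd_value

print(f"incentive equals ~{months_of_wages:.0f} months of the average wage")
print(f"implied exchange rate: ~{implied_rate:.0f} rubles per dollar")
```

The package thus represents well over two years of average earnings, which underscores why the government saw it as a serious inducement despite the poor record of pro-natal programs elsewhere.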
Russia’s strengths have improved the country’s domestic situation and international influence far above the levels to which it sank during the post-Soviet convulsions, but its current weaknesses are most likely to keep it from reoccupying its once undisputed superpower position.
If, as is highly probable, Russia’s superpower resurgence fails to materialize, there is already an established trend that alone is powerful enough to accelerate the relative decline of Western economic and military power during the first quarter of the twenty-first century: China’s rise to superpower status.
Historians of dynastic China would say that return would be a more accurate description than rise. For about two millennia China was the preindustrial world’s largest economy. Maddison (2001) credited Asia, dominated by China and India, with some three-quarters of the global economic output at the beginning of the Common Era, two-thirds by the year 1000, and still nearly 60% by 1820. There is little doubt that under Qianlong (1736-1795), the longest reigning of all Qing dynasty emperors, it was on average more prosperous in per capita terms than England or France (Pomeranz 2001). That was surely the emperor’s perception as he tersely dismissed the British offer to trade and ordered George III to “tremblingly obey” his warnings (Qianlong 1793). Half a century later, Britain inflicted the first Western defeat on China, and soon afterwards the ancient empire was fatally weakened by the protracted Taiping rebellion, one of the key transformational mega-wars of the past two centuries (see chapter 2). The empire staggered on until 1911; its dissolution was followed by four decades of internal and external conflicts.
The establishment of Maoist China in 1949 did not end violence and suffering. Collectivization campaigns and anti-intellectual drives of the 1950s were followed by the world’s worst famine, overwhelmingly Mao-engineered, which claimed at least 30 million lives between 1959 and 1961 (Ashton et al. 1984; Chang and Halliday 2005). Then came the decade of the incongruously named Cultural Revolution (1966-1976), which ended only with Mao’s death. At the end of the 1970s, Deng Xiaoping began to steer the country toward economic pragmatism and reintegration with the world economy. This process actually intensified after the 1989 Tian’anmen killings as the ruling party kept its priorities clear: maintain firm political control by buying people’s acquiescence through any means that strengthen the age-old quest for wealth. China is thus finally reclaiming what its leaders feel is its rightful place at the center of the world: ReOrient, Frank’s (1998) apt label.
China’s rapid rates of economic growth (though not as rapid as indicated by the country’s notoriously unreliable statistics) have been the result of several interacting factors. Since the mid-1980s China has been receiving a rising influx of foreign direct investment, with the 2005 total surpassing $60 billion, compared to less than $4 billion invested by foreigners in India’s economy. China’s one-child policy cut the population growth rate; even so, the total rose from 999 million in 1980 to 1,308 million by the end of 2005 (NBS 2006).
The country thus has a huge pool of cheap and disciplined labor (about 760 million people in 2005, compared to 215 million in the EU and 150 million in the United States), which has been moving from the impoverished interior to coastal cities, where most of the new manufacturing plants are located. It is the largest and most rapid urbanization in history. China has followed the Japanese and South Korean example by promoting export-oriented, labor-intensive manufacturing, but rapid economic growth has also created a huge new domestic market for costly consumer items. Finally, China has shown a readiness to innovate, not only by setting up well-supported research facilities but also, unfortunately, through widespread infringement of intellectual property rights and massive commercial and industrial espionage.
Some 200 million workers have been deployed in China’s new manufacturing enterprises since 1980. The unprecedented conquest of global markets has helped to decimate U.S. and European manufacturing. China produces more than 90% of Wal-Mart’s merchandise and contributed just over $200 billion, or 26%, to the U.S. trade deficit of $767.5 billion in 2005 (USBC 2006). What a remarkable symbiosis: a Communist government guaranteeing a docile work force that labors without rights and often in military camp conditions in Western-financed factories so that multinational companies can expand their profits, increase Western trade deficits, and shrink non-Asian manufacturing (ICFTU 2005). More of the same is yet to come, and before this wave is exhausted, the country’s manufacturing will dominate the global market for common consumer items as completely as it now dominates Wal-Mart’s selection.
This economic surge has attracted a great deal of attention (Démurger 2000; Fishman 2005; Shenkar 2005; Sull and Wang 2005; Prestowitz 2005; Kynge 2006; Hutton 2006). Unfortunately, many of these writings fail to acknowledge some fundamental difficulties with Chinese statistics (inflated estimates and fabricated data are common), provide no long-term context of these necessarily temporary developments, and contain “abundant sycophancy, though nowadays the kow-tow is to the managed market system rather than to a single individual like Mao” (E. L. Jones 2001, 3).
Continuation of high growth rates would make China the world’s largest economy at some time between 2025 and 2040. Wilson and Purushothaman (2003) project China’s GDP (in exchange-rate terms) to surpass that of Germany by 2007 and that of Japan in 2016 and to reach the U.S. level by 2041 (fig. 3.16). But in per capita terms China would still be far behind the United States. By 2040 the two countries will have, respectively, about 1,430 and 380 million people, so identical GDPs would leave China’s per capita level at roughly one-fourth of the U.S. rate. A projected per capita GDP rate of about $19,000 per year (in 2003 monies) would make China as rich as today’s Greece. In PPP terms, China is already the world’s second largest economy, but in 2005 it was still less than half the United States’ size.
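The per capita arithmetic behind this projection reduces to the population ratio; a minimal sketch, using only the figures cited above:

```python
# Check of the 2040 GDP-parity scenario: if the two GDPs were identical,
# per capita incomes would differ only by the inverse population ratio.
# Population projections (millions) as cited in the text.
china_pop_2040 = 1430
us_pop_2040 = 380

parity_ratio = us_pop_2040 / china_pop_2040
print(f"China's per capita GDP at parity: {parity_ratio:.0%} of the U.S. level")
# roughly one-fourth of the U.S. rate, consistent with the text
```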
What will China do with this new power? During most of the 1990s, China’s external actions were seen as overwhelmingly mercantile as the country appeared to be preoccupied with employing its huge surplus rural labor force. But since then, China has become aggressive in a global quest for raw materials, in particular oil (Zweig and Bi 2005). Many commentators see the flood of China’s manufactured products as an entirely welcome trend (they keep Western inflation rates low), and many CEOs speak favorably about the strategic partnership of the United States with China. But some Chinese strategists and policymakers think differently. Their arithmetic is made clear by the following calculation, which I have heard most frankly expressed by a senior Chinese governmental adviser on strategic affairs. By 2020, China’s continuing high economic growth rate will allow it to spend on its military as much as the United States spends today, and this will make it a real superpower impervious to any threat or pressure.
This may not be wishful thinking. All these calculations depend on the conversion rates used to compare Chinese and U.S. GDPs. Official exchange rates pegged China’s GDP in 2005 at only about 18% of the U.S. total ($2.25 trillion versus $12.5 trillion), whereas the adjustment for PPP (up to $9.4 trillion) put China’s GDP at about 75% of the U.S. total (IMF 2006). Wilson and Purushothaman (2003) projected China’s 2020 (exchange-rate) GDP at 6.5 times the 2000 level, or about $7.1 trillion (in 2003 monies) compared to $16.5 trillion for the United States. They also estimated that the value of Chinese currency could double in ten years’ time if growth continued and the exchange rate were allowed to float. This adjustment would lift China’s 2020 real GDP close to $15 trillion, near the U.S. level at that time. With a higher share of it going to the military, China could indeed match U.S. defense spending by 2020. The Pentagon estimates that by 2025, Chinese defense spending will be as high as $200 billion (USDOD 2004). Again, multiplied by 2 to reflect the assumed currency revaluation, this gives a level above the U.S. FY 2004 defense budget.
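The sensitivity of these comparisons to the conversion rate comes down to a few divisions; a sketch with the figures as cited (in trillions of US$):

```python
# Exchange-rate vs. PPP comparisons of China's 2005 GDP with the U.S. total,
# and the projected effect of a currency revaluation on the 2020 figure.
us_gdp_2005 = 12.5
china_xr_2005 = 2.25   # exchange-rate terms
china_ppp_2005 = 9.4   # PPP-adjusted

print(f"exchange rate: {china_xr_2005 / us_gdp_2005:.0%} of U.S. GDP")   # 18%
print(f"PPP adjusted:  {china_ppp_2005 / us_gdp_2005:.0%} of U.S. GDP")  # 75%

# 2020 projection: $7.1 trillion at the current exchange rate; a doubled
# currency value would bring it near the projected $16.5 trillion U.S. level.
china_xr_2020 = 7.1
print(f"revalued 2020 GDP: ${china_xr_2020 * 2:.1f} trillion")  # $14.2 trillion
```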
Moreover, in contrast to the decades of the Cold War when the United States was in no way economically dependent on the Soviet Union, China is already helping to prop up the U.S. economy by supplying it with essential goods at cut-rate prices and by buying up part of ballooning U.S. debt. Not everyone sees this as threatening: some see peaceful economic cooperation (Zheng 2005; Zhu 2005); others, an unpeaceful expansion (Mearsheimer 2006), China’s becoming a strategic adversary of the United States, and a Sino-U.S. military contest in the Pacific (Kaplan 2005). Both views have a great deal of validity.
For example, a complete ban on China’s imports (if such a thing were practical) would bring a surge of products from other Asian exporters (South Korea, Taiwan, Vietnam, Indonesia, Thailand) but hardly a resurgence of U.S. manufacturing (many of its sectors are simply defunct). And anticipations of high military spending by China are based on its continuing a defensive and offensive military buildup that has been under way for years and on bellicose statements of some of its policymakers. But such views ignore a multitude of internal and external weaknesses that militate against China’s becoming a superpower during the next two generations.
All large, populous countries face limits and challenges, but in China’s case these checks are uncommonly numerous, and ignoring them is to repeat the mistakes made before 1990, when the West saw the USSR as a formidable superpower to be feared or Japan as inevitably becoming a global economic leader. I note here key trends concerning China’s population, economic progress, environmental degradation, and power of ideas. These trends, rather than the endlessly discussed possible outcomes of the China-Taiwan dispute (Bush 2005; Tucker 2005), will determine the reach and limits of China’s rise during the first half of the twenty-first century.
China’s population rests on an uncommonly uneven foundation. Since 1979 the traditional preference for sons has been exacerbated by the compulsory one-child policy, and this has left China with a shockingly aberrant sex ratio at birth. A 2001 national family planning and reproductive health survey showed total fertility below the replacement level and the ratio of males to females at birth at 123 (Ding and Hesketh 2006). The normal ratio of boys to girls (per 100 girls) is 106, the global mean (affected by Asia’s preference for boys) is 105, but some Chinese provinces are above 115, and a recent study found 20 rural townships in Anhui province with a ratio of 152 (Walker 2006; Wu, Viisainen, and Hemminki 2006).
This reality will disrupt China’s social fabric in several worrisome ways. Leaving aside the fundamental moral question (what happened to all those missing girls?), it condemns tens of millions of men to spouseless (shorter, less satisfying) lives. It has already led to waves of rural abductions of women by criminal gangs that supply brides to urban bachelors willing to pay a price. And a surplus of young unattached men, who are responsible for most of the crime in any society, could incline the country to contemplate foreign aggression with more equanimity (Hudson and den Boer 2003). A single war could go a long way toward returning China’s skewed gender ratio closer to normal.
China’s birth-planning policy will also result in a rapid aging of the country’s population (England 2005; Jackson and Howe 2004; Frazier 2006). In 2005 about 11% of China’s population was 60 years and older, compared to some 17% in the United States, but by 2030 the two shares will be equal, and China’s age structure will resemble that of Japan in 2005 but with only about one-fifth of its per capita income. By 2050, China will have more old people (about 30%) and a higher dependence ratio than the United States (fig. 3.17). The proportion of persons of working age in the total population, currently about 68%, will fall to 53% by midcentury, equaling the corresponding figure for the G-6 countries. China’s pension reforms already require transfers of large subsidies from the central government to local levels, and the possibility of a government default on a substantial part of the rising pension debt poses a threat to the country’s future social stability (Frazier 2006), particularly given the strongly held expectations regarding the state’s role in the provision of security.
This burden will be aggravated by the fact that some three-quarters of all Chinese have no pension plans, leaving tens of millions of young men responsible for two parents and four grandparents. The problem will be particularly acute in China’s villages (Zhang and Goza 2006). Unlike the cities, rural areas usually do not have any institutions to care for the elderly, but they have proportionately larger numbers of elderly people than urban areas, which attract a young labor force. As Dali Yang (2005) notes, the aging process will be felt during the next 15 years as the number of entry-level, low-skilled workers keeps on shrinking, making it more difficult to recruit migrant labor at depressed wages.
Economic reforms have employed tens of millions in new industries, transformed villages to large cities in less than a single generation, attracted enormous inflows of foreign investment, conquered global markets in many industrial categories, and elicited worldwide admiration of Chinese economic progress. But they have also been responsible for one of the world’s fastest increases of income inequality (Khan and Riskin 2001). They have brought poverty and marginalization, and created a massive urban underclass of uncounted destitute migrants and unemployed city workers (Solinger 2004). Economic reforms have created an elite enjoying excessive levels of private consumption and interested in maintaining the marriage of convenience between the unchecked power of the ruling party and illicit wealth (Pei 2006a; 2006b).
Tens of millions of peasants lead a precarious existence subject to the arbitrary actions of party leaders, state officials, and ambitious businessmen, including violent (and uncompensated) expropriation of their land and punishing taxation (Chen and Wu 2004; Friedman, Pickowicz, and Selden 2005). Poverty also keeps rural China unhealthy (Dong et al. 2005), and corruption is severe and endemic (Manion 2004; Wedeman 2004; Ying 2004). Transparency International (2006) puts China in the same class as family-run Saudi Arabia, hardly a sign of a progressive society aspiring to global leadership.
The degradation of the environment has been exceptional in China in its extent and intensity. Pre-1949 China was extensively deforested, suffering from heavy erosion and regional shortages of water: Maoist policies made all problems much worse and added enormous burdens of industrial air and water pollution even as its propaganda was brainwashing Western admirers with tales of exemplary environmental achievements. I will never forget the disbelief and doubts with which my first survey of China’s environment (Smil 1984) was met in the United States and Europe: things could not possibly be that bad. Eventually China opened up most of its territory to visitors, Earth observation satellites provided more detailed coverage of many environmental phenomena, and the results of pollution monitoring of China became publicly available. During the 1990s there could be no doubt that the country had few rivals in the extent and intensity of its air and water pollution and chronic water shortages (Smil 1993; World Bank 1997).
But when people asked me how soon China would reach an environmental breaking point, I could not give unequivocal answers. As Deng Xiaoping’s reforms led China to quadruple its GDP within a generation, the country’s environment got better in some respects and worse in others, and this dynamic situation made it difficult to assess the net outcome. The post-Mao leadership adopted a number of measures that abolished the most irrational Maoist policies, including the conversion of orchards, wetlands, and slopelands to grain fields and a ban on private wood fuel lots. The quality of afforestation efforts improved quite impressively, major cities acquired at least primary waste water treatment, particulate air pollution from large stationary sources was controlled by electrostatic precipitators, and higher energy efficiency of modernized industries reduced the waste streams per unit of products.
Three decades after Mao’s death China is definitely greener and more efficient, but a close look makes it clear that we should be worried about the state of its environment more today than we were a generation ago. China’s food production and energy demand are the two key reasons. In 2005, affluent Western nations had about 800 million people, or some 12% of global population, but they cultivated nearly one-quarter of the world’s farmland, averaging about 0.5 ha per capita (FAO 2006). Western populations will grow by less than 5% by 2025, but because about three-quarters of their staple grain harvests are fed to animals to support high average meat and dairy intakes, North America, the EU, and Australia could forgo the cultivation of a large share of their farmlands by simply eating less animal protein. For decades China’s statistics greatly underestimated the country’s cultivated land, putting it at just 95 million hectares in 2000 (Smil 2004). Then the total was revised to 130 million hectares, about a 37% increase, but by 2005 it was reduced to 122.4 million hectares largely because of conversions to nonagricultural uses (NBS 2006).
This means that China, with 20% of the world’s population in 2005, had only 9% of the world’s farmland, or just a little over 0.1 ha per capita. The only two poor populous countries with less farmland per capita are Egypt and Bangladesh, but nearly 300 million Chinese already live in provinces where the per capita availability of arable land is lower than in Bangladesh. Moreover, as China undergoes the biggest construction boom in its history, the conversion of farmland to other uses as well as excessive soil erosion (over at least one-third of China’s fields), salinization, and desertification have steadily reduced the country’s arable land. Even without further acceleration of recent trends, the average per capita availability of farmland will be no more than 0.08 ha/person by the year 2025 or not much more than the Bangladeshi mean. And the loss of usable farmland will continue. For example, there are plans to build a national trunk highway system whose length, 85,000 km, will surpass that of the U.S. interstate highways by more than 10%.
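The per capita farmland figure follows directly from the revised totals; a quick check with the numbers given above:

```python
# Per capita availability of China's cultivated land after the 2005 revision.
farmland_mha = 122.4        # cultivated land, million hectares (NBS 2006)
population_millions = 1308  # population at the end of 2005

per_capita_ha = farmland_mha / population_millions
print(f"about {per_capita_ha:.2f} ha of farmland per person")
# just under 0.1 ha per capita, as stated in the text
```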
Deng Xiaoping’s agricultural reforms made China basically self-sufficient in food, and at a higher level than at any time in the country’s long history. It matches Japan in average daily per capita food availability, although not in nutritional quality (Smil 2004). But because of large regional disparities China still has several tens of millions of malnourished people. In order to eliminate this deficit, to produce adequate food for the additional 135 million people to be added between 2005 and 2025 (United Nations 2004), and to improve the quality of average intakes, China would have to expand its food output by at least 20% by 2025.
This would require an incremental food supply roughly equivalent to the total current food consumption in Brazil. Yet this food production would have to come from a steadily diminishing area of farmland because the country will not have the Japanese or South Korean option. Those two countries import most of their food in exchange for value-added industrial products. China could certainly produce enough manufactures to buy most of its food, but that amount of grain and meat is simply not available on the global market. In 2005, China produced about 428 Mt of food and feed cereals, whereas in 2004 the global exports of all grains amounted to 275 Mt (FAO 2006).
China could thus absorb all of the world’s grain exports and still satisfy less than two-thirds of its demand. The meat situation makes for an even wider gap: about 28 Mt of all meat varieties and processed meat products entered the global market in 2004, but China’s meat output was put officially at 77 Mt (NBS 2006). Even if all of the world’s traded meat were shipped to China, it would meet less than 40% of domestic demand. The official meat output total is almost certainly exaggerated, but this in no way changes the basic conclusion. China can never rely on imports to satisfy most of its enormous food demand. Cropping intensification is the only way to produce more food from less arable land, and irrigation is key. In absolute terms China already irrigates more land than any other country, and in relative terms it ranks only behind Egypt and Israel, but its water supply situation is already very precarious.
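The impossibility of relying on imports can be confirmed with the trade figures cited above (treating output as a rough proxy for demand, as the text does):

```python
# China's output vs. total world trade (million tonnes), figures as cited.
grain_output, grain_exports = 428, 275  # 2005 output; 2004 world grain exports
meat_output, meat_trade = 77, 28        # official 2005 output; 2004 world trade

print(f"all traded grain would cover {grain_exports / grain_output:.0%} of demand")
print(f"all traded meat would cover  {meat_trade / meat_output:.0%} of demand")
# under two-thirds for grain and under 40% for meat, as stated
```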
China has only 7% of the world’s freshwater resources. The provinces north of the Yangzi, with some 40% of population and GDP, have per capita water availability of only about 20% of the southern average, or just over 500 m3 per capita. In 2000, China’s nationwide mean of annual freshwater availability was about 2000 m3 per capita, and in 2030 (when China’s population is ~1450 million) this will fall to less than 1800 m3 per capita (and in the northern provinces to barely half of that). By contrast, global availability in 2000 averaged about 7000 m3 per capita, in the United States nearly 9000 m3 per capita, and in Russia, 30,000 m3 per capita (WRI 2000). Even if it were possible to use every drop of northern stream runoff, per capita water supply in northern China would be less than one-quarter of actual U.S. per capita water consumption.
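The 2030 figure is essentially the 2000 mean diluted by population growth. A rough check, assuming a 2000 population of about 1,270 million (a figure not given in the text) and a roughly constant freshwater resource:

```python
# Per capita freshwater availability (m3/year) falls as population grows.
per_capita_2000 = 2000
pop_2000 = 1270   # million; assumed for this sketch, not from the text
pop_2030 = 1450   # million, as cited

per_capita_2030 = per_capita_2000 * pop_2000 / pop_2030
print(f"2030: about {per_capita_2030:.0f} m3 per capita")
# below the 1800 m3 threshold cited in the text
```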
Actual per capita northern supply in China for all uses—agriculture, industry, services, and households—amounts to little more than the amount used in the United States to flush toilets and wash clothes, dishes, and cars. In addition, 90% of water sources in China’s urban areas are polluted, and in some northern provinces water tables have been sinking by several meters per year. Such are the northern water shortages that the Huanghe, the region’s principal stream, regularly ceases to flow long before it reaches the sea. Between 1985 and 1997 the river dried up in some sections every year. In 1997 the stream did not reach the Bohai Bay for a record 226 days, and the dry bed extended for more than 700 km from the river’s mouth (Liu 1998). Massive, costly, and environmentally risky south-north water transfer from the Yangzi to the Huanghe basin is a controversial (and hardly a lasting) solution (fig. 3.18) (Smil 2004; Stone and Jia 2006). The need to feed at least 100 million additional people, to satisfy rising urban demand, and to secure water for growing industries means that northern China’s strained water resources will be, even with the south-north transfer, under more pressure during the next two decades.
Nitrogen fertilizer is the other key ingredient of cropping intensification, but China is already the world’s largest user of synthetic fertilizers, with average per hectare rates four times as high as U.S. rates and with some double-cropped rice fields receiving annually in excess of 400 kg N/ha (Smil 2001). Higher rates of average applications may make little difference because the crop response to high applications of nitrogen has been declining, overall efficiency of nitrogen use in China’s cropping is very low—below 50%, and for rice often less than 30% (Cassman et al. 2002)—and nitrogen leaching causes widespread contamination and eutrophication of streams, reservoirs, and coastal waters.
Future energy demand will impose a tremendous pollution burden on China. In 2005, China’s consumption of primary commercial energy amounted to about 9% of the global total, again much less than the country’s population share. And less than 15% of the low per capita rate, equivalent to about half a tonne of crude oil per year, is used by households, compared to about 40% in the West. In order to join the ranks of developed nations China’s per capita energy consumption would have to be at least twice the current mean. China has been the world’s largest source of sulfur emitted from combustion since 1987 (Stern 2005), and only extensive desulfurization of flue gases could prevent it from producing an even larger share. Nitrogen oxides from large power plants and rising combustion of refined fuels, in particular from vehicular traffic (crude oil demand tripled between 1990 and 2005) will increase the levels of semipermanent photochemical smog that aggravates respiratory illnesses and reduces crop yields (Aunan, Berntsen, and Seip 2000).
In addition, by 2007 China had already become the world’s largest emitter of greenhouse gases, and it will play a crucial and contentious role in any effort to stabilize and reduce their generation. The most appealing nonfossil alternative has its own environmental problems. Accelerated development of hydroelectric generation, exemplified by the controversial Sanxia mega-project (Smil 2004), has caused extensive flooding of high-yielding farmland, mass population resettlement, and rapid reservoir silting. Even a partial quantification of China’s environmental burdens reveals a considerable impact. Economic losses attributable to China’s environmental degradation have been conservatively quantified as equal every year to 6%-8% of the country’s GDP, or almost as much as annual GDP growth (Smil 1996; World Bank 1997).
Finally, there is the intangible but critically important power of ideas. No aspiring superpower can do without them, and it can be argued that they are as important as economic or military might. In this respect, China has no stirring offerings; the power of the ruling party still derives from stale and rigid Marxist-Maoist tenets. As for Beijing’s “socialist market economy with Chinese characteristics,” this is only a label for a mixture of relatively free enterprise and continued party control, a rather unoriginal idea with elements copied from Japan, Taiwan, and Singapore. Even more fundamentally, China has yet to face old deep internal wounds; official government policy still silences any probing discussions of the two greatest catastrophes that befell China after 1949, the world’s largest, Mao-made, famine (1959-1961) and the Cultural Revolution (1966-1976).
Postwar Germany has faced the horrors of the Third Reich, and it has worked in many ways to atone for its transgressions. Russia began to face its terrible Stalinist past when Khrushchev first denounced his former master (in 1956), opened the gates of the gulag, and had the dictator’s corpse removed from the Red Square mausoleum. But the portrait of Mao still presides over the Tian’anmen, hundreds of his statues still dot China’s cities, and Maoism remains the paramount ideology of the ruling party. This amnesia is hardly a solid foundation for preaching moral superiority. And as for serving as a social and behavioral model, China—despite (or perhaps because of) its ancient culture, and in a sharp contrast with the United States—has little soft-power appeal to be a modern superpower of expressions, fashions, and ideas.
Its language can be mastered only with long-term devotion, and even then there are very few foreigners (and fewer and fewer Chinese) who are equally at ease with the classical idiom and spoken contemporary dialects. Its contemporary popular music is not eagerly downloaded by millions of teenagers around the world, and how many Westerners have sat through complete performances of classical Beijing operas? China’s sartorial innovations are not instantly copied by all those who wish to be hip. Westerners, Muslims, or Africans cannot name a single Chinese celebrity. And who wants to move, given a chance, to Wuhan or Shenyang? Who would line up, if such an option were available, for the Chinese equivalent of a green card?
In the realm of pure ideas, there is (to choose a single iconic example) no Chinese Steve Jobs, an entrepreneur epitomizing boldness, risk taking, arrogance, prescience, creativity, and flexibility, a combination emblematic of what is best about the U.S. innovative drive. And it is simply unimaginable that the turgid text of the country’s Communist constitution would ever be read and admired as widely as is that hope-inspiring 1787 document, the U.S. Constitution, whose stirring opening, I assume, you know by heart. Here is the first article of China’s 1982 constitution:
The People’s Republic of China is a socialist state under the people’s democratic dictatorship led by the working class and based on the alliance of workers and peasants. The socialist system is the basic system of the People’s Republic of China. Sabotage of the socialist system by any organization or individual is prohibited.
Those who laud the new China might re-read this a few times. And anybody familiar with today’s China knows how eagerly the Chinese people themselves imitate U.S. ways even as they profess nationalistic, anti-American fervor.
This brings me to the hardest task of all national assessments—a look at prospects for the United States. I tip my hand before I begin. Even a problem-ridden China, a self-absorbed Europe, a faltering Russia, and a relatively nonconfrontational Islam would not guarantee the continued global primacy of the United States. External forces are important, but as with all great states, a great danger is the rot that works from within.
The United States is a superpower in gradual retreat. Its slide from global dominance has been under way for some time, but in the first years of the twenty-first century many components of this complex process have become much more prominent, coalescing into a new amalgam of worrisome indicators that point unequivocally toward a gradual decline.
I emphasize gradual. Unless the country sustains a massive unanswered nuclear attack (an event of negligibly low probability; its nuclear triad guarantees devastating retaliation), there is no way its power can vanish instantly. Nor is it easy to come up with rational scenarios (large-scale urban terrorist attacks, collapse of the country’s economy that would leave other major players unaffected) that would see its power drained away in a matter of months or years. Its hegemony will devolve at varying speeds and in a nonlinear fashion across decades.
Using an irresistible analogy with Rome, and assuming the end of that Western empire at 476 C.E., we cannot know if the United States is already at a point comparable to when the Roman legions began withdrawing from England (383 C.E.) or when Diocletian first divided the empire (284 C.E.), but it is surely not at a point comparable to the time before the Roman empire’s extensive conquests in the Middle East (90 B.C.E.). Signs of ennui, of unmistakable overstretch, classic markers of an ebbing capacity to dominate, have been noted by historians and political commentators (Nye 2002; Ferguson 2002; Johnson 2004; Cohen 2005; Merry 2005). The title of Wallerstein’s (2002) article, “The Eagle Has Crash Landed,” referring to the 1969 moon landing at the pinnacle of U.S. power, perhaps most evocatively summarizes these analyses. They say the Pax Americana, the period of relative peace in the West since the end of World War II coinciding with the dominant military and economic position of the United States in the world, is over.
The limits of U.S. military power have already been demonstrated, for instance, by the wars in Korea and Vietnam and an ill-fated 1992-1993 Marine mission in Somalia. But the attacks of 9/11; the occupation of Iraq; Iran’s bid for the Middle Eastern leadership; a defiant North Korea; a newly confident Russia, economically powerful and assertive; a militarizing China and Europe, more reluctant than ever to follow almost any U.S. initiative—all of this has left the United States in such a bind that no Houdini-like contortions can release it with its power and global influence intact.
Other foreign challenges demonstrate the limits and essential irrelevance of U.S. military power, and these realities leave policymakers with no appealing options. Foremost among them is the inability to prevent millions of illegal immigrants, especially Mexicans, from crossing the country’s borders, an influx that divides the populace along many fault lines (Hanson 2005). The current immigration challenge is not comparable with the task of absorbing massive waves of immigrants from Europe between 1880 and 1914. Those inflows were controlled (the names of virtually all arriving immigrants can be checked in passenger lists and immigration records), and they were not dominated by any single national or religious group, a reality conducive to relatively rapid acculturation and integration.
By contrast, the United States has no control of the latest immigration wave. The Border Patrol will not even speculate how many border crossers evade capture (the official total of those captured crossing from Mexico was 1,241,089 in 2004), and the total number of illegal immigrants is not known with an accuracy better than ±5 million. More important, most Mexican immigrants actually do not feel that they are emigrating. In a 2002 survey, 58% of Mexicans maintained that the U.S. Southwest “rightfully belongs to Mexico” (Ward 2002). Reconquista of the U.S. Southwest (Aztlan, in the parlance of radical groups) has been in high and accelerating gear since the mid-1980s, and the fences that have been built or planned along parts of the U.S.-Mexican border are nothing but useless, pathetic attempts that do not (and will not) prevent anybody’s entry into the country (Skerry 2006).
Some fence designs actually facilitate ingress: horizontally laid corrugated steel panels unintentionally provide ladder-like supports for easier scaling, and in order not to “offend” Mexico, the flange on top of the fence points toward the U.S. side, making it easier to roll over. Huntington (2004) argued that the new immigration is fundamentally different from previous (transoceanic) inflows because it taps into a large pool of poorer, less well-educated migrants who come from a culturally resilient place and who thus may not be amenable to swift integration with U.S. culture. Illegal immigrants waving Mexican, not U.S., flags during their protest marches in U.S. cities only reinforce this observation.
Even if such views only partly identify the true situation, the political and economic consequences of such a change would be profound. After all, if everyday U.S. realities came to resemble Mexico’s, the change would be in a very undesirable direction: toward a narcotraficante economy governed almost entirely by corrupt arrangements. This would further alienate U.S. citizens from participating in the political process (the percentage of the U.S. population voting has dropped steadily and is now the lowest among all Western countries). A key comparison hints at the shift that could occur: Mexico ranks 70 on the 2006 Transparency International scale of corruption perceptions (the same as China), whereas the United States ranks 20, ahead of France and Japan. Everything from politics to policing would change if U.S. conditions came to resemble Mexico’s in this respect, a clear reason to worry about a rapid, nonassimilating Hispanicization, and indeed balkanization (as other immigrant groups are assimilating more slowly), of the country.
Even if there were no challenges from abroad, there is no shortage of homegrown trends whose continuation will undermine U.S. global primacy. A prominent economic concern is the country’s rising budget deficit and deteriorating current account balance. Despite a brief interlude of budget surpluses between 1998 and 2001, the cumulative gross federal debt doubled between 1992 and 2005 to about $8 trillion, or about 65% of the country’s GDP in the latter year (fig. 3.19) (USDT 2006). The country’s net international investment position was positive (in market value terms) from the end of World War I until 1988 (USBC 1975; Mann 2002). A subsequent slide brought it to about -$300 billion by 1995; more than -$1 trillion in 1998; -$1.58 trillion in 2000; -$2.33 trillion in 2001; and -$2.5 trillion in 2005, equal to about 20% of the country’s GDP in that year (BEA 2006).
By 2005 the annual current account deficit reached about $800 billion, or 6.5% of GDP, and it has been absorbing some two-thirds of the aggregate worldwide current account surplus, both unprecedented levels. Some critics see these deficits inevitably ending in a severe crisis. Cline (2005) asked how long the world’s largest debtor nation can continue to lead. Obstfeld and Rogoff (2004) concluded that after taking into account the global equilibrium consequences of an unwinding of this deficit, the likelihood of the collapse of the dollar appears more than 50% greater than their previous estimates. And Fallows (2005) used the term meltdown.
In contrast, others see this deficit as a sign of economic strength. Dooley et al. (2004) believe that a large current account deficit is an integral and sustainable feature of a successful international monetary system. R. W. Ferguson (2005) and Bernanke (2005) concluded that the deficit is primarily driven not by U.S. extravagance but by the slump in foreign domestic demand, which has created excessive savings, resulting in the large current account surpluses in Asia, Latin America, and the Middle East being invested in the United States. But Summers (2004, 8) asked, “How long should the world’s greatest debtor remain the world’s largest borrower?” and suggested the term “balance of financial terror” to describe “a situation where we rely on the costs to others of not financing our current account deficit as assurance that financing will continue.”
The countries that finance the U.S. current account deficit will not do so indefinitely. Even now, as Summers (2004, 9) stressed, “A great deal of money is being invested at what is almost certainly a very low rate of return. To repeat, the interest earned in dollar terms on U.S. short-term securities is negative.” In addition, the investor countries with fixed exchange rate regimes (most notably China; its revaluations of the yuan have been, so far, an inadequate adjustment) are losing domestic monetary control. Summers also noted that a similar kind of behavior was behind Japan’s excesses of the 1980s as “much of the speculative bubble . . . that had such a catastrophic long-run impact on the Japanese economy was driven by liquidity produced by a desire to avoid excessive yen appreciation.” A house of cards comes to mind when thinking about these arrangements.
Are these the fiscal foundations of a superpower? By October 2007 the two largest holders of U.S. Treasury securities were Japan, with about $592 billion, and China, with about $388 billion (USDT 2007). Hence one can think of China as financing the U.S. Department of Defense bill for forces and operations in Iraq and Afghanistan for almost three years, and of Japan as covering the U.S. budget deficit for almost two years. One can, but is such thinking correct? Not according to Hausmann and Sturzenegger (2006), who deduce that the United States must have more foreign wealth than is apparent (“dark matter”) and conclude that it is actually still a net creditor nation. This is their way of explaining how, despite its enormous accumulated deficit, the country still earns more on its foreign assets than it pays out to service its foreign debt and has no current account deficit. U.S. Treasuries still have the highest rating, and the dollar is still the global standard.
Given the disagreement among economists regarding even the near-term outlook (ranging from the dollar’s imminent collapse to a comfortable continuation of rising deficits that might actually be “dark matter” surpluses), I think it is more useful to point out the two most worrisome features of the U.S. trade deficit: the degree to which it has become embedded and hence impervious to any near-term reversal, and its consequences for the country’s position as a global leader in manufacturing and in scientific and technical innovation. Both are perilous trends; both steadily diminish the country’s great power status, and neither can be denied, argued away, or eliminated by creative accounting.
After decades of trade deficits, caused by the demands of a rapidly developing post-Civil War economy with its huge infrastructural needs, the United States began to run a surplus on its foreign trade in 1896 and maintained it through economic cycles and wars for the next 75 years. An uninterrupted but fluctuating sequence of deficits followed between 1974 and the late 1990s, and then came a free fall. The 1997 deficit more than doubled in just two years, and the record 1999 deficit nearly doubled by 2003 and then grew by another 24% in 2004 and 17% in 2005, when it stood at nearly four times its 1996 level (fig. 3.20).
This transformation has not been merely a matter of high consumer spending or, as some economists maintain, a benign unfolding of a new global system. Continued enlargement of the trade deficit is unsustainable: at recent growth rates, in less than 25 years the country would be importing more every year than its entire current GDP. This long-term trend will result in structural deficiencies and strategic vulnerabilities. A resolute administration can drastically cut, even eliminate, the budget deficit, but the elimination of the trade deficit is highly unlikely.
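The unsustainability claim rests on simple compound-growth arithmetic, which can be sketched as follows. The GDP and import figures and the growth rate below are rough illustrative assumptions (not exact BEA data), chosen only to show the order of magnitude of the trend.

```python
# Back-of-envelope check: if U.S. imports keep growing at recent rates,
# how many years until annual imports exceed today's entire GDP?
# All figures are illustrative approximations, not exact BEA statistics.

def years_until(start: float, target: float, annual_growth: float) -> int:
    """Years of compound growth at `annual_growth` until `start` reaches `target`."""
    years = 0
    value = start
    while value < target:
        value *= 1 + annual_growth
        years += 1
    return years

gdp_2006 = 13.0      # trillion USD, approximate U.S. GDP in the mid-2000s
imports_2006 = 2.2   # trillion USD, approximate annual goods and services imports
growth = 0.08        # ~8%/yr, roughly the mid-2000s import growth rate (assumed)

print(years_until(imports_2006, gdp_2006, growth))  # 24, i.e. less than 25 years
```

At an assumed 8% annual growth the crossover arrives in 24 years, consistent with the “less than 25 years” figure in the text; faster growth would shorten the span further.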
The U.S. trade deficit was $494.9 billion in 2004, $611.3 billion in 2005, and $716.6 billion in 2006 (USBC 2006). Almost exactly 40% of the 2005 deficit was due to imports of industrial supplies and materials. The country has become steadily more dependent on foreign energy, metals, chemicals, and other raw materials, and in 2005 it bought 2.2 times as much of these goods as it sold abroad. In 2005 crude oil imports accounted for one-quarter of the total trade deficit. In January 2006, President George W. Bush, a Texas oil man, acknowledged the country’s addiction to oil purchased from a Venezuelan Castroist (11% of U.S. oil imports in 2005), Nigerian kleptocrats (the country ranks next-to-last on the global corruption perception index) who supplied more than 8% of U.S. oil in 2005, and a country run by a family of princes in Riyadh, Saudi Arabia, that supplied more than 11% of U.S. oil in 2005 (EIA 2005).
After adding other fossil fuels and electricity, the total deficit in the U.S. energy trade amounted to $265 billion in 2005. In order to satisfy its excessive energy use, the country thus had an embedded annual deficit of about one-quarter trillion dollars, and with higher world prices this has increased substantially since 2005. Still, these large imports need not signify any large overall trade imbalance: Japan and South Korea import virtually all of these basic supplies and materials but then turn them into value-added products.
By contrast, the United States has a huge deficit in the trade of manufactured goods, mostly because of its imports of automobiles, parts, and engines, which in 2005 were about 2.4 times the value of the country’s automotive exports. In addition, by 2005 the United States had a deficit in 17 out of 32 major categories of capital goods (led by computers and their accessories: $45.5 billion sold, $93.2 billion bought) and in 26 out of 30 categories of consumer goods. The aggregate deficit in the trade of consumer goods was $291 billion; the only (small) surpluses were in toiletries and cosmetics, books, records, tapes and disks, and numismatic coins (USBC 2006). All affluent countries have seen a relative decline in manufacturing, but in the United States these losses have gone further than in Europe or Japan. The sector employed about 30% of the labor force during World War II, but by 2005 the share was less than 12%, compared to 18% in Japan and 22% in Germany (USDL 2006).
Many economists, unconcerned about this decline, repeat the mantra of new jobs in services providing greater prosperity. But this ignores both the large embedded trade deficits and the increasing strategic vulnerability of a country that has already lost entire manufacturing sectors. Virtually every kind of mundane manufacturing has either completely or nearly vanished (cotton and wool apparel, cookware, cutlery, china, TVs) or has been reduced to a small fraction of its former capacity (furniture making, writing and arts supplies, toys, games, sporting goods). In consumer goods categories alone, these losses add up to an embedded deficit of about $200 billion per year. Car making, the largest manufacturing sector, has been in an apparently unstoppable slide, as the adjustments made by the country’s leading automakers are never enough to prevent further losses of market share. In 1970, U.S. companies produced 85% of all vehicles sold in the country; in 1998 the share was still at 70%, but by 2005 it had fallen below 60%, and it may drop below 50% even before 2010. Automotive imports add almost $150 billion per year to the chronically embedded deficit.
Dominance in high-quality steel production, still the cornerstone of all modern economies, is a thing of the past in the United States. In 2005, U.S. Steel was the seventh-largest steel company in the world, outranked not only by European and Japanese producers but by giants whose names are unknown even to well-educated Americans: Lakshmi Mittal’s company, South Korea’s POSCO, and Shanghai Baosteel (IISI 2006). And the United States has been losing the leadership of one of its last great high-tech manufacturing assets, its aerospace industry. Wrested from Europe during the 1930s with such iconic designs as DC-3 and Boeing 314 (Clipper), it was strengthened during World War II with superior fighters and bombers and extended after the war with remarkable military designs and passenger jets.
The latter category included the pioneering Boeing 707, introduced commercially in 1957; the Boeing 737, the most successful commercial jet in history; and the revolutionary Jumbo, Boeing 747 (Smil 2000). By the mid-1970s the U.S. aviation industry dominated the jetliner market, and its military planes were at least a generation ahead of other countries’ designs (Hallion 2004). The subsequent decline of the U.S. aerospace industry forced the commission examining its future to conclude that the country had come dangerously close to squandering the unique advantage bequeathed to it by prior generations (Walker and Peters 2002).
By 2000 only five major airplane makers remained in the United States; the labor force had declined by nearly half during the 1990s; Boeing was steadily yielding to Airbus in the global market (between 2003 and 2005 it sold fewer than 900 planes, compared to about 970 for Airbus); most top military planes (F-16, F/A-18, A-10) had been flying for more than twenty-five years; and U.S. rocket engines were outclassed by refurbished Russian designs. In 2003, Airbus began intensive work on the A380 superjumbo jet (Airbus 2004). Boeing went ahead with the smaller but superefficient 7E7 (787, Dreamliner), but only after it subcontracted much of its construction to foreign makers (Boeing 2006). By 2005 the U.S. surplus in trading civilian aircraft, engines, and parts was less than $19 billion, considerably less than the imports of furniture or toys. So much for high-tech manufacturing making up for the losses in traditional sectors.
A rapid reduction of the country’s traditionally large surplus in trading agricultural commodities is the most recent, and perhaps most surprising, component of the U.S. decline. The country has been a net exporter of agricultural commodities since 1959, and the export/import ratio of the value of this trade was 1.83 as recently as 1995. By 2000 it was 1.30; in 2004, 1.18, with the surplus falling below $10 billion a year; and in 2005, 1.08. This trend points to deficits before 2010 (Jerardo 2004; USDA 2006). As a result, by 2005 the net export gain from the huge agricultural sector (just $4.74 billion) paid for less than half of the country’s rising imports of fish and shellfish in that year (worth $11.9 billion).
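The “deficits before 2010” extrapolation can be reproduced with a simple least-squares fit of the export/import ratios quoted above (a ratio below 1.0 means a net agricultural trade deficit). The linear fit is an illustrative sketch, not necessarily the method used by Jerardo (2004) or the USDA.

```python
# Least-squares linear fit of the agricultural export/import ratios quoted
# in the text, to check when the fitted trend crosses 1.0 (trade balance).

def linear_fit(points):
    """Ordinary least squares: return (slope, intercept) for y = a*x + b."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    return slope, my - slope * mx

# (year, export/import ratio) pairs as given in the text
ratios = [(1995, 1.83), (2000, 1.30), (2004, 1.18), (2005, 1.08)]
a, b = linear_fit(ratios)
year_of_deficit = (1.0 - b) / a  # year at which the fitted ratio falls to 1.0

print(round(year_of_deficit))  # 2006, i.e. well before 2010
```

The fitted slope is about -0.07 per year, so the ratio crosses 1.0 around 2006, consistent with the claim that the trend points to deficits before 2010.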
When the accounting is done in terms of a larger, actually more meaningful, category that comprises all traded crops (food and feed), animal products, processed foods, and beverages (including wine, beer, distillates, and liqueurs), the United States already had a substantial trade deficit in 2005 (about $9.1 billion), and it is almost certain that the country will become increasingly dependent on imported food. In 2005 imports (by weight) claimed nearly 15% of all food consumed, with individual shares surpassing 80% for fish and shellfish, 30% for fruits, juices, nuts, sweeteners, and candy, and 10% for meat and processed meat products. This has been an inexplicably unreported shift with enormous long-term impacts. After borrowing abroad to support its need for crude oil, gadgets, and toys, the United States will be borrowing just to feed itself. Autarky in food is an impractical and unattainable goal for many small nations, but were the United States to become a permanent net food importer, who would be left to do any net exporting—Brazilians and heavily subsidized French farmers?
There is no possibility that the trade in services can ever make up for the massive deficits in traded goods. Although the annual surplus in service trade rose from $52 billion in 2003 to $66 billion in 2005, all that the latter sum could do was lower the overall trade deficit by about 8% (from $783 billion to $717 billion). In a world where the theft of intellectual property is a ubiquitous activity that is not only rarely prosecuted but tacitly condoned by many governments (particularly in Asia, with China being the most blatant transgressor), U.S. trade salvation will not come from royalties and licensing fees. Nor will it come from tourism, unless the U.S. dollar becomes incredibly cheap. Given an embedded trade deficit of about $0.5 trillion per year, which cannot be swiftly eliminated by policy changes, and a growing dependence on energy imports, it is hard to imagine how this dismal trading trend could be reversed short of a substantial devaluation of the dollar and a precipitous decline in the standard of living.
Even if some form of “dark matter” assets existed, one would still have to ask, How long will the Japanese, Chinese, Middle Eastern and other OPEC creditors, and Caribbean banking centers holding recycled drug trade profits be willing to extend credit to the superpower at a cost to themselves? For pessimists, the only remaining uncertainty is how the end will come. Will it be a controlled, drawn-out fading or a precipitous fall? In any case, a United States forced to live within its means would be a very different place. It has already turned into a very different place because of a multitude of interconnected demographic, social, and behavioral trends that have weakened it from the inside.
The aging of the U.S. population, although far less pronounced than in Europe or Japan, and a multitude of social ills will only accelerate the inevitable transformation of the country. The aging of the population will have effects on health budgets, pensions, and the labor market similar to those in Europe or Japan, but given much more widespread stock ownership, its principal undesirable effect may be on the value of long-term investments. There will be too few well-off people in the considerably smaller post-boomer generations to buy the stocks (and real estate) of aging affluent baby boomers at levels anywhere near peak valuations. That is why Siegel (2006) expects that stock prices in rich countries could fall by up to 50% during the coming decades unless newly rich investors from Asia, the Middle East, and Latin America step in. But that intervention would have to be on a truly massive scale: Siegel’s calculations indicate that for rich countries’ stocks to perform at their long-run historic rate, most multinational corporations would have to be owned by non-Western investors by 2050.
Underperforming U.S. education leads to well-documented dismal scores in international assessments of math and science. For example, the latest international comparison of mathematical skills in 29 OECD countries puts U.S. 15-year-old students only above those of Portugal, Greece, Turkey, and Mexico (OECD 2003). Despite the decades-long war on drugs, street prices have remained low and stable (or falling), and distribution (particularly of highly addictive methamphetamine) is more widespread. Cross-border seizures and the potency of marijuana are at unprecedented levels, and the overall trend in drug use has shown no decline since 1990 (fig. 3.21) (White House 2006). Drugs are also a major reason for the country’s extraordinarily high rate of incarceration and for what some commentators call America’s prison nightmare (DeParle 2007). Troubled health care and pension systems may be headed toward bankruptcy, yet endless congressional debates cannot offer effective solutions (Kotlikoff and Burns 2004; Béland 2005; Derickson 1998).
A visible deterioration of the nation’s physical fitness makes the United States the most obese and physically unfit nation in Western history (CDCP 1998). No wonder, when the nation is the undisputed leader in supersized restaurant meals and highly popular, exceedingly disgusting, contests in competitive eating where the top competitors consume in a matter of minutes more than 20 grilled cheese sandwiches or hot dogs (Fagone 2006). Recent attempts to portray obesity as a fairly innocuous condition and grossly overweight people as victims are ludicrous. The links between obesity and greater morbidity are too well established, and in most cases obesity is a matter of choice, not of somatic inevitability. In 1991 only 4 of 45 surveyed states had an obesity prevalence rate of 15%-19%; none was over 20%. But by 2001 the nationwide mean was 20.9%, with 29 states having rates of 20%-24% (Mokdad et al. 2003).
Nader (2003) included displays of gluttony among his four signs of societal decay, together with electoral gerrymandering, ubiquitous corporate crime, and corporate excess. The flaunting of possessions has become a new norm, exemplified by the size of new homes and vehicles. By 2005 the national average size of new houses had reached 220 m2 (12% larger than a tennis court and 12% above the 1995 mean). The average for custom-built houses rose to 450 m2, houses in excess of 600 m2 were not uncommon, and the megastructures of billionaires claimed as much as 3000 m2 or even more than 4000 m2 (Gates’s house cluster measures 4320 m2). The largest vehicles used as passenger cars climbed past 2 and then 3 tonnes, with the most massive models weighing nearly 4.7 t (Hummer H1) and almost 6.6 t (CXT, designed to be just 1 pound lighter than the weight requiring a trucking license).
These displays of private excess have been accompanied by spreading public squalor (abandoned housing, derelict areas of former manufacturing compounds) and the unraveling of essential social supports, including one that people actually paid for, their pension plans. Many corporations (led by airlines, car parts companies, and steel companies) have defaulted on their pension responsibilities, transferring them to the Pension Benefit Guaranty Corporation, whose deficit reached about $23 billion by 2005.
One does not have to watch inane TV shows or read supermarket tabloids in order to feel that public mores and tastes are driven toward the lowest common denominator. This process entails a pathetically emotive Oprah-ization of America, an eager discarding of privacy, proudly vulgar displays of immature behavior, an endless obsession with celebrities (and a mass yearning to become one of them, fueled by so-called “reality” shows), rampant legalized gambling, and frenzied purchases of lottery tickets.
All of these symptoms have been discussed at length in vibrant and cacophonous electronic and printed media; scathing self-examination and self-criticism show no signs of decline (God bless America!). But this will make no difference as long as there is no commitment among the policy-making elites to address at least some of these matters in a practical, effective fashion, and as long as that commitment is not combined with a mature willingness among the country’s population to live at least closer to (if not entirely within) their means, to curb the worst excesses, and to think and act as if the coming generations mattered. Regrettably, the second condition is even less likely to be met than the first, and both could become real (perhaps) only if the country were thrown into a truly deep financial and existential crisis. But by that time it might be too late for the United States to regain its great power status. It is very clear that the country is living on borrowed time and yet shows no intention of doing otherwise.
As with every change of global leadership, the U.S. retreat from the top position will be widely felt, but because of the country’s pervasive impact (whether positively or negatively perceived), it will have many consequences whose definite contours we cannot foresee. For some four generations the country has been the world’s dominant agent of change, an unselfish savior, a reluctant arbiter, and a brash trendsetter. At the same time, it has never been averse to realpolitik, as attested by détente with the USSR, support of dictatorial regimes in Latin America, Africa, the Middle East, and Asia, and rapprochement with Communist China. But in its fundamentals, and very often in its execution, U.S. foreign policy has been imbued with moral (and moralistic) convictions about the duty to act as a global promoter of freedom, a call to action that unites John F. Kennedy and George W. Bush.
The dominant call would be very different if it came from the mullahs speaking for a resurgent Islam, from the énarques running a new United States of Europe, from the councils of a rejuvenated Russian military, or from confident politburo strategists in Beijing. Before any of these alternatives might happen, the role of the United States must weaken sufficiently, perhaps even to the point where it would cease to be the first among equals.
As I said earlier, I do not intend to offer any forecasts or suggest firm dates. There seem to be only two things that could be done. The first is to look at the fates of past powers and see if there are, despite the obvious singularities of every case, any helpful conclusions regarding their longevity in general, and the rate of their decline in particular. The second is to look at the most likely new configuration of power on top, appraising the chances for today’s other key international players to become the new trendsetters during the first half of the twenty-first century.
Falling from a position of power, dominance, and affluence (in absolute terms or relative to other contenders at a given time in history) is always a painful process, but the rate of decline makes a great difference. Think of Germany’s accelerated rise and demise, as the Thousand Year Reich was compressed between 1933 and 1945; or of the USSR’s demise and Russia’s ensuing pitiful economic and social position during most of the 1990s. As already noted, barring an unanswered (and hence extremely improbable) nuclear attack, such a precipitous fall is not in the cards for the United States. Under normal circumstances large and powerful states take time to unravel.
How much time? The Roman experience, even when limited to the Western Empire, is unlikely ever to be repeated. Gibbon’s span of decline and fall (180-476 c.e.) amounts to nearly 300 years, but even this figure is dubious: the empire’s cohesion was a shadow of its former self, and its dependence on foreign legions had become critical, long before its formal demise. The durabilities of other major premodern empires offer no better guidance because the common attributions of their average duration, such as those used by Ferguson (2006), are similarly questionable. The Holy Roman Empire was legally dissolved by Napoleon in 1806, but to date it between 800 and 1806 is misleading; for most of its existence it was not a coherent entity. And the Ottoman empire actually spent more than half of its duration (1453-1918) in deep decay.
As for modern hegemonies, the British one, from its peak in 1902 (the end of the Second Boer War) to its end in 1947 (the quitting of India), took only 45 years to unravel. (The violent Malayan and Kenyan conflicts after the British left India merely wrapped up Britain’s retreat.) The Soviet retreat took almost exactly the same amount of time, from the World War II victory in 1945 to Yeltsin’s disbanding of the union in 1991. But the less-than-half-century duration of the British and Soviet retreats provides no directly applicable insight into the likely duration of the United States’ place on top. Adding 45 to 1945 gives 1990, yet by 1991, after its victory in the Gulf War (and with the USSR falling apart), the United States appeared to be at the apex of its military power rather than on its way out.
There is no doubt that the closing months of 1945, after the defeat of Germany and Japan and after the country became the sole nuclear power, marked the peak of U.S. military and economic power (fig. 3.22). At that time the country was responsible for 35% of the world’s economic product; this share fell (adjusted for inflation) to a bit above 30% in 1950 (as the USSR, Europe, and Japan began to recover from war damage), and to just below 25% in 1970. During the following two decades it remained stable, but by 2005, after a generation of rapid economic growth in China, it slipped to about 20% (IMF 2006) as China surpassed Japan to claim second place with about 15% of the total.
But the Gulf War could be seen as just one last delayed victory, a blip briefly interrupting the declining trend, with a number of defensible dates for its onset. An economist might opt for August 15, 1971, when the Nixon administration ended more than a quarter century of the Bretton Woods global monetary regime by stopping the convertibility of dollars to gold. A geostrategist might date the beginning of the U.S. decline to April 30, 1975, when North Vietnamese tanks drove into the former U.S. embassy in Saigon, ending a decade-long war in Southeast Asia with a U.S. defeat, the country’s first in its 200-year history. (The Korean War was a draw, a return to the status quo ante, and its eventual outcome was the creation of the world’s seventh-largest economy, a success by any measure.) But arguments can be made that in the long run the Vietnamese victory was not a strategic defeat because Southeast Asia did not become a Communist stronghold and Vietnam is now following China in integrating its trade-oriented economy into the global market (it even welcomed the U.S. president on a state visit in 2006).
That is why many would argue for 9/11 (“the date everything changed”) as the beginning of the end of U.S. supremacy. Admittedly, “everything” is a hyperbolic statement, but the notion that nothing much changed on that day (Dobson 2006) is indefensible. At the same time, I am not sure that the date fits into the lineup of events marking the decline of U.S. power. A generation from now, 9/11 may seem not so much a marker of an unstoppable retreat as the beginning of a temporary reassertion of U.S. power.
I note only that downward trends are no less susceptible to sudden shifts (which may lead to temporary changes and readjustments) than are upward trajectories. Moreover, the magnitude and complexity of interrelated financial, political, and strategic arrangements that bind the United States with the rest of the world provide a significant degree of buffering that makes gradual moves much more likely than any sudden catastrophic shifts.
Setting the arguable duration of U.S. dominance aside, the most important conclusion from reviewing the likely national trends is that U.S. global leadership is in its twilight phase, approaching the latest of the infrequent power transitions taking place on this scale. Indeed, an argument can be made that the coming transition will be unprecedented because the United States was history’s first true global power. None of the great powers of antiquity, not even the Roman Empire or the Han dynasty of China, were global powers; their reach was far too restricted for that. The history of global power began only with Europe’s grand maritime expansion, which gradually brought all continents into interaction.
Before the United States assumed the leading position, there were states with far-flung possessions and commercial interests, but given the relatively weak level of global economic integration (and, in earlier eras, the absence of instant communication and the impossibility of rapid long-distance projection of power), their place on top was not of the same import as the U.S. primacy. This quasi-global role was played by a single state only twice between the beginning of the early modern era and the U.S. ascent: by Spain from 1492 to 1588 (from the conquest of Granada and the Atlantic crossing to the defeat of the Armada) and, on a much grander scale, by Britain from 1815 to 1914 (from Waterloo to the trenches of World War I). During the seventeenth and eighteenth centuries there was no clear hegemon. Powerful Qing China (particularly under the emperor Kangxi, 1661-1722) dominated in the East and pulled in the bulk of the world’s silver, but it did not engage the rest of the world either politically or militarily, while the West experienced a prolonged competition among waning Spain and expanding France and England.
But which power will fill the gradually expanding vacuum left by a strategically retiring, economically much weakened, technically less competent, more corrupt (looking more like Estados Unidos Mexicanos) United States, with its receding memories of superpower glory? Reviewing the likely national trends, I conclude that none of the possible candidates is likely to do that in ways even remotely similar to the U.S. post-1945 dominance. By 2050 the Muslim world of some 2 billion people will almost certainly wield more influence than today, but three major factors will prevent its emergence as a coherent, trendsetting actor on the global stage.
First, its ancient religious discord, the more than 1,300-year-old sunnī-shī‘ī rift, is not going to disappear in a generation or two: that feud and its passions cannot be easily set aside.
Second, the diverse national interests composing the Muslim world have repeatedly overridden the characteristically flowery language of pan-Arabism, and they would be even harder to set aside to achieve a new Spain-to-Pakistan caliphate.
Third, Muslim countries have a multitude of internal troubles and economic challenges. In a few decades Iran’s oil production will be in steep decline; Indonesia’s output fell by one-third between 1991 and 2005; and it is very likely that the output of Nigeria (partly Muslim) will also fall. The overall economic progress of the Muslim world will be very uneven, further accentuating the current divide between de facto modern, relatively affluent states (Kuwait, Qatar, UAE, Oman) and overcrowded, unruly, and relatively very poor countries (Pakistan, Bangladesh). Such realities are not at all conducive to setting up a cohesive faith-based caliphate (or even fashioning an EU-like economically based entity).
Only wishful thinking can conjure the transmutation that would be required to remake aging Europe into a new consensual hegemon, and a resurgent (but still depopulating) Russia could not fill the multifaceted superpower niche. After all, even the much mightier USSR was never a true peer of the United States. Leaving its enormous natural resources and militarized economy aside, its industrial and agricultural mismanagement (the country could not feed itself and had to rely on imported U.S. grain), rigid political system, and lack of basic freedoms did not allow it to rise to that level. China will become the world’s largest economy (in absolute terms), but, as explained, its further rise will be checked by internal limits and external complications.
This leaves India as the only remaining plausible contender. Barring major (and highly unlikely) shifts in fertility, its global population primacy is only a matter of time. The United Nations (2005) medium forecast has India surpassing China in population by 2030 (1.449 billion vs. 1.447 billion) and reaching a total of 1.59 billion people (vs. 1.39 billion for China) in 2050. This primacy and India’s belated economic takeoff (following the relaxation of autarkic and nationalistic policies that the long-ruling Congress Party imposed for decades on India’s businesses) have led commentators to extol India’s long-term advantages. Several expect India to be right behind (and eventually ahead of) China in the global race to the top (Huang and Khanna 2003; Srinivasan and Suresh 2002; Rahman and Andreu 2006; Winters and Yusuf 2007). These are understandable but unrealistic expectations.
I have mustered a variety of reasons, ranging from economic to environmental to cultural, that militate against China’s becoming the world’s leading superpower. Analogous reasons weigh even more heavily against India’s occupying that place. A long array of indicators quantifies India’s relative weaknesses vis-à-vis China. Even after one discounts the official exaggerated claims, China is well ahead in terms of economic performance. In 2005 the country’s PPP GDP reached nearly $5.5 trillion (almost 10% of the global level), or more than $4,000 per capita. By contrast, India’s PPP GDP was about $2.3 trillion (roughly 40% behind Japan and about 4% of the global total), or about $2,100 per capita (World Bank 2007). And while China’s manufactures have captured more than 6% of the global market, India’s have less than 1%.
China’s rapid economic growth spurt put nearly 85% of the country’s population above the poverty line, but in 2005 one-third of all people in India remained below it. And I have noted an order-of-magnitude (1 OM) gap in foreign direct investment. India’s one notable positive indicator is lower income inequality (Gini index below 35 compared to China’s 45), but to a large extent this reflects, as was the case in Mao’s China, only more widely shared poverty. By most other measures India is a deeply unequal country because of its legacy of social stratification and exclusions (a de facto caste system persists), and as its economic growth accelerates, this inequality is almost certain to grow. More important, China is well ahead of India in the quality of human capital.
For some key indicators the differences are not just twofold (infant mortality) or threefold (in China ~15% of children younger than five years are stunted; in India ~45%) but even fivefold and sixfold (fig. 3.23). Low birth weight is an important predictor of future health; in China its share is 6%, in India 30%. China’s adult illiteracy is less than 7%, India’s more than 35%. In 2005, China’s share of people engaged in R&D was more than five times that of India (WHO 2006; UNDP 2006). And India’s rate of immunization against childhood diseases has actually been falling; for measles, it is well below the rate in Bangladesh and on a par with such modernization laggards as Haiti or Chad (IMF 2006).
Similar disparities are true insofar as modern infrastructures are concerned (see fig. 3.23) (World Bank 2006; Long 2006). Marginal differences include such important measures as access to improved urban sanitation (~70% in China, ~60% in India; India is a bit ahead in assured rural access to clean water). China has built more expressways in a year than India has since 1947; China’s per capita electricity-generating capacity is nearly three times that of India; and China’s modern subways, airports, and container ports are superior (e.g., in 2005 China’s container ports handled about 17 times the volume of India’s) (IMF 2006).
India will find it hard to catch up with China in average per capita terms because its population still grows much faster (1.5% vs. 0.7% in early 2000s); its total fertility rate (3.0 vs. China’s 1.7 in 2005) is projected to equal the Chinese level only after 2040. To be sure, India has advantages. On a 1-7 scale (1 best) India outranks China in political freedom (2 vs. 7) and civil liberties (3 vs. 6) (World Audit 2006). And India has eagerly embraced just about every facet of the new electronics economy, from call centers and accounts-processing facilities serving numerous Western businesses (Bhagat 2005) to the world-renowned software designers of Bangalore who, at least in the eyes of Indian experts and an adoring public, are far better than Silicon Valley’s.
At the same time, India’s electoral process has been violent and corrupt, and in general India has few equals in the level of corruption and bribery permeating its personal, governmental, and business spheres. On Transparency International’s (2006) corruption perceptions index (0-10 scale, with 10 best) China scores 3.4 (71st of 146 countries), whereas India scores 2.8 (90th), and Indian companies have the highest propensity to offer bribes when operating abroad.
India’s progress will necessarily be complicated by the country’s heterogeneity; its diverse cultures and religions and its often extreme political groups (including some of the world’s last Marxists, Communists, and Maoists) do not make for natural consensus politics.
There are potentially serious external factors that can slow down or derail India’s progress. Any substantial change of the monsoonal flow caused by global warming (delayed onset, higher variability, more violent rains) would have enormous consequences, as would the disappearance of a large chunk of Bangladesh under repeated storm surges of an elevated Bay of Bengal and the migration of tens of millions of people into India. Long-term water supply problems and the contamination of streams are as acute in parts of India as they are in China. The future course and outcome of the 60-year-old (now nuclear-armed) conflict with Pakistan cannot be foreseen, nor can the state of India’s Muslims in a world of radicalized Islam.
But even if everything goes rather well, the India of coming decades will be too preoccupied with its economic and social modernization, its quest to move from large-scale poverty and meager subsistence to a modicum of widely shared income and lifestyle security, to become a globally dominant superpower. By 2050, India will have the world’s largest population. It might have the world’s second-largest economy, but in per capita terms it will still most likely be no richer than Mexico is today, and its rich native cultural heritage of Hinduism and Sikhism will not become any more transferable abroad than it is today. That India will be a major global power in 2050 is obvious; that it will be a globally dominant superpower is wishful thinking.
Who is on top matters—be it as savior, hegemon, pacesetter, model, irresistible attractor, or brutal enforcer. The United States may have been one or the other of these to different nations at different times, but its retreat from such roles will not create a more stable world, particularly if there is no clearly dominant power or grand alliance. This conclusion is perhaps the easiest to defend: the demise of U.S. global dominance will not bring any multilateral balance of power. Conditions in the absence of a global leader in a world swept by the forces of globalization would resemble those following the retreat of Roman power that underpinned centuries of coherent civilization: chaotic, long-lasting fragmentation, inimical to economic progress, which would greatly exacerbate many of today’s worrisome social and environmental trends.
About 2 billion people already live in countries that are in danger of collapse (fig. 3.24), and there are no convincing signs that the number of failing states will diminish in the future. A century ago a failure or chronic dysfunction of a small (particularly a landlocked) state would have been a relatively inconsequential matter in global terms. In today’s interconnected world such developments command universal attention and prompt costly military and humanitarian intervention. Prominent recent examples include Afghanistan, Bosnia and Herzegovina, Congo, East Timor, Iraq, Liberia, Sierra Leone, Somalia, and Sudan. Were a number of such state failures to take place simultaneously in a world without a dominant power, who would step in to defuse at least the most threatening ones? As Ferguson (2004, 39) has warned, “Be careful what you wish for. The alternative to unipolarity would not be multipolarity at all. It would be apolarity—a global vacuum of power. And far more dangerous forces than rival great powers would benefit from such a not-so-new world disorder.”
Many Western strategic planners, forced to make contingency plans for a massive launch of thermonuclear weapons, wished for the end of the Cold War and yearned for a world without superpower confrontation. They got their wish sooner than any think tank could have imagined, and now they look back wistfully at a world with an identifiable, rational enemy: the Soviet politburo had no wish to immolate the country by launching the first strike. Today’s young European leftists may get their wish of a severely hobbled and introverted America even before they need glasses to read their copies of The New Statesman, Il Manifesto, or Junge Welt. But how much will they then enjoy the ambience of Eurabia with edicts defining permissible readings, clothes, and investments?
Much has been made of unfavorable sentiments toward the United States shown by public opinion surveys around the world. But do these sentiments indicate a U.S. retreat would be widely welcomed? The numbers, insofar as published surveys go, are real. Even in Australia the share of respondents who worried about radical Islam matched the share of people who perceived U.S. foreign policies as a potential threat. Commentators ascribe this trend to such mythical causes as an evolving national character (made up of values that “look increasingly ugly to many foreigners”) that is a liability to foreign relations (Starobin 2006), or to such predictably partisan reasons as anti-Bush sentiments that have mutated into broader anti-Americanism (Kurlantzick 2005).
Realities are not that simple. As Applebaum (2005) points out, the world is not uniformly awash in anti-American sentiment; the United States continues to exert a powerful inspirational and aspirational pull for hundreds of millions of people. A great deal of anti-Americanism has always mingled outrage and envy (and among European intellectuals, disdain) with grudging admiration and carefully suppressed affection, a stance summed up by “Yankees go home—but take me with you!” As Mandelbaum (2006, 50) noted, the world complains about U.S. recklessness, arrogance, and insensitivity, but we should “not expect them to do anything about it. The world’s guilty secret is that it enjoys the security and stability the United States provides.”
In addition to this complex assortment of national factors, we must consider globalization, a supranational trend that makes a single nation’s claiming an undisputed place on top even less likely. Globalization has profound personal implications for each of us because it affects our place along the ever-shifting continuum of individual and familial well-being. Perhaps its most personally relevant impact is the growing extent of income and social inequality.
Globalization was discovered by the media during the 1990s, and it has become one of the most prominent, loaded terms of the new century. The process is condemned by activists and enthusiastically greeted by free-marketers. What is new about it is its intensity and pace, but globalization has been under way since the beginning of the early modern era. Initially it was restricted to certain segments of the economy, and its benefits were enjoyed only by the richer strata in a small number of countries.
On a personal level the process manifested itself as an increasing accumulation of possessions made in different parts of the world. By the sixteenth century the homes of wealthy Dutch and English merchants commonly displayed paintings, maps, globes, rugs, tea services, musical instruments, and upholstered furniture of wide provenance (Mukherji 1983). By the eighteenth century such private collections had items from China, Japan, India, the Muslim countries, and the Americas. During the nineteenth century the single most important driver of large-scale commercial globalization was the accelerated expansion and maturation of the British Empire (Cain and Hopkins 2002).
At the end of the nineteenth century food grains (from the United States and Canada) and frozen meat (from the United States, Australia, and Argentina) joined textiles as major consumer items of new intercontinental trade, which was made possible by inexpensive steam-driven shipping. By the middle of the twentieth century the highly uneven distribution of crude oil resources elevated tanker shipments to the most global of all commercial exchanges. But only after 1950 did the trend embrace all economic activities. Inexpensive manufacturing was the first productive segment affected by it, beginning with the stitching of shirts and the stuffing of toy animals; later came the assembly of intricate electronic devices.
During the 1990s specialization and product concentration reached unprecedented levels as increasing shares of particular items originated in highly mechanized or fully automated facilities owned worldwide by just a few companies or located in just a few countries in East Asia. At the global level, two companies (Airbus, Boeing) now make all large jetliners; two others (Bombardier, Embraer) make all large commuter jets; and three companies (GE, Pratt and Whitney, Rolls-Royce) supply all their engines. Four chip makers (Intel, Advanced Micro Devices, NEC, Motorola) make about 95% of all microprocessors; three companies (Bridgestone, Goodyear, Michelin) sell 60% of all tires; two producers (Owens-Illinois, Saint Gobain) press two-thirds of the world’s glass bottles; and four carmakers (GM, Ford, Toyota-Daihatsu, DaimlerChrysler) assemble nearly 50% of the world’s automobiles.
This concentration trend did not bypass highly specialized industries or services. Japan’s Jamco makes virtually all lavatories and custom-built galleys and inserts in commercial jetliners; and no matter where a burning oil or gas well is located, one will call International Well Control of Houston or Safety Boss of Calgary. These examples make it clear that in the concentration of many productive sectors globalization has already gone about as far as possible. For many services a similar level of saturation is likely before 2020, and the process, although it may experience temporary setbacks, appears to be generally unstoppable. Its critics use this increasing concentration of economic activities as a worrisome example of dangerous usurpation of power by a handful of corporate entities, entirely missing the fact that the most important long-term consequence of this trend is its impact on national aspirations for strategic dominance.
Perhaps the most helpful way to think about globalization would be to get rid of the term, and not just because that noun has become so emotionally charged. The term interdependence describes much more accurately the realities of modern economies. Once they left behind the limited autarkies of the preindustrial era, states have come to rely on more distant and more diverse sources of energy, raw materials, food, and manufactured products and on increasingly universal systems of communication and information processing. No country can now escape this imperative, and as this process advances, it will become impossible for any nation—no matter how technically adept or how militarily strong—to claim a commanding place on top.
Analogies with ecosystems are always useful when thinking about economic organization and modern states. The most complex ecosystems abound with many fierce forms of specialization, competition, and aggression, but they are fundamentally founded on enormous webs of symbiosis and cooperation. They have top (carnivorous) predators, many omnivores, and a very large number of herbivores, but there is no dominant species able to claim an excessively large share of resources, and all macroorganisms depend critically on environmental services provided by microorganisms, mostly bacteria and fungi. An outstanding example of this is the fact that African termites may consume annually as much biomass per unit of savanna as do elephants (Smil 2002).
Today’s global economy still has its dominant top carnivore. In 2005 the United States, with less than 5% of the world’s population, accounted for 20% of the world’s economic output and claimed about 23% of the world’s commercial primary energy. But its influence has been declining, and its relative importance will further diminish by 2050. As its complexities and interdependencies increase, the modern world thus begins to resemble a coral reef rather than a tundra, and there is actually no alternative to this shift, short of the system’s collapse and a return to premodern existence with all that it implies for the quality of life.
There is enough evidence to conclude that in natural ecosystems greater complexity promotes system stability. Analogously, greater interdependence of national economies is (in the long run) a stabilizing factor. But true stability also requires that the benefits of globalization be reasonably well distributed, both internationally and within individual countries. Globalization has been good for complexity and interdependence, but has it been good for most of the people whose lives it affects? This is, of course, a loaded question because the quality of life is notoriously difficult to measure, and if it were possible to demonstrate clear, defensible, measurable benefits of the process, then all ideologically driven debates could become irrelevant.
A telling and intuitively impressive way to gauge this performance (and one that also happens to be favored by economists, so they cannot complain about a biased selection of “soft” variables) is to ask: Has globalization, after a few centuries of slow progress and two generations of rapid advances, helped to narrow the huge income gaps that exist on a global scale? Has it had tangible global rewards? Fortunately, there is a revealing way to compare national macroeconomic achievements and thus to measure the extent of economic inequality: its decline would demonstrate greater benefits of globalization; its increase would indicate more limited benefits.
The simplest choice is to use national averages of GDP per capita (in terms of PPP) derived from standard national accounts. A better choice is to weigh the national averages by population totals. And the most revealing choice is to use average disposable incomes (from household surveys) and their distribution within a country in order to assign a level of income to every person in the world. The best available studies (fig. 3.25) show that unadjusted global inequality changed little between 1950 and 1975 but subsequently increased (Milanovic 2002). Population-weighted calculations show a significant convergence of incomes across countries since the late 1960s, but a closer look reveals that this desirable shift was mostly due to China’s post-1980 gains. A weighted analysis for the world without China shows little change between 1950 and 2000. Similarly, comparisons of post-1970 studies that use household incomes and within-country distributions indicate only minor improvement, slight deterioration, or basically no change (Sala-i-Martin 2002).
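The difference between the first two choices can be made concrete with a small numerical sketch. The code below (all figures are hypothetical, chosen only for illustration) computes a Gini coefficient over a handful of countries twice: once treating every country as a single observation, and once weighting each country by its population, as the convergence studies cited here do.

```python
# Illustrative sketch: unweighted vs. population-weighted cross-country
# inequality. All GDP and population figures below are hypothetical.

def gini(values, weights=None):
    """Gini coefficient: mean absolute difference between all pairs,
    normalized by twice the mean (0 = perfect equality)."""
    if weights is None:
        weights = [1.0] * len(values)
    total_w = sum(weights)
    mean = sum(v * w for v, w in zip(values, weights)) / total_w
    diff_sum = sum(
        wi * wj * abs(vi - vj)
        for vi, wi in zip(values, weights)
        for vj, wj in zip(values, weights)
    )
    return diff_sum / (2 * mean * total_w * total_w)

# Hypothetical GDP per capita (PPP $) and populations (millions):
# two rich, less populous countries and two poor, very populous ones.
gdp_pc = [40000, 30000, 6000, 2000]
pop    = [300,   400,   1300, 1100]

unweighted = gini(gdp_pc)       # each country counts equally
weighted   = gini(gdp_pc, pop)  # each country counts by its people
```

Because the poorest countries in this toy sample are also the most populous, the weighted coefficient comes out higher than the unweighted one; with China’s post-1980 gains the weighting works in the opposite direction, which is why the weighted series in fig. 3.25 shows convergence that largely disappears once China is excluded.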
Since 1945 inequality of per capita GDPs has been declining among the Western nations, and North America and Western Europe, the principal architects and beneficiaries of globalization, have been pulling ahead of the rest of the world. The U.S. GDP per capita was 3.5 times the global mean in 1913, 4.5 times in 1950, and nearly 5 times in 2000 (Maddison 2001). Even China’s spectacular post-1980 growth did not narrow the income gap with the Western world very much (but China pulled ahead of most of the low-income countries). The obverse of this trend has been the growing number of downwardly mobile countries. In 1960 there were 25 countries whose GDP per capita was less than one-third of the poorest Western nation; in 2000 there were nearly 80. Africa’s post-1960 economic (and social) decline accounted for most of this negative shift (by the late 1990s more than 80% of that continent’s countries were in the poorest category), but (China’s and India’s progress aside) the Asian situation also deteriorated. Income-based calculations confirm this trend.
Perhaps the most noticeable consequence of these inequalities is what Milanovic (2002) called the emptiness in the middle, the emergence of a world without a middle class. By 1998 fewer than 4% of world population lived in countries with PPP GDPs between $8,000 and $20,000, and 80% lived below the $8,000 threshold. Everything we know about social stability tells us that this is a most undesirable situation. Unfortunately, national statistics reveal that this trend is at work in all four of the world’s largest economies. Income inequality has been on the rise not only in the United States, where riches have always been rather unequally distributed, but also in Japan, for decades a paragon of well-distributed riches; in China, where globalization of the country’s economy lifted all boats but at the same time made them more unequal than anywhere else in East Asia; and in Russia.
The U.S. trend between 1966 and 2001 was analyzed in depth by Dew-Becker and Gordon (2005). During that period the median income rose by 11%, but the rise for people in the 90th percentile was 58%; for those in the 99th percentile, 121%; and for those in the 99.99th percentile (about 13,000 individuals), 617%. This trend severed the previously well-documented development whereby rising productivity helped to lift nearly all incomes. During the last three decades of the twentieth century, only 10% of the U.S. labor force had income gains that matched the growth of the country’s productivity, whereas between 1997 and 2001 the top 1% of earners had a higher income gain than the bottom half. There is little doubt that this redistribution of income upward has many undesirable, even poisonous (Lardner and Smith 2005) consequences. The fraying of the middle class, the bastion of U.S. stability and future hopes, has been a very worrisome trend.
Soviet Russia never had a strong middle class but, contrary to the official goal of creating a classless society, had substantial disparities between the ordinary populace and the ruling elite, including top Communist party members, secret police, and military-industrial bureaucrats. Given the fundamentally criminal nature of post-Soviet privatization and an unprecedented concentration of fabulous riches by assorted “oligarchs,” it is no surprise that Russia’s income inequality rose substantially during the 1990s, with the highest quintile claiming about 47% of the national income by 2000 and only a negligibly lower share by 2005 (Goldman 2006).
Japan’s income inequality has been on the rise since the early 1980s. According to the comprehensive survey of living conditions, done regularly by the Japanese Ministry of Health, the country’s Gini coefficient (perfect equality of income 0, perfect inequality 1), which used to be on a par with that of the Nordic countries (around 0.25), rose to 0.37, well above the German or French level (Ota 2005). The rise has been most pronounced among young workers because a rising share of young adults do not have regular employment (even Toyota, the world’s most profitable company, has increased its hiring of short-term contract workers). The actual Gini coefficient among these young cohorts is even higher than the official rate because the available wage data capture only a small part of nonregular earnings.
Before Deng Xiaoping’s reform began to dismantle the egalitarian edifice of Maoism, China did not have a broad middle class. Although its rising economic tide brought impressive reductions of extreme poverty and improved average incomes in all provinces, the country (in many ways the greatest beneficiary of globalization) has seen perhaps the fastest increase of economic inequality in history. The Gini index of rural income inequality rose from 0.21 in 1978 (still basically a Maoist system) to 0.416 by 1995 and then declined slightly to 0.375 by 2002 (SSB 2002; Khan and Riskin 2005). The 2002 coefficient is from the latest available nationwide household survey by China’s Academy of Social Sciences, which showed slight declines in income inequality in both rural and urban areas, with the urban Gini index falling marginally from 0.332 in 1995 to 0.318 in 2002.
But the overall nationwide Gini coefficient remained unchanged, at 0.45, because of a further increase in the income gap between villages and cities. The urban/rural income ratio rose by more than 20%, from 2.47 in 1995 to 3.01 in 2002 (Khan and Riskin 2005), and it increased further to 3.22 by 2005 (NBS 2006). Rural incomes at only one-third of urban earnings represent an extremely wide gap in international comparisons (ICFTU 2005). China’s nationwide Gini coefficient is also considerably above that of other major economies in Asia: recent values are about 0.32 in South Korea and India and about 0.34 in Indonesia. Moreover, China’s real urban inequality is definitely much higher because the index excludes the massive transient urban labor force of rural migrants as well as all foreign employees, and because it also underestimates the highest incomes of China’s newly rich. When these realities are taken into account, China’s urban income inequality may be among the highest in the world.
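The downward bias from excluded groups is easy to demonstrate. This sketch (all incomes are hypothetical) computes the Gini coefficient of a small sample with the standard sorted-rank formula, then recomputes it after dropping the richest household, mimicking a survey that misses migrant workers or the highest incomes.

```python
# Hypothetical sketch: how excluding top earners from a survey
# lowers the measured Gini coefficient.

def gini(incomes):
    """Gini via the sorted-rank formula; 0 = perfect equality,
    values approach 1 as one earner holds nearly everything."""
    x = sorted(incomes)
    n = len(x)
    rank_sum = sum((i + 1) * xi for i, xi in enumerate(x))
    return 2 * rank_sum / (n * sum(x)) - (n + 1) / n

equal     = [100, 100, 100, 100, 100]   # perfectly shared income
skewed    = [20, 40, 60, 120, 760]      # sample including a rich household
truncated = [20, 40, 60, 120]           # same survey, richest excluded

g_full = gini(skewed)        # higher: the rich household is counted
g_cut  = gini(truncated)     # lower: measured inequality shrinks
```

Dropping the top household cuts the measured coefficient nearly in half in this toy sample, which is the direction of the bias at issue: a survey frame that omits migrants and the newly rich will understate urban inequality.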
Manifestations of this new reality are ubiquitous. On one hand, China’s nouveaux riches are pushing the sales of luxury products up by 50%-60% per year, so China’s share of that market will rise from less than 2% in 2005 to 10% by 2010 (China Daily 2005). On the other hand, tens of millions of displaced peasants are exploited, often in demeaning conditions, to produce goods for the global supermarket. While China’s bureaucrats, managers, and middlemen are buying Prada perfumes and Bulgari brooches and installing Italian marble floors in their oversize suburban villas, the urban poor, who now can see everything but can afford little, are left with disappointment and anger (Davis 2005). Excessive banqueting leads to enormous waste of food, but China still has tens of millions of malnourished people.
Perhaps no other indicator of China’s growing inequality is as shocking as the disparity in medical care coverage. In 2003, 38.5% of people surveyed in large cities had no medical care coverage, but that share was 88.6% in those rural areas that are officially classified as places where the necessities of daily life have reached an adequate level (Zhao 2006). Consequently, it is hardly surprising that a ranking of the fairness of financial contributions to health systems places China 188th of 191 countries, only marginally above Brazil, Myanmar, and Sierra Leone (WHO 2000). Surely these are not the solid foundations of a new superpower.
Intensifying globalization has had equivocal effects on the overall stability of the global economic system. It is very difficult to decide if the positive (system-stabilizing) effects of increasing interdependence matter more than the negative (system-destabilizing) consequences of unchanged international inequality and increases in intranational inequality. But these two contradictory trends have similar long-term effects on the global distribution of power. The first weakens national dominance because it weaves individual economies tighter into the global web of interdependencies; the second weakens national dominance because it makes the greatest beneficiaries of globalization (United States, China, the EU) less stable in the long run.
But all of this jostling for a place on top may matter much less during the next 50 years than it did during the past half century. All human affairs unfold on the irreplaceable stage of the Earth’s biosphere, whose considerable resilience, elaborate integrity, and amazing complexity are being seriously endangered by human actions. Many consequences of this relentless impact are amenable to inexpensive social and economic fixes, whereas others have technical solutions but require considerable investment. But for the first time since the emergence of our species, some of these changes result in undesirable transformations on a global scale, and their pace and intensity may determine humankind’s fortunes much more decisively than any economic policies or strategic shifts.