The message of this book is that some of the most salient features of history can be explained by (1) differentials in the evolutionary heritage of various peoples, (2) climatic and geographic factors related to the spread of pathogens, and (3) the prevailing institutional background that frames the consequences of deliberate human choices. In this vein, an understanding of the American historical experience without a thorough examination of these elements is seriously flawed. A prominent example is the American Civil War; taking these factors and placing them in historical context, we can say that, in a sense, the American Civil War was attributable to parasitic infections. (Implicit in this argument is that the issue of slavery was a major root of the American Civil War.) Without the differential reactions of peoples of northwestern European and tropical West African ancestries to hookworm, malaria, and other warm-weather pathogens, slavery would have been neither as profitable nor as pervasive. Critics may ask: Do we really believe that nematodes a fraction of a centimeter long were instrumental in the bloodiest war ever fought on American soil? The answer is an emphatic yes. Therein lies the problem of our message; it is extraordinarily difficult for people to think beyond the dimensions of the everyday. Synonyms for small are trivial, inconsequential, and insignificant; this clouds thinking.
Humans are captive in time and space; our imaginations and thoughts are constrained by the dimensions in which we live our lives. Small things—things that can be captured or crushed by a child or that are invisible to unaided human vision—are naturally thought of as inconsequential. “Naturally” here is meant literally; it is in our nature to think this way, and only constant intellectual vigilance can prevent us from equating small with inconsequential. Constant intellectual vigilance is costly and often relaxed because of the effort required to maintain it; the result is that world-renowned medical centers in the twenty-first century suffer significant rates of easily avoidable nosocomial (originating in hospitals) infections close to a century and a half after Pasteur’s research and the germ theory. Human senses do not naturally regard the unseen as threats. Human senses evolved to protect early humans (and their hominid ancestors) from the very real dangers they faced in the ancestral environment. Dangers from large predators, poisonous animals, and other humans were immediate; because the ancestral environment was not densely populated by humans, the pathogens that prey on humanity today were either rare or nonexistent. Evolutionary processes take literally a thousand years or so (about 50 generations) to begin to affect instinctive behaviors; as a result, we are not naturally fearful of “new” things (in existence for less than a millennium or so) that pose significant dangers to us in the world in which we live.1 This creates paradoxes that are frequently commented upon: in modern high-income nations, death or injury from automobile accidents is literally thousands of times more likely than harm from poisonous snakes, yet we jaywalk and fear snakes.
For hundreds of thousands of years the evolving hominid species lived in small groups. These populations were too small and too scattered throughout the earth to provide reservoirs for acute (as distinct from chronic) diseases.2 Early humans did not keep animal herds; consequently, they rarely acquired zoonoses. Skeletal remains of evolving humanity are consistent with this story (indeed, in some cases they are its progenitors). Skeletal remains of stone-age peoples suggest that the most recent ancestors of evolving humanity were, relative to primitive agriculturalists, disease free, well fed, and violent.
When history is written, dramatic human actions, foolish or heroic, are the focus of attention; microbes and the minute are typically ignored or given tertiary (or lower) consideration. Before the current age (pre–World War I), military deaths from diseases vastly outnumbered deaths directly caused by combatants; still it is a rare text that emphasizes the impact of diseases in war. Nearly two hundred years ago the Battle of Borodino was “won” by the Napoleonic forces; battlefield deaths at Borodino were large, yet they were dwarfed by the carnage caused by typhus in combination with its vector, the louse. This lesson has been forgotten, if indeed it was ever learned. The popular culture of the twenty-first century regards the louse and other vermin not as fearsome but as comic; they are frequently topics of humor, and cartoon films anthropomorphize vermin as furry, funny, lovable objects of entertainment, brave defenders of their liberties against evil villains (typically Homo sapiens). Humans do not fear them because innate fearful reactions to them are not embedded in our nature as are the fears of snakes and scorpions. Typhus, plague, AIDS, sexually transmitted diseases, and the other illnesses of civilization were unknown (or extraordinarily rare) in the environment in which ancestral humans evolved; furthermore, these diseases escape detection by unaided human senses. Because we are unable to identify them with our senses, evolution has not provided us with an instinctive fear of microbial infection, unlike the instinctive aversions that many have toward snakes and scorpions.
Humanity’s concept of time is similarly skewed by our senses and experiences. The Darwinian theory of natural selection faced a great deal of resistance because the time scales inherent in evolutionary selection are so vast compared to our intuitive grasp of time. The human sense of time is limited by what we know and grasp intuitively; our sense of time is bounded in the short run by seconds and in the long run by decades. The processes of evolutionary selection are effected over many generations; almost by definition they are invisible to any specific generation.
An analogous historical example illustrating the point that glacial processes tend to escape human commentary is economic growth. In the British Colonies of North America that became the United States, incomes per capita are estimated to have risen at between 0.3 and 0.5 percent per year during the years 1650 to 1776. At a rate of 0.3 percent per year, incomes double about every 230 years. During the same period England was probably experiencing a slightly slower rate of growth; economic changes were not remarked upon during the eighteenth century. Consequently, it is no wonder that when Samuel Johnson, one of the great intellects of the ages, argued that the reign of Caesar Augustus in ancient Rome was humanity’s golden age, no one laughed. He may actually have thought that the arts, manners, and society had reached their apogee in Augustan Rome. Like almost all of his age, Johnson had no conception of the economic changes and growth that were occurring in his lifetime because they were too small to be noticed by contemporaneous observers. A growth rate of 0.3 percent per year continuing over millennia leads to enormous wealth and economic change, yet to the human population alive at any given time the world appears unchanging and static.
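The doubling-time arithmetic here is easy to check. The short Python sketch below (illustrative only; the 0.3 percent figure is the colonial estimate quoted above) computes the doubling time implied by a compound growth rate and the cumulative growth over a single 70-year lifetime, showing why contemporaries would notice nothing.

```python
import math

def doubling_time(rate):
    """Years for a quantity growing at `rate` per year (e.g., 0.003) to double."""
    return math.log(2) / math.log(1 + rate)

def growth_factor(rate, years):
    """Cumulative multiple after `years` of compound growth at `rate` per year."""
    return (1 + rate) ** years

# At 0.3 percent per year, incomes double roughly every 231 years,
# yet over a 70-year lifetime they rise only about 23 percent --
# a fraction of a percent per year, invisible to any single observer.
```

At 0.3 percent per year an income doubles in about 231 years but grows only about 23 percent over a 70-year life, which is why Johnson and his contemporaries perceived a static world.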
Samuel Johnson was not alone in his belief that the world was unchanging (or even deteriorating); Adam Smith, David Hume, Voltaire, and the luminaries of the Enlightenment were all pre-modern in their conception of economic progress. Indeed, the very conception of economic progress had to await the arrival of growth rates that had noticeable effects during the life span of an ordinary human. The intellect of the sentimentalist and writer Charles Dickens will never be confused with those of Johnson, Smith, Hume, and Voltaire; still, Dickens, writing in the mid-nineteenth century, would never have identified Augustan Rome with the apogee of human achievement. Whatever his intellectual failings, Dickens could not help but be smitten by the economic changes that enveloped his time. Adam Smith, an extraordinarily perceptive observer of the human economy, was unaware of the economic changes that were taking place in his age. Smith published his great work, The Wealth of Nations, in the monumental year of 1776; Dickens started publishing in the 1830s. Consequently, we can narrow the years that gave birth to modernity; sometime in the 60 years between Smith’s Wealth of Nations and Dickens’s first writings, the modern era and the concept of material progress were born.3
People are generally unaware of processes that take more than two human generations to have noticeable impacts. Background conditions also matter. Suppose that a society has a demographic regime of high birth and death rates. If the mean birth rate is 35 per thousand and is accompanied by a death rate of 33 per thousand, the population has a mean growth rate of 0.2 percent per year. If the birth rate is constant and the death rate falls to 29 per thousand, the change has a material chance of going unremarked and unnoticed by contemporaneous observers (yet the growth rate is now 0.6 percent per year). But over 500 years the effects of this change will be immense. At a growth rate of 0.2 percent, population doubles every 345 years or so; after 500 years the population would have grown to approximately 2.72 times its original size. At a rate of 0.6 percent per year, population doubles approximately every 115 years; after half a millennium the population would have grown to nearly 20 times its original size. The arithmetic is basic and unchallenging; the hard part is thinking in terms of centuries or millennia.
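The vital-rate arithmetic of this paragraph can be verified directly; a minimal sketch (crude birth and death rates per thousand, compounded annually):

```python
def growth_rate(births_per_1000, deaths_per_1000):
    """Crude annual growth rate implied by crude birth and death rates."""
    return (births_per_1000 - deaths_per_1000) / 1000.0

def multiple_after(rate, years):
    """Population multiple after `years` of compound growth at annual `rate`."""
    return (1 + rate) ** years

r_slow = growth_rate(35, 33)   # 0.2 percent per year
r_fast = growth_rate(35, 29)   # 0.6 percent per year
# After 500 years the slow regime multiplies the population roughly
# 2.7-fold; the fast regime multiplies it nearly 20-fold.
```

The same two-per-thousand and six-per-thousand gaps, invisible year to year, diverge by a factor of seven after half a millennium.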
Yet these processes arguably have had the most profound impact on observational reality and history. In this book, we have shown how different biological environments affected evolution. People who have innate resistance to pathogens that predominate in the place where they reside will tend to live longer and have more surviving offspring than those who have no innate resistance. An innate advantage does not have to be large because over millennia evolutionary selection will magnify by orders of magnitude both resistance and the proportion of the population that possesses the trait.4 The time necessary to achieve these results is not easily comprehensible to humans. “Living memory” is, by definition, bounded by human life expectancy. What we have not witnessed fades like shadows at twilight; one wonders, were they real or imagined? They were very real, and were instrumental in shaping American history.
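How a small innate advantage compounds can be sketched with a textbook haploid selection model; the selection coefficient and starting frequency below are invented for illustration, not estimates for any actual trait.

```python
def trait_frequency(p0, s, generations):
    """Frequency of a trait whose carriers leave (1 + s) offspring for every
    one left by non-carriers (simple haploid selection model)."""
    p = p0
    for _ in range(generations):
        p = p * (1 + s) / (1 + p * s)  # divide by mean fitness, 1 + p*s
    return p

# A 1 percent survival advantage is imperceptible over a few generations,
# yet over roughly a thousand generations it carries the trait from
# 1 percent of the population to more than 99 percent.
```

With s = 0.01 and an initial frequency of 1 percent, the trait is still below 3 percent of the population after 100 generations but above 99 percent after 1,000; the magnification is real yet invisible within any living memory.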
The conquest of the New World by Old World peoples was almost entirely due to biology and evolutionary selection. The introduction of Old World pathogens into the relatively disease-free New World environment caused the aboriginal American population to collapse. Epidemiologists and biologists estimate that it takes five generations for the population of a species to stabilize in the face of an epidemic disease. A human generation prior to the twentieth century was about 20 years; consequently, five human generations would be about 100 years. Yet in 1620, more than 100 years after Contact, the aboriginal American population had not stabilized; it was still in deep decline. This was because it was not just one disease that afflicted the native inhabitants of the New World but a host of them, some occurring simultaneously, others sequentially. Among them were influenza, smallpox, dysentery, malaria, yellow fever, dengue fever, typhus, tuberculosis, and cholera (the latter two becoming rampant in the nineteenth century). These diseases literally changed the complexion of the New World’s peoples.
The Old World peoples were relatively immune to the effects that Old World pathogens had on New World peoples. But not all Old World peoples had the same innate responses to the diseases that became established in the New World post-Contact. In the tropics of the Americas, people whose ancestry made them relatively resistant to warm-weather diseases lived longer and produced more surviving offspring than people whose heritage did not include an immune system shaped by millennia of exposure to warm-weather diseases. The people who had an inherited resistance to warm-weather diseases came from warm-weather areas of the Old World; in the historical context they were predominantly from tropical West Africa. Repeating a point we made earlier, this is not simply because Africans came from areas where the climate was hot; the native inhabitants of equatorial America were, after all, from a similar climate. Tropical West Africans were relatively resistant to the diseases that came to predominate in the warm-weather areas of the New World because the pathogens, too, were from tropical West Africa. The innate resistance their evolutionary heritage provided gave Africans a survival advantage over other ethnic groups in tropical America, and this is why African slavery survived there.
Conversely, in the areas of the New World that were subject to frequent killing frosts, tropical pathogens had difficulty establishing themselves. Cold weather and killing frosts were not a deterrent to influenza, pleurisy, tuberculosis, and other lung infections—hence the term cold-weather diseases. People from northwestern Europe had an ancestral heritage of generations of exposure to cold-weather diseases. Again, they were relatively resistant to the onslaught of these pathogens, and they and their offspring had a survival advantage relative to the other ethnic groups.
In the context of the historical experience, the disease environment in the tropics was more virulent for a number of reasons. The first is that most cold-weather pathogens can survive in warm-weather areas. Cold-weather diseases can frequently persist for years in the human populations of the tropics. Indeed, most cold-weather diseases, even those with high case mortalities, do not kill quickly because their reservoirs tend to be in human populations.5 Outside a host they cannot survive killing frosts (nor can most pathogens); consequently, if they are quickly fatal, the pathogens will not survive for long. The reason that cold-weather pathogens do not thrive in the tropics is that their victims, because of their weakened state, frequently succumb to an opportunistic warm-weather infection before the cold-weather pathogen has had a chance to spread. Over the years, unless continually reintroduced, the cold-weather pathogens will diminish and perhaps vanish. As a consequence, the tropics of the New World were subject to endemic warm-weather diseases and periodic episodes of cold-weather diseases. In the nontropical areas of the New World, occasional outbreaks of warm-weather diseases did occur, but they were rare and irregular.
Second, the major export staple of the tropical New World was sugar. The production of sugar was subject to economies of scale; large plantations employed hundreds of workers (slaves of tropical West African ancestry) who were packed together in the slave quarters. The slave quarters facilitated the transmission of pathogens among slave and non-slave populations. Indeed, in the tropics of the New World the mortality rates of people of European ancestry were higher than those of African slaves. This was due to the differential effects of the warm-weather pathogens and the efficiency with which they propagated in the slave quarters. From having a reservoir in the slave quarters to spreading to the non-slave population was an easy step. This had deleterious effects on the non-slave population, but what we are emphasizing here is that it had very detrimental effects on the slave population as well. Compared to slaves in the South of the United States, the slaves of tropical America were decidedly unhealthy. This led to high death rates in the slave populations of tropical America. To replace the declining slave population, more people were brought in from tropical West Africa. Imported with them were old and new strains of virulent diseases, reinforcing the deleterious environment of tropical America.
These events were gradual; death rates in the tropics of the New World, while high, were not close to the catastrophic death rates of European troops in tropical West Africa. Where population densities were high (the slave quarters on sugar plantations) and sugar production and slavery predominated, warm-weather pathogens both deadly and chronic became endemic.
The parasitic diseases that had reservoirs in the human population, hookworm and malaria among the most prominent, did not require tropical conditions to survive. Human hosts harbored the parasites during the seasons when the weather was inimical to their spread. Longer warm-weather seasons were conducive to their spread but not necessary. Malaria spread as far as human hosts and mosquitoes went, and that was as far as the Arctic Circle in Siberia. Hookworm had a harder time in cold climates because, even in the summer, most humans wore shoes in the latitudes where cold weather and killing frosts occurred. This was prophylactic against the spread of hookworm, but not entirely effective. In deep mines and tunnels where outside temperatures were low, hookworm could and did thrive in the mild temperatures that prevailed in deep shafts year round. Thus hookworm earned its sobriquet, the “miner’s disease,” in nineteenth- and early twentieth-century Europe.
It is instructive to note that hookworm was not discovered to be endemic in Europe until it was found to be widespread among the laborers constructing the Gotthard Rail Tunnel in the Alps in the second half of the nineteenth century.6 The Gotthard Pass is a frigid area much of the year, and in fact ski resorts are nearby; still hookworm thrived in the depths of the tunnel. When did hookworm first appear in Europe? We do not know, but, as we indicated in chapter 6, parasitic diseases do not become established and highly endemic overnight. Angelo Dubini identified hookworm disease in an autopsy of an Italian woman conducted in 1838, and the evidence indicates that hookworm was prevalent in many areas before it was identified as endemic. Like evolutionary processes, hookworm weakens people so gradually that it escapes symptomatic recognition; it may have been common long before Dubini’s discovery.
Similarly, the accommodation of humans to disease, or the gradual eradication of diseases, goes unrecognized for the less dramatic persistent parasitic infections. For example, malaria had almost disappeared from northwestern Europe before it was discovered to be a parasitic disease. Even after its protozoan parasites were identified in 1880, there still was no effective campaign against it; malaria had receded into a medical curiosity in northwest Europe even before Ronald Ross identified its mosquito vector in 1897. Changing living standards and changing land uses gradually reduced and finally eliminated malaria from northwest Europe; the decline was not noticed at the time. Likewise, gradual changes in the environment and the disease ecology are rarely noticed contemporaneously if unaided by conscious human actions. If they are ever noticed, they are noticed retrospectively.
Herein lie the difficulties of our story. The history and changes we have attempted to delineate occurred so gradually that they were invisible to those experiencing them. Prominent in our story are the relative economic merits of slaves vis-à-vis indentured servants. In the American South, the relative profitability of slavery over indentured servitude did not manifest itself immediately. Indeed, as shown in chapter 4, if slaves and servants had the same productivity, then throughout much of the slave-importing era of American history the annual costs of using indentured servants were lower than those of slaves. Since we know that slaves were bought, we can deduce that slaves had a greater productivity in the regions in which they were habitually preferred to servants. But how did the potential slave/indentured servant buyer of the seventeenth and early eighteenth centuries learn this? The answer lies in trial and error. In environments where life expectancy and annual slave productivity were large enough to amortize the purchase price, the buyers of slaves profited, and slavery and slave imports expanded. In environments where life expectancy and annual slave productivity were not large enough to amortize the purchase price, slaveowners made losses, and slavery and slave imports diminished.
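The amortization logic can be made concrete with standard annuity arithmetic. The sketch below is illustrative only: the purchase price, interest rate, and working lives are hypothetical numbers, not historical figures. It shows why the same purchase price amortizes to a much higher annual cost where life expectancy is short.

```python
def annualized_cost(price, interest_rate, working_years):
    """Equal annual payment that amortizes `price` over `working_years`
    at `interest_rate` -- the implicit annual cost of a purchased laborer."""
    r = interest_rate
    return price * r / (1 - (1 + r) ** -working_years)

# Hypothetical numbers: purchase price of 30, interest at 8 percent.
long_life = annualized_cost(30, 0.08, 10)   # healthier disease environment
short_life = annualized_cost(30, 0.08, 5)   # deadlier disease environment
# A buyer implicitly compared these annual costs with the annual cost of
# an indentured servant; slavery expanded where the comparison favored it.
```

Halving the expected working life raises the implicit annual cost by roughly two-thirds in this example, which is the trial-and-error signal the text describes: profits where lives were long enough to amortize the price, losses where they were not.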
These processes were so gradual that contemporary observers did not feel them worthy of much comment. They also tend to be invisible to observers distant in both time and space. Centuries after the fact, writers ascribe New England’s and the North’s antipathy and aversion to slavery to tastes and values. But are tastes and values endogenous to economic success, or are they completely divorced from it? People who do not wish to use slaves, for whatever reason, will be successful relative to slaveowners in regions where slaves have low or negative profitability. Where slavery was profitable, the converse is true. Successful people generally ascribe their success to their superior abilities, morals, and personal characteristics.
Prior to the American Civil War, generations of southerners grew up thinking that success and slavery were obviously correlated. The word “obvious” is deliberate; anyone who could see, saw that. Similarly, conventional wisdom associated success with free labor in the non-South. Chronologically, we know that African slavery in both the North and South preceded the divergence in regional tastes; consequently, the more logical inference is that the financial success or failure of slavery was substantive in the formation of regional tastes. There is no evidence of tastes being instrumental in the financial viability of slavery.
In American history, the reason African slavery was geographically isolated is that only in the South was it sufficiently profitable to thrive and expand. The reason it was more profitable in the South than in the non-South was that the disease environment that eventually predominated in the South differentially favored people of tropical West African ancestry. Because labor of African ancestry was more productive than labor of northwestern European ancestry, scholars have concluded that slavery was profitable. This confuses cause and effect; it was not slavery that made labor of African ancestry profitable, but labor of African ancestry that made slavery profitable, or more profitable than it would otherwise have been. Without resistance to warm-weather morbid (and mortal) diseases, slavery would not have been as profitable as it was. Indeed, the entire issue of slavery’s profitability may be in question; in comparing the production and efficiency of free versus slave labor in the antebellum South, how much of slavery’s advantage was due to the ancestral advantage that labor of tropical African ancestry had in warm-weather disease environments? And how much was due to the increased output that slave institutions could extract from an enslaved labor force? We know that people of tropical West African ancestry were more resistant to the diseases that infested the American South. Consequently, what proportion of slavery’s supposed economic advantage was due to the nature of its workforce in that particular disease environment is an unanswered question. It could have been relatively small (say 10 percent), relatively large (say 60 percent), or even greater than the so-called efficiency advantage of slavery (more than 100 percent).
In the latter case, the institution of slavery would have reduced the output that would have been produced by free black labor; granted this is probably an unlikely alternative, but it is a conceptual possibility once we recognize the differential impacts of the disease environment.
Changes in environments, diseases, climates, or languages tend to go unnoticed and unmentioned because they occur too slowly to be perceived. Similarly, economic changes are usually so slow as to be literally unmemorable. People in East Asia, Latin America, Europe, and North America who were over the age of 40 when this book was published (2011) have experienced (1) societal incomes that more than doubled during their lives, (2) increased life expectancies (in some areas by decades), and (3) a generally much better life than their counterparts forty years earlier. Yet what do the populace, the media, and the intellectual class offer but gloom and doom? It is not just that bad news sells, but that we are not conscious of changes that took place in our lifetimes; many of these processes are similar to aging: we wake up one morning and wonder when we became so old. We live our lives a day at a time; the passage of time escapes us. It was the same for all people in different times and places. We are trapped in time; we try to make sense of our experiences with what intellectual capital we have. Once an explanation seems to make sense, it persists whether or not it conforms to observational reality. Ideas and coherent explanations have long lives, and they tend to overshadow mundane facts and actual history.
We have explained how nineteenth-century American economic growth was facilitated by population growth. Larger markets meant increased specialization. This is a fundamental insight of Adam Smith, summarized in his dictum that the division of labor is limited by the extent of the market. Increased population led to (1) increases in productivity (specialization) and (2) increases in incomes, and (3) both of these fed back into increased population. This is the virtuous cycle of Smithian growth. Associated with an increase in human population is a concomitant increase in population density and an additional increase in the biomass of animals, organic materials, vermin, and waste products associated with human settlements. The increased human and nonhuman biomass was a resource for organisms that are inimical to human health. Some of these organisms (smallpox, plague) have high case mortalities; these diseases have made their mark on history. Persistent diseases with low case mortalities (parasitic infections, dietary deficiencies, and some bacterial and viral infections) rarely make the history books; they are like background music, easily ignored and forgotten. They may be ignored and forgotten, but their impact has left its mark both literally and figuratively.
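The virtuous cycle can be caricatured in a few lines of code. Every parameter below is invented and the model is a deliberate toy; it shows only that a feedback from market size to productivity to population growth produces growth that accelerates rather than stagnates.

```python
import math

def smithian(pop0=1.0, years=200, a=0.1, b=0.005):
    """Toy Smithian feedback: productivity rises with market size (population),
    and higher income raises the population growth rate. Parameters invented."""
    pop, path = pop0, []
    for _ in range(years):
        productivity = 1 + a * math.log(pop + 1)  # specialization grows with market size
        income = productivity                     # per-capita income proxy
        pop *= 1 + b * income                     # income feeds back into population
        path.append(pop)
    return path

# Population rises every period, and the growth rate itself rises
# as the market (population) enlarges -- the virtuous cycle.
```

In this toy, each period’s growth factor exceeds the last because a larger population supports more specialization; the Malthusian alternative, discussed next, assumes the feedback runs the other way.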
The effects of morbid diseases and the effects of increasing specialization are alike in that they are easy to overlook. Yet our explanation of economic growth in America up to the early twentieth century is consistent with science, history, and theory. We contrast this with the Malthusian theory that increasing population leads to stagnation and immiseration. The Malthusian association of increased population with increased poverty has an intuitive appeal, but on examination the rationale is flawed. The reason for human poverty and misery is not that resources grow less rapidly than the human population; it is that the population of pathogens increases disproportionately rapidly with an increase in the human population. A disproportionate increase in the number and variety of pathogens has a deleterious effect on human welfare. Increased pathogens lead to increasing rates of illness and death. Most infections have higher case mortality rates among infants and children; their smaller bodies and less developed immune systems make children much more susceptible to death.
If there is an increased death rate among the young, the only ways a population can sustain itself are through an increased birth rate and/or immigration. In other words, high death rates “cause” high birth rates.7 This is in stark contrast to the Malthusian model, where increased births lead to poverty that in turn increases deaths; we envision an increase in deaths leading to both an increase in birth rates and poverty. In our view, poverty results from increased deaths because more resources are devoted to childbearing and child rearing. Many childhood deaths imply a large dependent population that is not engaged in economic production because many of the young never become economically active adults; a relatively large number of those who do survive to adulthood will have reduced productivity because of the physical and mental defects that the disease environment imposed upon them during their formative growing years. It is death and disease that cause poverty, not an excess of people. The Malthusian model relies on a lack of resources to cull the population; the historical facts, however, show the conjoining of high birth rates and high death rates. The difficulty with the poverty explanation for death is that nutritionally deprived women do not have many live births; nutritionally deprived women have amenorrhea, stillbirths, and spontaneous abortions. Relatively healthy women have more children, but high infant death rates reduce the prevalence of breast-feeding relative to comparable societies with lower infant death rates, and breast-feeding tends to reduce the probability of conception. In sum, birth rates were high because death rates were high; in the face of a deteriorating disease environment, this is the only way a population can maintain itself. In the face of a high death rate (and in the absence of immigration), societies without a birth rate high enough to compensate vanished.
The reduced productivity of a human workforce continually exposed to new diseases was another important factor causing poverty. People are not identical; some are active and healthy, others inactive and disease ridden. The more a population consists of low-productivity, disease-ridden people, the more observed wages will reflect their condition. This implies that wage data covering centuries have to be treated with extreme caution. The assumption that the health characteristics of a population are unchanging over the centuries is founded on nothing but hope. Similarly, climatic changes do occur; they are slow to contemporaneous humans, but over centuries their effects are bound to be reflected in data concerned with agriculture.
In the Northern Hemisphere, a period of global cooling began in the mid-thirteenth century. There is no definitive timing of the Little Ice Age, but periods ranging over 450 years have been identified with it. We do know that the cooling forced the evacuation of the Norse colonies in Greenland and caused a massive decline in the Icelandic population. It defies all we know about agriculture to suggest that the Little Ice Age had no effect on labor productivity in the fifteenth, sixteenth, and seventeenth centuries. Yet if we compare agricultural wage or productivity data across these centuries, we must either assume that climate had no effect or make adjustments for it.
Historians who have studied the centuries that encompass the Little Ice Age have been seduced by its gradualness. Climates did not change overnight, nor did the disease environment. But small changes accumulated over centuries had a massive impact. In his time, Malthus associated poverty with an increasing human population. He did not see how climate and disease environments had deteriorated, nor was he aware of changing agricultural techniques or changes in the treatment of diseases. The accommodation of the human immune system to the changed disease environment eventually led to lower death rates. The evolution of climate, agriculture, diseases, and people was so gradual as to be invisible to those experiencing it, and it remains invisible in our histories.
Gregory Clark’s recent book A Farewell to Alms (2007) is the latest contribution to the Malthusian literature. It is an undoubted contribution, yet we spend some effort disputing it because it is entirely in the Malthusian tradition and, as such, antithetical to our story. A Farewell to Alms ignores the impact of climate and of persistent morbid diseases (as opposed to acute diseases) on the economic history of England from the thirteenth through the nineteenth centuries. Clark presents a clear and concise rendition of the Malthusian explanation for the history of Britain and of how Malthusian theory works within Britain’s history. His reliance on agricultural real wage data as a proxy for living standards is unfortunate because the data are corrected neither for changing climate nor for changing levels of morbidity in the agricultural workforce. The assumption that these variables do not change much from year to year (the short run) is tenable; over centuries it is not. Nevertheless, Clark’s book is intriguing because it contains the insight that evolution was at work, and because he has assembled what is probably the most comprehensive argument for applying the Malthusian doctrine to the British experience. His work has to be taken seriously; the hypothesis that increasing births lead to poverty and (increasing) mortality is clearly and carefully specified.
We have some fundamental disagreements with Clark (2007); we believe that history and the data support our explanation of the past. Our disagreements may appear innocuous; however, they rest on fundamental differences. One example is Clark’s discussion of the influence of the Suez (and Panama) Canal. Clark states that “The result of these technological changes was a significant decline in real transport costs by 1900” (Clark 2007, p. 309; emphasis added). At least for the Suez Canal, we disagree with his characterization of the change as technological. The Suez Canal could have been, and indeed had been, built again and again since the time of the Pharaohs. Canals at Suez are built at sea level; no locks are necessary, which is why the technology of canal building was easily within the grasp of Pharaonic Egypt. Canals with various connecting routes from the Red Sea to the Mediterranean Sea existed in antiquity; Suez canals were periodically constructed, but they did not survive because they did not generate economic returns above their costs. The lack of profits was due to the enormous costs of canal building and maintenance and the scanty revenues the canals generated. Revenues were small because the market was too small to support the large investment in specialized capital that canals required. The nineteenth century saw an increase in population, incomes, and the volume of trade that made the construction of the Suez Canal viable. In other words, the Suez Canal is an example of our Smithian virtuous cycle (see chapter 3); its construction by a profit-seeking enterprise shows the beneficence of the cycle associated with increasing populations and incomes.
Another seemingly innocuous statement by Clark is indicative of a major difference between his view and ours. Clark correctly outlines the nascent industrial progress that Japan and China were making through the nineteenth century, although he misdiagnoses the reasons these large Asian economies grew so much more slowly than England. Clark (2007, p. 262) maintains that “though these societies were on the path to an eventual Industrial Revolution, they were progressing more slowly than England, and they had not progressed as far as England by the late nineteenth century, when they ended their self-imposed isolation.” The question remains: why were they progressing so slowly? If we accept Clark’s thesis, it is because of their demographics: populations were increasing too rapidly. Our explanations are different; they are rooted in history and economics. The nineteenth century was disastrous for both China’s society and its economy; wars, rebellions, and foreign occupation wreaked havoc on China from early in the nineteenth century until the mid-twentieth century. This book has not focused on institutions, but when banditry, murder, and war are ever present, economies are much more likely to regress than progress.
To achieve sustained economic progress, some semblance of the rule of law has to prevail, so that trading and production can take place without fear of arbitrary confiscation. The rule of law was, at best, sporadic and haphazard throughout China in the nineteenth and twentieth centuries. China’s nonperformance in international growth comparisons can be attributed in whole or in part to its chaotic history. Japan, however, is different; it had internal stability and more than a semblance of the rule of law throughout the nineteenth and twentieth centuries. So why was Japan an economic backwater in “the late nineteenth century, when . . . [it] ended . . . [its] self-imposed isolation” (Clark 2007, p. 262)? The answer is simple: Japan was an economic backwater because of its self-imposed isolation. Autarky is a sure recipe for economic stagnation, whether a country is in Africa, the Americas, or Asia. For all practical purposes, Japan had isolated itself from the rest of the world from the mid-seventeenth century to the mid-nineteenth century. Besides being unable to obtain expertise from the rest of the world, the Japanese economy was unable to specialize in world trade; during this period the Japanese impoverished themselves by abdicating specialization and trade. This explains Japan’s economic retardation, but how can we explain the claim that the Japanese were “progressing more slowly than England” in the late nineteenth century? That too is a simple exercise: the statement is simply wrong; the Japanese economy grew very rapidly after it ended its “self-imposed isolation.” Both historical events and empirical data support this.
First, if Japan were not growing rapidly in the latter half of the nineteenth century, where did it get the resources to defeat the Chinese, and then the Russians, in two wars in the late nineteenth and early twentieth centuries? Objective reality contradicts the casual observation that Japan had low growth rates in the latter part of the nineteenth century; its economy had to have been expanding rapidly to defeat the much more populous and larger countries it battled. The cause of the accelerated growth was the end of isolation, forced on the Japanese in 1858. Second, Huber (1971) long ago estimated that Japanese income increased by approximately 65 percent because of the opening of international trade during the period from 1858 to the mid-1870s. Japan’s economy grew so rapidly in the latter part of the nineteenth century because the Japanese were finally taking advantage of the gains from trade that increasing specialization and larger markets allow. Additionally, before 1858, Japan’s knowledge of the techniques and sciences that Western countries were developing and using was extraordinarily limited. So not only did Japan gain from exchanging domestically produced goods for foreign goods, it also gained by using different production techniques and knowledge to augment its own production of goods and services. Prior to 1858, Japan had sacrificed rising incomes and economic development to political decisions that isolated the Japanese people from contact with non-Japanese. The isolation was never complete, but it was effective in suppressing Japanese growth.
Japanese history illustrates our model of long-run economic growth: increasing output leads to increased specialization, and thereby to increased productivity and income; but the advantages of trade are most effective in raising total income when there are substantial differences in factor endowments, be they differences in human capital, physical capital, or natural resources. Under a policy of autarky, a relatively small, homogeneous archipelago like Japan is not sufficiently diverse economically to gain as much from increased market size as countries like the United States or China would under similarly autarkic policies. It is not that geographically large and diverse countries would not suffer, but they would suffer less than smaller, more homogeneous economies. In the world of today, a policy of autarky would damage the economic interests of the European Union, China, and the United States far less severely than the same policy would damage the economies of Brazil, Canada, and Japan. Size and diversity do matter. In terms of contemporary events, the young people with an excess of ideology, indignation, and ignorance who continually assault trade conferences in the name of anti-globalization are actually advocating policies that would impoverish small economies. Adam Smith said it well; we say it less elegantly: trade enriches nations, it does not impoverish them.
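The gains from trade and specialization invoked here can be made concrete with a standard textbook Ricardian sketch (the numbers and notation are ours and purely illustrative; none of them are drawn from the historical record):

```latex
% Hypothetical labor hours required per unit of output:
%              cloth   grain
%   Home         4       8
%   Foreign      6       6
%
% Opportunity cost of one unit of cloth, measured in grain:
\text{Home: } \tfrac{4}{8} = \tfrac{1}{2} \text{ grain},
\qquad
\text{Foreign: } \tfrac{6}{6} = 1 \text{ grain}.
% Home has the lower opportunity cost of cloth, so Home specializes in
% cloth and Foreign in grain. At any terms of trade strictly between
% 1/2 and 1 grain per unit of cloth, each country can consume a bundle
% unattainable under autarky, so trade raises income in both. Autarky
% forgoes exactly these gains -- which is the sense in which isolation
% impoverishes, and in which small, homogeneous economies lose most.
```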
In any story there are nuances; to confine ourselves to one volume and to emphasize our message, we have admittedly neglected to qualify some statements that perhaps deserve qualification. In our defense, the importance of evolution and diseases, and of their interactions with the human economy and history, has been slighted in traditional texts of history and economic history; we have attempted to draw attention to the undeniable impact they have had in molding history.
The Malthusian demographic model has increasing human numbers leading to the impoverishment of the overall human community. Here too objective reality rears its ugly head to slay a beautiful (?) theory. With increasing numbers, humanity has become richer and healthier. Malthusian theory is contradicted because increasing numbers do not cause poverty and misery; increasing diseases cause poverty and misery, albeit with a link between population density and disease. Put simply, our story is that (1) many more people died of disease than of starvation; (2) people were poor because they were chronically ill; (3) many diseases were “new” to the areas to which they were brought by an integrating regime of world trade; (4) high birth rates were a result of the high death rates; (5) evolution has bestowed on people of different ancestries different inherent immunities that give them partial protection in some (but not all) disease environments; (6) disease ecologies have changed in the recent past because of inadvertent human actions, and these changes have had profound consequences for human history; and (7) all data have to be interpreted to reflect changes in human productivity due to changing disease ecologies and changing long-run climates.
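The Malthusian mechanism we are disputing, and the amendment we propose, can be sketched in standard textbook notation (the symbols are ours, not Clark's, and the functional forms are a stylized sketch, not a claim about his model):

```latex
% Birth and death rates depend on income per capita y:
b = b(y),\; b'(y) > 0, \qquad d = d(y),\; d'(y) < 0.
% Diminishing returns: income per capita falls as population N rises:
y = f(N),\; f'(N) < 0.
% Population is stationary where b(y^{*}) = d(y^{*}). Any productivity
% gain pushes y above y^{*}; population then grows and drives y back
% down to y^{*}, so rising numbers appear to cause poverty.
%
% Our amendment: mortality and productivity also depend on the disease
% environment D,
d = d(y, D),\; \frac{\partial d}{\partial D} > 0,
\qquad
y = f(N, D),\; \frac{\partial f}{\partial D} < 0,
% so living standards can fall because D worsens -- new pathogens spread
% by trade and climate -- not because births rise, which is our story.
```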
People tend to believe that tomorrow will be much like today. As a matter of fact, that is literally true, but we must not commit the Fallacy of Composition: because today is very similar to yesterday, we cannot assume that the same reasoning applies to years, decades, or even longer spans. Change is the enduring constant. Evolution pervades us at all times and in all places, yet it happens so gradually that it evades our senses. History and science have to guide us in assessing its effects over time. Today humanity is richer than in centuries past, in part because we are healthier, more numerous, enjoy a more beneficent climate, and, not coincidentally, have a better knowledge of how to manipulate the material world from the macro to the micro levels. Without a large population, we could not have the specialists who abound today, making our world healthier and more productive. Take this book: major parts of our thesis may not be correct, but only a large and specialized economy could have two people devoting a good part of their working lives over many years to investigating the impact of minuscule organisms on humanity. Not all endeavors will bear fruit, but enough have (and will) to transform the world we inhabit. Paradoxically, our story is one of optimism; paradoxical because this book is about pathogens that have afflicted humanity more severely than all the tyrants and cutthroats combined. Our conclusion is that nothing in our genes or demographics implies a destiny of poverty, misery, and destruction. But as “dismal” scientists we must end more ambiguously: that we have no destiny leading to doom and destruction does not mean it cannot happen.