2. Biology, Diseases, and History: The Big Picture

Published on Apr 08, 2020

History is and has been an abiding concern of humanity; we want to know our past, how our ancestors lived, and how the world we live in changed. Humanity’s curiosity about its past has generated an immense literature not only about the history of specific events but also about the historical evolution of humanity itself: The Big Picture. Our book tells a big picture story of history. We view the history of humanity as the product of interactions among humans, economic constraints, and the physical environment. Historians generally treat the biological environment and ecology as exogenous factors; this is incorrect in all circumstances in the long run, and more than occasionally in the short run as well. Assumptions that the natural world does not impact history, or that its effects are constant and unchanging, are also incorrect. There are continual and changing interactions or processes between natural and human forces that affect history.

The issues that concern us are not easy; we rely on the sciences, economics, history, and evolutionary processes to advance our arguments. These processes are not easily grasped. Why should a reader invest time and effort in understanding our arguments? The reason is that the issues we write about delineate the possibilities and limits that humanity faces; they are extraordinarily important for understanding human history and the human condition. Explicitly incorporating economics into history yields insights that are otherwise unavailable or obscure because economics is about resources and their uses in the face of pervasive scarcity; without resources beyond those minimally required to sustain existence, humanity is, by definition, living at a subsistence level. This means that (1) there is no art, (2) culture is brutish and superstitious, and (3) physical existence is one in which people spend their waking hours looking for things to eat and warm places to sleep. Without adequate resources, we are in a Dark Age of physical violence, unending poverty, superstition, and an environment that guarantees that most lives are short, disease-ridden, and miserable.

In present-day rich societies, it is fashionable to feign a disdain for “money-grubbing” economic activities, yet these activities provide the resources that allow humanity to rise above the slime and literally voyage to the stars. It is true that abundant resources and freedom have allowed some to dissipate their lives ignobly, yet it is also true that abundance and freedom are the hallmarks of civilization; we cannot have the good without the bad. The pursuit of wealth within the constraints of voluntary exchange and civil society has raised all boats. The increase in human well-being has done more, and continues to do more, to alleviate poverty and improve the welfare of the poor and least advantaged than all the benevolent institutions that ever existed.

The foundation of our argument may be obvious. An increase in resources available to the average family is desirable and a worthy goal.1 We explicitly embrace the sentiment of Protagoras: “Man is the measure of all things.” In judging welfare, we do not weigh nonhuman welfare against human welfare. Neither trees nor mountains speak; those who rhetorically ask “who speaks for the trees” are disingenuous. What they mean is that they have certain preferences (tastes) that involve maintaining and increasing wooded areas and forests. Similarly, those who argue for animal rights never ask the animals what they want; they just presume captivity and/or human intervention are “bad.” Captivity is undesirable to most humans; domestic animals might actually prefer it to an existence that does not rely on humans. This is not as preposterous as it sounds; animal behaviorists have conducted experiments in which animals are given choices (one was for chickens between sheds and yards) to determine preferences. It seems that animals might prefer food and security to independence and freedom (Zeltner and Hirt 2003). Moreover, activists who are self-styled advocates for their favorite animals or things make no differentiation between their desires and the “welfare” of the animals or things on whose behalf they are supposedly acting.2

Our anthropocentric perspective is not without difficulties; in particular, the view of Thomas Malthus on the impact of population growth begets an internal contradiction. Malthusian economics and population theory are derived from Malthus, who maintained that people will reproduce too prolifically relative to the growth of resources, and that as a consequence humanity is doomed to impoverishment. Malthusian theory maintains that increasing population will reduce the level of resources per capita. Left to their own devices and in the absence of “misery” and “vice,” the human population will grow until the average standard of living is at subsistence. All progress in science, technology, and the practical arts will be subsumed by population growth. Malthusians predict that all human striving is for naught because population growth will literally consume all progress. This view led to “the Dismal Science” becoming a widely accepted epithet for economics. The epithet had some validity because the economic theory (as distinct from experience) conflated population growth with declining living standards; the theory and the “iron law of wages” continued apace in the economic literature throughout the nineteenth century.3
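
The Malthusian mechanism is easy to state formally. The toy simulation below is our own illustrative sketch (the production function, adjustment speed, and subsistence level are invented assumptions, not parameters from Malthus or the literature), but it captures the core prediction: improvements in technology raise population, not long-run living standards.

```python
# A toy Malthusian economy (all parameters are illustrative assumptions):
# income per head falls as population rises (diminishing returns on a
# fixed land area), and population grows whenever income exceeds
# subsistence. Technical progress is therefore "consumed" by population
# growth, and long-run income per head reverts to subsistence.

SUBSISTENCE = 1.0

def income(tech, pop):
    # output per head with diminishing returns to labor
    return tech / pop ** 0.5

def simulate(tech, pop, years):
    for _ in range(years):
        # population adjusts slowly to the income-subsistence gap
        pop *= 1 + 0.02 * (income(tech, pop) - SUBSISTENCE)
    return pop, income(tech, pop)

pop1, y1 = simulate(tech=10.0, pop=50.0, years=5000)
pop2, y2 = simulate(tech=20.0, pop=pop1, years=5000)
# y1 and y2 both settle at subsistence; doubling technology roughly
# quadruples population (income falls with the square root of numbers)
# while leaving long-run income per head unchanged.
```

The sketch makes the "internal contradiction" discussed below concrete: in this world, science and thrift change how many people exist, not how well the average person lives.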

Malthusian doctrine has had an impact far beyond economics; the association of increased population with increased poverty and misery has a long tradition in historical studies. North and Thomas (1973) are explicitly Malthusian; they argue that population changes that lowered living standards led to the demise of the feudal system; these changes were irreversible and subsequent events led to the rise of capitalism in the Netherlands and England from whence it spread. Harris (1978, 1985) concentrates on the availability of food and protein, arguing that primitive peoples ordered their societies to protect and enhance their abilities to acquire proteins; this affected their societies in a variety of aspects from prohibitions on consuming certain animals to the frequency of conflict. Clark (2007) continues in the Malthusian tradition ascribing the lack of growth prior to the modern era to population growth that consumed all the benefits of economic growth. In his view, real growth in the average level of living took place only after cultural, societal, and (more controversially) biological changes allowed economic growth to exceed population growth.4

In contrast, Simon (1977, 1996, 1998) is explicitly anti-Malthusian, contending that the growth of real output is positively influenced by population, and that declining populations are associated with economic retrogression. In her influential works on the transition to settled agriculture, Boserup (1966, 1981), while not explicitly anti-Malthusian, does relate increasing output per capita with increasing population; her nuanced argument is that the increase in output that accompanied population growth and settled agriculture may have been the result of an increase in the number of hours worked per day. Similarly, while not mentioning Malthus, Diamond (1997) in his prize-winning work was at least mildly anti-Malthusian in arguing that the absolute size of population was positively correlated with economic growth. In a later work, Diamond (2005) espoused a much more Malthusian viewpoint. In the biological sciences, both Alfred Russel Wallace and Charles Darwin credit Malthusian doctrine as one of the inspirations that led them separately to develop theories of evolution and natural selection.

More recently and less auspiciously, Malthusian doctrine has been used to promote doctrinal belief in “sustainability.” The quotation marks around sustainability are there because we do not know what the proponents of sustainability mean, nor are we convinced that the proponents know what they mean. As far as we can determine, advocates of “sustainability” believe that resources are finite and that current rates of usage are unsustainable because the resources will all be used up in some foreseeable future. Aside from violating the First Law of Thermodynamics (energy and matter cannot be destroyed, only transformed), making sense of the doctrine of sustainability requires us to assume that the current age is a golden age: that we have attained the peak of perfection in all scientific endeavors and technical excellence, and that no further refinements can be expected that will relax resource constraints. This is palpable nonsense; barring any natural or human-made holocaust, the next century will see “sustainable” standards of living that are at least three times that of the world of 2010.

“Sustainability” advocates are neo-Malthusians in the sense that they believe that higher living standards for larger populations are both unsustainable and undesirable. Unsustainable because the physical environment is finite and undesirable because the environment is “degraded” by the usages that humanity puts it to. What is a degraded environment is an interesting question and one that raises an important series of issues regarding history and pre-history. These issues are addressed analytically and empirically. In the next chapter, we present the outlines of an analytical apparatus that incorporates a Malthusian intuition within a model of long-run economic growth that is more consistent with empirical reality than Malthusian doctrine; in chapter 7, we provide empirical evidence in support of our model.

In the remainder of this chapter, we present a synopsis of the history of the role of population growth, environmental transformation, and economic change within the limits of what we know about the history of our species. This brief history of humanity, so to speak, not only places our story in historical context but also serves as an important introduction to the issues we address in this book—the role of microbiology, evolutionary theory, and diseases in understanding human history. The synopsis is essential for understanding that the biological environment changes over time and that disease environments and ecologies are endogenous to human actions.

A Brief History of Humanity

Scholars estimate that Homo sapiens first appeared as a separate branch of the hominid line about 195,000 to 200,000 years ago. DNA analysis suggests that H. sapiens became a distinct species about 200,000 years ago (Cann, Stoneking, and Wilson 1987); anatomically modern human fossils are dated about 195,000 years ago (McDougall, Brown, and Fleagle 2005). The hominid line goes back much further than 200,000 years, but this does not directly concern us. What does concern us is terminology. When we refer to “humans” or “humanity” we are referring to Homo sapiens. We realize that this may offend some who label other members of the hominid line “human”; we truncate the human line to include only H. sapiens for expositional ease.

Until about 50,000 years ago, human culture and living conditions had changed little since the first artifacts that anthropologists and archeologists have identified as human, dated at approximately 130,000 to 100,000 years ago. Why was there almost no change for (at a minimum) 50,000 or so years? And why was change so glacial from the mid–Paleolithic Age until the eighteenth century? Starting with the Paleolithic Age, recent studies by anthropologists and other students of prehistoric humanity argue that the human populations of the Earth until about 50,000 or so years ago were too small and too scattered to maintain techniques, skills, and knowledge that increased human productivity and enhanced living standards. This explains the absence of cave paintings, obsidian blades, throwing sticks, bows and arrows, or much else before 50,000 or so years ago. These techniques could have been invented before then, but with a small population and few peaceable interactions between roaming bands of hunter-gatherers, acquired skills were easily lost.

Diamond (1997, pp. 253, 256–57) describes how and why technological regression takes place in small populations; among Diamond’s examples is that of the Tasmanian Aborigines who were isolated after global warming and rising ocean levels separated Tasmania from the Australian mainland. Another example of technological regression is that of the indigenous Polynesians of New Zealand (the Māori) who migrated to the Chatham Islands around 1500 (Diamond 1997, pp. 53–66). Because the islands could support only a small population (estimates indicate there were only 2,000 Chatham Islanders in the early nineteenth century), the Chatham Islanders did not have a large enough population to maintain the level of material culture they possessed before they migrated. Subsequently, their knowledge of the skills and technology that allowed them to manipulate the physical world regressed. The Chatham Islanders reverted from primitive agriculture to a hunting-gathering society; they lost their sea-faring skills that facilitated their migrations, and with that also lost all contact with their Polynesian compatriots in New Zealand, becoming the Moriori people in their isolation. The regression was so severe that the Chatham Islands were easily invaded and many of the Moriori were massacred and eaten by their Māori cousins in the 1830s.

The maintenance of culture (religion, taboos, beliefs, skills, technology, knowledge, etc.) in small, isolated, pre-literate populations is difficult at best because the premature demise of a few or even only one individual may result in the complete loss of valuable skills and knowledge. This means that Paleolithic peoples had, in effect, to reinvent the wheel every few generations. Powell, Shennan, and Thomas (2009, pp. 1298–1301) present a formal model and archeological evidence to support the contention that human progress in the Paleolithic Age was hampered by a human population that was too small to maintain and enhance societal capital. They contend that simple human demography can explain cultural development without recourse to evolutionary changes in human cognition. Notice that these observations are almost the exact opposite of the predictions that simple Malthusian theory would make about the effects of population growth; in the early Paleolithic Age, the human population was too small to maintain societal capital, and when unanticipated declines occurred, living standards fell. Contra Malthus, no amount of misery or vice could attenuate the decrease in per capita output.

It was only after the human population was large enough to retain and improve on societal capital that a blooming of Paleolithic culture (about 50,000 or so years ago) took place. The blossoming of culture ultimately allowed humans to acquire and preserve a technology that allowed them to harvest, and eventually hunt to extinction, most of the megafauna. In other words, in the late Pleistocene epoch, Paleolithic peoples were able to harvest megafauna at an unsustainable rate because of rising cultural and technical abilities. If this scenario is correct, it means that an increasing human population was the ultimate cause of the extinction of the megafauna, but in a much more circuitous fashion than that implied in Malthusian doctrine. The corrected sequence is that an increasing human population allowed societal capital to build up so that the harvesting of large game animals increased. The increase in food resources allowed populations to grow more rapidly, thereby enhancing cultural evolution; this had a feedback effect on hunting technology and skills. When the rate of harvesting of large animals exceeded their ability to reproduce, they became extinct. The decline in easily available (low-cost) animal protein forced humanity to rely on agriculture.
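
The feedback loop just described (more people, more societal capital, better hunting, fewer animals) contains a sharp threshold that a few lines of code can illustrate. This is our own sketch with invented parameters, not a calibrated model: a harvest rate below the prey's maximum reproductive rate yields a stable, smaller herd, while a harvest rate that slowly improves with skill eventually crosses the threshold and extinguishes the herd.

```python
# A sketch of the overkill threshold (illustrative numbers throughout,
# not archeological estimates): a prey herd grows logistically while
# humans remove a fraction of it each year. With a fixed harvest rate
# the herd settles at a sustainable equilibrium; let hunting skill
# improve slowly and the same herd is eventually driven to extinction.

def simulate_herd(growth, capacity, harvest, skill_growth, years):
    herd = capacity
    for _ in range(years):
        births = growth * herd * (1 - herd / capacity)   # logistic growth
        herd = max(0.0, herd + births - harvest * herd)  # annual offtake
        harvest *= 1 + skill_growth   # cultural/technical feedback
    return herd

# fixed 1% harvest against 5% maximum growth: a stable, smaller herd
stable = simulate_herd(growth=0.05, capacity=10_000, harvest=0.01,
                       skill_growth=0.0, years=2000)
# the same start, but hunting skill improves 0.2% a year: the harvest
# rate eventually outruns reproduction and the herd collapses
extinct = simulate_herd(growth=0.05, capacity=10_000, harvest=0.01,
                        skill_growth=0.002, years=2000)
```

Note that extinction here requires no intent to exterminate; a slow, steady improvement in technique is sufficient, which is precisely the circuitous mechanism described above.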

Now it is certainly true that the cause of the late Pleistocene megafauna extinctions has been much debated over the years and is still the subject of research among archeologists, paleontologists, and others. Nevertheless, few disagree that the scientific evidence points to a role for human influences. While the circumstances and timing of the extinctions varied depending on the continent, it is widely accepted that one or a combination of two major mechanisms (climate change and human overkill) were at play. In the Northern Hemisphere, the evidence suggests that human impacts likely interacting with climatic change determined the specific location and timing of the extinctions; in the Southern Hemisphere, the evidence is still unsettled; in Australia and most oceanic islands, the evidence indicates that humans were the likely cause of the extinctions (Barnosky, Koch, Feranec, et al. 2004). Our focus is on the North American continent where the evidence also points to the arrival of humans and human hunting. Many scientists maintain that the first Americans who migrated toward the end of the last Ice Age glacial (about 15,000 or more years ago) from eastern Siberia via the Bering Land Bridge, or along the near-shore coastal waters just south of it, harvested and eventually overhunted the North American megafauna, resulting in their permanent demise at the end of the Pleistocene (about 12,000 to 10,000 years ago).5 Some scientists argue that climate change that followed the last North American glacial disrupted the megafauna’s habitats and modes of living, leading to their ultimate extinction.6 But given the megafauna had weathered repeated glaciations, including several major ones, before their permanent demise, the climate explanation is less convincing than a role for human impacts. An alternative human impact mechanism might appear to be consistent with our disease story of history. 
When humans arrived and spread into new areas, they brought with them various pathogens and disease-causing parasites; these pathogens and parasites jumped from humans (and their dogs) to the megafauna, and because these were new diseases to which the megafauna were highly susceptible it led to the megafauna extinctions (see MacPhee and Marx 1997). However, as we explain in chapter 5, since the first Americans arrived in small bands with few animals in cold weather during the last glacial they were not likely to have arrived with many parasitic diseases.

Returning to the issue of sustainability and environmental degradation: can we say that Paleolithic peoples degraded their environment? Whether we label it degradation is entirely a matter of tastes; because of changes during the Paleolithic Age, we have no mammoths or mastodons, but, conversely, we have museums. The question cannot be answered definitively because there are no agreed-upon measures (metrics) for desirability, but there is one discomforting fact: if humanity had not experienced the changes it did, the human population of present-day Earth would be less than two percent of what it is now, and you would not want to meet the humans that did exist.7

Population growth did not cause a sudden change in the Paleolithic economy because the change in population was not sudden; the change from hunting-gathering societies to permanent agricultural settlements occurred over millennia. When there is abundant game, primitive peoples find the lifestyle of a hunter-gatherer more attractive than that of an agriculturalist. (The vestiges of this preference for hunting remain in contemporary humans; many people pay vast amounts to go hunting and fishing, sometimes not even retaining the animals for food; the same people may pay someone else to maintain their lawns and gardens. Unlike hunting, however, we know of nobody who pays for the privilege of tilling, planting, weeding, and harvesting plants.) Regardless, when game animals became scarce and too time-consuming (costly) to obtain, humanity adapted to the decreased abundance of easily harvested game by incorporating some form of agriculture into its food-producing activities. The transition to agriculture was innately tied to humanity’s use of fire. From early times fire was used as a hunting tool; fires set to entrap herbivores had the advantage of producing more grazing areas that served as food for animal herds (Pyne 1991). From following migrating animals and using fire, it was only a short step to swidden (slash-and-burn) agriculture.

Swidden agriculture is typically one of the first steps on the agricultural ladder (Boserup 1966, 1981). A common practice in swidden agriculture is to girdle trees (rings are cut around them) in order to kill them; their death and subsequent desiccation allows the forest to be set ablaze. The fires do the hard work of clearing lands and enriching soils. Tribal peoples would then move in and establish primitive farms on a small portion of the cleared land, with the remaining portions left to grasses and shrubs. The grasses and shrubs would attract grazing animals and these would be hunted. After a few years, the land’s agricultural fertility would have been depleted by cropping, and the game reduced by hunting. The tribe would then move on to another section of the woodlands whose trees had been prepared and burnt for such an eventuality. Swidden agricultural cycles were sometimes as long as twenty years (the interval between which the land was in fallow; that is, not being used for agriculture).

Were killing trees and using fire to clear the land symptomatic of environmental degradation? Again, we cannot answer that question, but if they were, they were rectified and eventually superseded by settled agricultural communities. Swidden agriculture is tenable only if the population density is low enough to subsist on a relatively low yield per acre. (In a twenty-year swidden cycle, only a small fraction of a twentieth of the land area is cropped; the remainder of the twentieth that has been burned and not used in agriculture is devoted to grass lands to attract grazing wildlife.) Paradoxically, because swidden agricultural techniques were successful in increasing food resources for the human population, the increased population made swidden agriculture untenable.
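
The parenthetical arithmetic above can be made explicit. Every figure in the sketch below (cropped share, yield, caloric need) is an assumption chosen only for illustration; the structural point is that a twenty-year fallow cycle divides whatever the land can yield by twenty, so the supportable population density is very low.

```python
# Illustrative arithmetic for the swidden constraint (all figures are
# assumptions, not historical estimates): a 20-year fallow cycle means
# only 1/20 of the territory is newly cleared each year, and only part
# of each clearing is actually cropped.

CYCLE_YEARS = 20                        # years between uses of a given plot
CROPPED_SHARE = 0.25                    # assumed share of a clearing under crops
YIELD_KCAL_PER_CROPPED_ACRE = 800_000   # assumed annual yield of a cropped acre
NEED_KCAL_PER_PERSON = 900_000          # roughly 2,500 kcal per day

# yield averaged over the whole territory, cropped and fallow alike
effective_yield = YIELD_KCAL_PER_CROPPED_ACRE * CROPPED_SHARE / CYCLE_YEARS

people_per_square_mile = effective_yield * 640 / NEED_KCAL_PER_PERSON
# roughly 7 people per square mile under these assumptions: workable
# only at low population densities, as the text argues
```

Whatever numbers one assumes, dividing by the cycle length dominates the calculation, which is why population growth itself made swidden agriculture untenable.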

The decline in game animals was the impetus that gave early humanity reasons to find alternative ways of acquiring food. Primitive humanity knew the relationship between seeds and plants long before cultivation became a permanent form of survival. When there were sufficient numbers of easily harvested animals, the lifestyle of hunter-gatherers was more attractive than that of primitive agriculturalists. Skeletal remains from hunter-gatherer societies indicate that they were more robust, taller, and healthier than early agriculturalists (Molleson 1994). So why did they give up hunting-gathering? It was because the game animals diminished and in some cases disappeared. North and Thomas (1977) have an explanation for the transition from hunting to farming that involves increasing human populations with diminishing animal populations.

The switch to agriculture was neither sudden nor irreversible. Hunter-gatherers persisted for long periods following herds and gathering plant foods; swidden agriculture arose and long-cycle swidden agricultural periods were shortened as population grew and became more dependent on crops and domesticated animals for food. We also have noted that the increase in population positively affected the acquisition of knowledge, skills, and techniques, but there were negative consequences as well—namely the increased frequency of disease among relatively sedentary agricultural populations.8

Accompanying the population increase were a number of changing natural and societal circumstances that altered the disease ecology. First, even before agriculture, the increased human population and contacts between animals and humans that resulted from a change in hunting techniques allowed the transmission of infectious diseases among people, and between peoples and animals. Second, an increased reliance on agriculture reduced the quality of proteins available and caused protein deficiency diseases and skeletal deformities. The physical evidence is unambiguous; the skeletal remains of hunter-gatherers show a much healthier population than that of their agricultural successors (Molleson 1994). Third, an increased reliance on domestic animals meant that some animal diseases would have jumped from herds of domestic animals to humans; these are called zoonotic diseases. As with all “new” diseases, zoonotic diseases could have devastating effects on humans (see Cohen 1977; Diamond 1997; McNeill 1976). Some diseases that originated in animal populations and severely affected humans are measles, chicken pox, rabies, sleeping sickness, and smallpox; these diseases had immense effects on humanity. A fourth reason for increased diseases was that food storage techniques in primitive times were worse than primitive. Food-borne diseases (ergotism, botulism, E. coli, Campylobacter, Shigella, hepatitis A, Giardia lamblia, Cryptosporidia, among others) and diseases spread by vermin that are attracted to stored foods are much more common in sedentary populations than among hunter-gatherers. Reliance on stored food means that food-borne diseases became much more prevalent in the human community as humans made the transition from hunting-gathering to agriculture. Fifth, diseases caused by pollutants from the wastes of humans and domesticated animals increased among sedentary peoples. Hunter-gatherers left the water and soil that they fouled behind during their migrations.
Sedentary agriculturists lived (and died) with the polluted water and soil they created. Water-borne diseases are extraordinarily numerous; they include infections from bacteria, viruses, protozoa, and parasites. They became rampant among sedentary primitive peoples; hunter-gatherers largely escaped them. Finally, the increased population allowed more contacts between peoples. This created increased chances of cultural and technological interchange; it also created a mechanism whereby epidemic diseases could be created and spread easily. The typical hunting-gathering tribal society had too few contacts among tribal groups to allow pathogens to spread. This was one of the reasons behind the relatively robust health of hunter-gatherers. With sedentary agriculture and a denser population, diseases could spread more easily to previously isolated peoples and eventually become endemic. These diseases would have had devastating effects on the human population during the transition from epidemic to endemic diseases; McNeill (1976) recounts the classic story of this transition.

It should be explicitly recognized that parasitic diseases can have a devastating impact on human development; the human brain is an immensely complex organ that requires a great many resources to grow and function. In newborns, the developing brain takes about 87 percent of an adequate metabolic budget; in five-year-olds it takes 44 percent; and its requirements level off among adults at about 25 percent.9 Parasitic diseases reduce the quantity and quality of nutritional intake. This is particularly devastating among the young because infants and children denied adequate nutrition by parasitic infections (or by having lactating mothers who were infected) cannot later, during adolescence, acquire the brain development they missed as infants and young children. The increasing prevalence of diseases in sedentary populations would have left them stunted, physically and intellectually. So hunter-gatherers were stronger and smarter than primitive agriculturalists; the negative effects of diseases on human brain development would last as long as the bulk of humanity resided in a polluted and disease-ridden environment.

What we envision for the transition from hunting-gathering is an enormously long time (stretching over millennia) during which human societies oscillated between periods in which agriculture was very important in the human economy and periods of population collapse in which primitive agricultural societies reverted to hunting-gathering. Recall that primitive agriculture was hard work, producing few sources of high-quality protein and a continually disease-ridden community. Epidemic disease could have arisen from any of a number of sources. For example, the killer epidemics could have been derived from zoonoses that were random mutations of one of the numerous pathogens of animal herds. The diseases would, on occasion, have disastrous consequences for the population. Because these were “new” diseases (the population had not been exposed to them on a regular basis), there would be no acquired immunities and the frequency of innate immunities would be low.10 Under these circumstances, we would expect to see occasional population implosions. A dramatic decline in the human population would have a rapid positive effect on the animal populations that humans hunted.

The generational time of early humanity was about sixteen to twenty years, while the generational time of the animals that primitive agriculturalists still hunted was much shorter, varying from months to a few years. This implies that a human population implosion would be accompanied by an animal population explosion. Surviving humans would be likely to abandon sedentary agriculture for the more leisurely protein-rich life of hunter-gatherers. Agriculturalists would have declined relative to surviving hunter-gatherer tribes. This would have made them more susceptible to attacks from foraging bands of hunter-gatherers. One of the more unpleasant aspects of primitive peoples is their more than ceremonial cannibalism (White 2001); primitive peoples might not have had good taste, but apparently they tasted good.

The transition to agriculture was not a sudden one of centuries, but one of millennia, for a number of reasons. First, the large animals that were sources of protein and relatively easily harvested had to be sufficiently diminished before primitive agricultural practices were adopted. Fire allowed smaller animals (deer, antelope, wild cattle) an increased resource base (grasslands) that sustained the hunting economy for a while. Second, swidden agriculture arose and would have lasted for an indeterminate number of generations, possibly followed by a more settled agriculture. Third, increased population density would have introduced epidemic diseases that caused the human population to cycle. Fourth, a reversion back to hunting-gathering would mean a slower rate of population growth because mothers in hunting-gathering societies can only care for one child every five or so years. (People who are constantly on the move have to carry small children; mothers with a newborn and a two- or three-year-old would have to make unpleasant choices.) So the reversion to hunting-gathering would not end after just a few generations. Finally, the transition to agriculture would begin again with the human society acquiring the lost skills of swidden agriculture and animal domestication. From the perspective of an individual human, this process took an immensely long time. However, sometime around 15,000 years ago in the Fertile Crescent human settlements emerged that have been continuously occupied ever since.11

The point of the preceding discussion is to emphasize that environmental changes that affect “sustainability” were part and parcel of human experience for the past 40,000 to 50,000 years. To say that these changes degraded the environment raises an obvious question: to whom? Throughout history, in the past, present, and future, sustainability depended, depends, and will depend on the population, skills, and capital accessible at that time. Some have termed the transition to agriculture as humanity’s biggest mistake; not only is this misanthropic, it is somewhat self-contradictory. Only a civilization that records the history of humanity could produce someone who is capable of thinking that the past was more desirable than the present. It seems unlikely that Paleolithic hunter-gatherers would ever have had the ability (or the time) to contemplate the distant past; by necessity their lives were dominated by the immediate present, contemplating the past beyond a couple dozen or so years would be analogous to present-day humans living in the fifth dimension—literally incomprehensible.

The blooming of Paleolithic culture that took place about 50,000 or so years ago did not ensure increasing living standards over the ages, nor did permanent settlements persist after their first appearance. To explain why, we have to go back to early Homo sapiens and their hominid ancestors. For literally millions of years, the hominid economy was one of foraging. It is true that early humans improved their skills and hunting techniques over the eons, but hominids left no discernible footprints on the natural ecology until the Paleolithic “renaissance.” This is significant because it means that hominid population growth was so low that it altered neither the number of species nor other aspects of nature. Population growth was low because fertile women could give birth to and care for a child only once every five years or so. Hunting-gathering peoples were constantly on the move; to be safe, children had to be with their mothers (or some other caring adult) at all times. In hunting-gathering societies, mothers have to carry small children while traveling constantly, searching for and gathering foods. Women who had an infant and a toddler were almost sure to lose one (or both) to the environment or to inadequate nutrition (mothers would typically breast-feed children until well past the toddler stage). Consequently, women who had few years between births were likely to have had fewer surviving children than women who gave birth once every, say, five or more years.

Over generations, natural selection would alter the population’s gene pool to increase the relative frequency of genes that produce greater infertility in women who are lactating. Similarly, natural selection would tend to reduce the frequency of genes associated with multiple births. Evolution also affects cultural practices; in a hunting-gathering society, a culture that reduces the chances of lactating women becoming pregnant would have survival value. Individuals who abided by such practices would be more likely to reproduce successfully. This implies that early humanity had “naturally” low birth rates. With the Paleolithic blooming, the resource constraint on population growth would have relaxed, but we would not expect an immediate demographic change. About 40,000 or so years ago, the Paleolithic population grew more rapidly than before, but not at a rate twenty-first-century humans would call rapid. Biology and culture ensured that Paleolithic populations did not explode.

But grow they did, and with the growth of population came increased human density, permanent settlements, and an environment in which a multitude of diseases proliferated. The diseases reduced population directly by causing death, and indirectly by increasing morbidity (illness/sickness) that reduced fertility (sick women have fewer surviving children).

What we envision is a scenario whose essence is captured in figure 2.1. (As in all of the book’s schematic diagrams, the direction of the arrows in figure 2.1 indicates the flow of causality from an originating box to a target box; the algebraic signs indicate the impact of the change in the originating box on the target box, holding all other factors constant.) Starting at the top, an increase in population increases density; increased density has two effects: denser populations have more diseases, and density encourages permanent human settlements and agriculture (agriculture becomes more attractive than foraging). Permanent settlements encourage innovations that increase culture; offsetting these beneficial effects, a sedentary population encourages the growth of infectious diseases. Infectious diseases cause death and/or morbidity, which in turn reduce population. The decrease in population may be offset by the acquisition of skills and technology that permanent settlements provide.


Figure 2.1: Demographic changes and Paleolithic transition

With an increase in population, permanent agricultural settlements appear and proliferate. The settlements have two effects. (1) There is an increase in skills and culture that increases output and has a positive feedback effect on population. And (2) the disease environment becomes more deleterious, which causes an increase in death and morbidity that in turn has a negative feedback effect on population. The ultimate outcome is ambiguous because, if population falls sufficiently, hunting-gathering becomes a more attractive lifestyle. As long as diseases were deadly enough to reduce the human population sufficiently, the human community would cycle between wandering hunter-gatherers and societies where agriculture plays a greater role. Permanent settlements had ambiguous effects on Paleolithic peoples. On the one hand, they provided increased output, increased human numbers, and gave an impetus to civilization. On the other hand, permanent settlements subjected humanity to diseases that theretofore had been rare or unknown; these diseases reduced population, set back civilization, and made hunting-gathering a more attractive lifestyle. Humanity cycled that way for millennia, but by about 14,000 to 15,000 years ago permanent settlements no longer vanished; agriculture was here to stay. Cyclical stasis did not persist because of evolutionary processes.

The events depicted in figure 2.1 took place over millennia, and time changes all things. The changes we are concerned with are the impacts that these events had on the human population. We expect that the constant culling of the human population would have had two effects. One would be to produce a population that is genetically more resistant to these diseases. The second would be cultural: humanity would gradually acquire a collection of practices that both reduced the chances of infection and, if infected, ameliorated its severity.

Starting with the gene pool, we expect that over generations those people who were most susceptible to infectious diseases either died without reproducing or left few descendants. Over generations, the population would become more representative of people whose constitutions made them less vulnerable to infection and more resistant to the diseases of permanent settlements. The cyclical changes in the human population caused by changing living conditions and changing disease environments altered the human gene pool.

The events depicted in figure 2.1 also affected cultural practices that ameliorated the diseases of permanent settlements. A process of trial and error (which is what evolution is) would produce a number of natural medicines that treated the diseases more or less effectively. Living styles also affect health; practices such as a reliance on spring or well water rather than surface water, marriage rules, birthing and lactation practices, rules on the disposal of human and animal waste products, care of the sick, cooking and meal preparation, and a host of other activities can affect the health of a population. The practices that had a positive effect on health would tend to persist relative to those that had the opposite effect or no effect. People who acquired the cultural practices that lessened the infectious disease penalty would leave more descendants.

Recurrent bouts of disease and permanent settlements also altered the evolutionary success of genes that affected fertility. In permanent settlements, small children are less of a burden to women than in nomadic foraging societies. In nascent agricultural societies, planting, weeding, food preparation and storage, home maintenance, and garment making are typically “women’s work.” Infants and toddlers are less of a handicap in doing these tasks than in the tasks required of women in societies without permanent settlements. Unlike their counterparts in hunting-gathering societies, women in permanent settlements who have smaller intervals between births do not experience extreme child mortality. Consequently, women with smaller intervals between pregnancies in permanent settlements are likely to have more surviving descendants than their counterparts in hunting-gathering societies. Genes and practices that reduced birth intervals and increased fertility thus bestowed an evolutionary advantage on their possessors in permanent settlements, just as genes that conferred a measure of resistance to the diseases of settlements did; over evolutionary time, the relative frequencies of these genes would have increased in societies subject to cyclical exposure to these diseases.

The genes that conferred greater fertility and resistance to diseases persisted in the gene pool even when diseases reduced the population sufficiently to make a hunting-gathering lifestyle the more attractive option. The reduced population would have a gene pool with fewer alleles (alternative forms of a gene) that made people more susceptible to disease, because a disproportionate number of these people would have died without reproducing. The people reproducing would, in all likelihood, be more resistant to disease. The surviving breeding population would provide a gene pool different from the one the population had prior to the epidemics. The changing gene pool made the population slightly more resistant to epidemic disease; subsequent population growth made agriculture and permanent settlements a more attractive way of surviving.

Epidemic diseases can cull the gene pool dramatically. Genes that confer resistance would become substantially over-represented during epidemics. These genes come at a “cost”; they could have other effects in different environments that reduce their evolutionary success. But these effects would have a noticeable impact on the gene pool only over many generations, by which time the population may have grown large enough to force a return to agriculture and permanent settlements. Figure 2.2 is a representation of this scenario. The vertical axis represents the frequency (with a maximum of 100 percent and a minimum of 0) of an allele in the gene pool that gives some resistance to diseases of settlement or enhances fertility. On the horizontal axis is time; since we are referring to evolutionary processes, the units of time are generations or centuries. The shaded areas along the horizontal axis represent periods when agriculture is ascendant. The peaks of relative frequencies coincide with the end of agricultural ascendance. The peak of agriculture coincides with a decline in population caused by diseases; it is followed by a period when hunting-gathering is ascendant, the unshaded areas. The decline from the peak of the allele’s frequency during a period of hunting-gathering is less severe than the increase while agricultural settlements are in the ascendance. This is because of the aforementioned severity of genetic culling in the disease environment of permanent settlements relative to that of nomadic foragers. There are an indeterminate number of cycles, culminating in a final period when there is no reversion to hunting-gathering. The relative frequency of the allele becomes asymptotic to a maximum relative frequency (always less than 100 percent) as agricultural settlements become a permanent fixture on the landscape.


Figure 2.2: Changing allele frequencies in different Paleolithic societies
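This ratcheting trajectory can be illustrated with the standard haploid selection recursion, in which an allele at frequency p with selection coefficient s moves to p(1+s)/(1+ps) each generation. The phase lengths and selection coefficients below are hypothetical round numbers (stronger selection for the allele while agriculture is ascendant, weaker selection against it while hunting-gathering is ascendant); only the qualitative shape matters: a rising, ratcheting frequency that approaches, but never reaches, fixation.

```python
# A sketch of the allele-frequency ratchet, using textbook haploid
# selection. Phase lengths and selection coefficients are hypothetical
# round numbers chosen for illustration.

def step(p, s):
    """One generation of haploid selection; s > 0 favors the allele."""
    return p * (1 + s) / (1 + p * s)

p = 0.05                      # initial frequency of the resistance allele
trajectory = [p]
for cycle in range(6):
    for _ in range(40):       # agricultural phase: selection for
        p = step(p, 0.03)
    for _ in range(40):       # foraging phase: weaker selection against
        p = step(p, -0.01)
    trajectory.append(p)      # frequency at the end of each full cycle
```

Because selection for the allele during agricultural phases outweighs the weaker selection against it during foraging phases, the end-of-cycle frequencies rise monotonically while remaining below 100 percent.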

Permanent settlements had other profound effects on humanity besides altering the relative frequency of alleles. Because of deleterious disease environments, primitive agricultural settlements had higher death rates than hunter-gatherer societies. Given what we know about Paleolithic hunter-gatherer societies, they had very low rates of population growth. So, if death rates were higher in settled (agricultural) societies and their populations grew, this means that birth rates among primitive agriculturalists had to have been substantially higher than the very low birth rates of hunter-gatherers.12

The increase in birth rates did not happen instantaneously. In primitive agricultural societies, women who had short intervals between pregnancies were more likely to have more surviving descendants than women who had longer intervals between pregnancies. In periods when agriculture was in decline, the reverse would be true; shorter intervals would produce fewer descendants while longer ones would produce more. The cycling between agriculture and hunting-gathering would produce ambiguous results over time. However, it is indisputable that primitive agriculture and permanent settlements did eventually endure, and hunting-gathering societies were pushed onto lands that had low levels of agricultural productivity. Eventually birth rates in primitive agricultural societies must have increased sufficiently to offset the increased death rates that accompanied permanent primitive settlements.

The important points are that (1) death rates in primitive settlements were typically higher than those of hunter-gatherers and (2) birth rates in primitive settlements must have been higher; otherwise, primitive settlements would not have prevailed. Permanent settlements not only lasted, they grew, and again this means a birth rate higher than the death rate. The more deleterious the environment, the higher the birth rate had to be. High death and birth rates have an enormous impact on the age distribution of populations. Populations with these characteristics have an age distribution shaped like a pyramid. Many children are born but few reach adulthood. This means that the dependent portion of the population is relatively large and remains that way. There are few producing adults and many nonproducing infants and children. Resources per capita are few; the population is poor because people are dying, and few are working. This turns the Malthusian doctrine upside down. In this scenario death causes poverty, while the Malthusians say poverty causes death. This is our alternative to the Malthusian scenario; it is more consistent with history (demographic and medical) than the Malthusian story. Inadequate food supplies by themselves do not cause massive die-offs. High death rates during famines are typically caused by people migrating to where they perceive food is in greater abundance. Large numbers of people congregate and infectious diseases spread; during famines people typically die from infections like typhus, cholera, typhoid fever, diarrheal diseases, influenza, and other infections. Typically they do not die of insufficient nutrition but of infectious diseases (Livi-Bacci 1991, 1992, 2000). Any increased morbidity that accompanies the disease environment also has negative effects on productivity; this exacerbates the effects of disease and increases poverty.
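The age-pyramid arithmetic can be sketched with a deliberately crude stationary-population model. Assuming a constant annual survival probability s (a simplification, and the particular survival values below are illustrative assumptions, not historical estimates), the number of people alive at age a is proportional to s**a, so the share of the population under age 15 works out to 1 - s**15.

```python
# A back-of-the-envelope illustration of the age-pyramid claim. In a
# stationary population with constant annual survival s, the count at
# age a is proportional to s**a; summing the geometric series gives the
# under-15 share as 1 - s**15. Survival values are illustrative only.

def share_under_15(annual_survival):
    return 1 - annual_survival ** 15

high_mortality_share = share_under_15(0.96)  # harsh disease environment
low_mortality_share = share_under_15(0.99)   # modern-style mortality
```

Even this crude model shows a harsh mortality regime roughly tripling the dependent share of children relative to a mild one, which is the heart of the death-causes-poverty argument.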

It is true that inadequate nutrition may reduce the effectiveness of the human immune system in combating disease; as a consequence those who already have compromised immune systems (the infirm, the elderly, and the very young) are disproportionately likely to die. It also is true that famines contribute to migration and crowding that accelerate the spread of infectious diseases. All these caveats aside, we would not expect to see famines kill many healthy adults; long-term declines in population are due to rising death rates that are invariably associated with infectious diseases.

Permanent settlements and the general increase in the density of the human population created a disease environment that increased death rates. In spite of this, the human population increased substantially from 12,000 years ago to the present (2011); the expansion in human numbers was sporadic with more than a few centuries witnessing massive population declines. Regardless, the expansion of human numbers means that over time the birth rate must have increased relative to that of the Paleolithic Age to compensate for the higher death rates caused by the disease environment, and to allow for the increase in human numbers.

Figure 2.3 illustrates the process of demographic change affecting the human economy. Ignore the bottom two boxes (“Increase in birth rates” and “Dependency”) for the time being. Starting at the top, an increase in death rates (due to infectious diseases) leads to both a decrease in population and lower incomes. Incomes fall because there is a decline in per capita output as markets shrink and become less specialized. Lower incomes reduce birth rates, which in turn has a negative impact on population. If this were all that happened, population would decline until society reverted to hunting-gathering. But populations did stabilize and increase despite the excess mortality that permanent settlements imposed on humanity. Looking now at the lower portion of figure 2.3 (lower left), this means that birth rates did increase. The increase in birth rates is depicted as being caused by events outside the purview of figure 2.3. (The causes of increased births can be attributed to genetic and cultural changes.) The increase in birth rates increases population but also increases the percentage of the population that is dependent, and that in turn lowers income. Again, this explanation turns the Malthusian view upside down; here death causes poverty, not vice versa.


Figure 2.3: Changing demographic regimes

There is some evidence that living standards at the beginning of the sixteenth century were no higher than those of antiquity (Clark 2007). Whether correct or not, stagnant living standards are consistent with both explanations—that poverty causes illness and death (Malthus) or that illness and death cause poverty. We wish to emphasize that the bubonic plagues (the Justinian Plague, 541 to 542, and the Black Death, 1347 to 1350) that caused major declines in the European population were infectious diseases unrelated to famine. A Malthusian “crisis” would not cause a sudden die-off; it would cause a relatively stable population or a slowly declining one.

Permanent settlements led to urban living and “civilizations”—societies whose political controls are centered in cities. The words civilization and cities have more than etymology in common; societies based on cities develop more rapidly economically than non-city-based societies because they have more people in a relatively small space. This allows for the accumulation of capital (both physical and intellectual), increased productivity due to increased specialization, and a lower cost of protection from brigands, barbarians, and foes of all kinds. These advantages came with costs; an increasing biomass (humans, animals, waste products, food, and vermin) provides an environment conducive to pathogens and the spread of diseases. The human population housed in urban agglomerations fluctuated along with fluctuations in the disease environment and human adaptations to it. Human adaptations were typically not conscious but a product of biological and cultural selection. For example, drinking beverages made with boiled water or with an alcoholic content of 4 percent (or more) was an effective way of avoiding water-borne pathogens. The people who drank these beverages might not have known their beverages were safer than river water; nevertheless, whether by taste or intuition, a preference for these beverages developed in various societies. In the Orient, hot beverages were prophylactic against pathogens, as were beer and wine in Europe.

These preferences were also more likely to be culturally transmitted to descendants. Dead people cannot tell their children to drink water, and the probability of dying was significantly higher for people who were resistant to the charms of tea and/or alcohol. Biological evolution also worked on populations: alleles that made one very susceptible to water-borne pathogens were more likely to be culled than alleles that conferred some resistance.

Still, population growth from antiquity to the beginning of the modern era (circa 1500) was subject to catastrophic declines. As McNeill (1976) points out, these catastrophic declines were in some part due to economic progress. As societies developed, trade links sprang up, and with trade in goods and people came another, more nefarious, trade: that of pathogens. The Pax Romana allowed diseases to spread throughout the Empire; this was arguably a major factor in the collapse of the Roman Empire. The visitation of plague (pneumonic and bubonic) in Europe devastated its population in two distinct periods, cataloged as the Justinian Plague and the Black Death.

The history of humanity from antiquity into the early modern period (circa 1700) is explicable in both a Malthusian model and by our framework. The differences between explanations are revealed by history. We argue that people were poor because (1) they were sick, and sick people have lower productivities than healthy people, (2) high death rates led to high rates of dependency and lower rates of output per capita, and (3) high death rates produced a younger and more violent population. Young men have a comparative advantage in warfare and violence; a population in which the vast majority of people are under 25 is more likely to be violent than a population with a greater mean age (Heinsohn 2006). The ceaseless violence from the fall of Rome to the early modern period can, in part, be attributed to the age structure, which, in turn, is ultimately attributable to the disease ecology. High death rates also meant that investments in human capital had a low rate of return, and in societies with high levels of illiteracy some skills and techniques were invariably lost to untimely deaths. This contributed to economic regression.

The European Voyages of Discovery

The assumptions that the biological environment is unchanging and that the ecology is exogenous to human actions are spectacularly incorrect when it comes to the Voyages of Discovery. The voyages led to the subsequent settlement of the New World with Old World peoples (African and European), animals, plants, and pathogens. The North American ecology that arose out of these migrations was endogenous to human actions. The Old World peoples, plants, and animals that came to North America set in motion interchanges of organisms and, in particular, disease pathogens that fundamentally altered the ecology of its various geographic regions. The changes in the disease ecologies were not uniform; conditions of climate and geography affected which diseases would predominate in the various regions. Altered disease ecologies affected the economic possibilities of settlers. The biological world places constraints on human choices. Before the late-nineteenth century advances in public health, sanitation, and medical science (about 1880 to 1900), the local and regional disease environments that evolved as settlement took place, and later the integration of these spatially distinct disease pools, had considerable consequences for Old World peoples (Africans and Europeans) and their American descendants throughout the New World.

In this book, we concentrate on the fundamental role played by microorganisms, evolution, and infectious parasitic diseases in the economic development of the United States in both the colonial and postcolonial eras, until circa 1900. Recognition of the economic and biological consequences and interactions of the African and European migrations to what became the United States has important implications for the interpretation of American economic history prior to the twentieth century. This biological and economic examination of the historical development of America indicates the importance of the physical environment in understanding the history of the health, life, and employment of Africans, Europeans, and their descendants. The concentration on the biological world suggests an interpretation of the economic history of African slavery in America that alters many of the prevailing views of American slavery. Recognizing the interactions of infectious parasitic diseases with economics yields important conclusions concerning the expansion of African slavery in America, the regional concentration of slaves in the South, the productive efficiency of slavery, and slave living standards.

Concluding Thoughts on the History of Early Humans

What can we conclude from the history of our ancestors, both historic and prehistoric? One conclusion is that the growth of the Earth’s human population cannot continue along the trend of the last 150 or so years; if it did, simple compound growth would in the foreseeable future make the human population so large that there would be more than ten people per square meter of the Earth’s surface (including the oceans). Nevertheless, we can still have a positive, but lower, rate of population growth.13
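The compounding claim is easy to check with back-of-the-envelope numbers. The inputs below are our own round-number assumptions (a world population of about 7 billion circa 2011, growth of about 1.1 percent per year, and an Earth surface of about 5.1e14 square meters), not figures taken from the text:

```python
# Rough arithmetic behind the compound-growth claim. All inputs are
# assumed round numbers, not values from the text.
import math

population = 7e9                 # world population, circa 2011
surface_m2 = 5.1e14              # Earth's total surface in square meters
growth = 1.011                   # ~1.1 percent annual growth

target = 10 * surface_m2         # ten people per square meter of surface
years = math.log(target / population) / math.log(growth)
```

Roughly twelve centuries of such compounding would reach the absurd density of ten people per square meter, a blink on the timescales this chapter deals in.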

In the recent past, cities served as “black holes” for population (McNeill 2007, p. 326). Until the latter half of the nineteenth century, the death rates of cities exceeded their birth rates, such that only a constant flow of migrants from the countryside prevented them from collapsing. Death rates in cities have plummeted, and in many countries they are now lower than rural death rates. But as McNeill (2007) points out, the birth rates in cities have fallen almost as much as, or more than, the death rates. Urban birth rates have fallen because modern women are very much like their Paleolithic progenitors. Modern women earn their living outside the home environs just like Paleolithic women, but having a small child in tow is probably a relatively larger handicap to a modern woman’s career. Female work outside the home reduces birth rates, and they are further reduced by the trend in many countries toward more families headed by women without partners. Monogamous heterosexual families produce more children per woman than alternative groupings; familiarity breeds. The increase in market-based female earnings means that single women may successfully reproduce, but statistically they produce fewer children than their counterparts in stable heterosexual unions. Birth rates for modern women dwelling in large cities may be lower than those of our Paleolithic ancestors for the same reason: cost. Children in the modern world are increasingly burdensome (costly) for women working outside the home.

Another conclusion we draw from the history of humanity is that demography does shape our destinies. High death rates cause poverty, and demographically young populations are likely to be more unstable than demographically older populations. Political and economic commentators are prone to worry about the stability of an undemocratic China in the contemporary world; we disagree. China’s one-child policy has ensured that for the foreseeable future China will have a rapidly aging population and a high mean age. This is a recipe for stability. Populations with a low mean age are more likely to be politically unstable, whether they are democratic like India or undemocratic like Saudi Arabia. One ingredient for societal instability is young men constituting a relatively large percentage of the population. Young men make history more interesting, and add bite to the ancient Chinese curse: “May you live in interesting times.”

Copyright © 2011 Massachusetts Institute of Technology. (All rights reserved.)
