Ask coworkers or friends (or yourself) if they’re responsible in their work and life. Odds are the response will be, “Sure.” Ask if they’re ethical, moral people, and most will answer, modestly, “Well, I try.” Then ask if they sometimes feel a bit queasy about an assignment, a directive, a policy they are required to implement that they think, if not wrong, then wrongish. Most people will shrug, admitting, “Uh, sure. It happens.” In that case, you ask, have they hurt people they’re supposed to help? Those who are honest usually cringe and respond with a troubled “maybe,” if not a declarative “yes.” How do they feel about that? “Terrible.”
This book is about the queasy, inchoate feeling that arises when you’ve done everything right but know you’ve done something wrong. The result sits festering in the gut, waiting for a resolution that will not come. There is the sense of an ethic violated, a moral viewpoint denied by work we are directed to perform or policies we are supposed to promote. Social injunctions or professional strictures overshadow the personal in ways we do not like but cannot easily resist. The world is out of joint, its experience distinct from the way we thought the world to be. As a result, our days are troubled, and so, sometimes, are our nights. It’s not about balancing choices or just working through a problem to an acceptable solution. It’s about when every choice is bad and the only choice is between bad and worse. Philosophers call this “moral distress,” a condition that occurs when “one knows the right thing to do, but institutional [or other] constraints make it nearly impossible to pursue the right course of action.”1
It’s not a new idea. For Georg Wilhelm Friedrich Hegel (1770–1831), this sense of conflicted moral dissonance was the stuff of tragedy. An irresolvable conflict develops between “two substantive positions, each of which is justified, yet each of which is wrong to the extent that it fails either to recognize the validity of the other position or to grant it its moment of truth.”2 The result is intolerable for the individual whose ethical perspective is at loggerheads with institutional or social imperatives. No happy resolution is possible. There is only soul-destroying guilt for accepting directions a person believes improper or punishing consequences if orders are resisted. “It is the honor of these great characters to be culpable,” Hegel wrote. Swell.
He borrowed from Aristotle in this bleak setup, using the story of Antigone, and later Socrates’ trial and subsequent death, as models of tragic moral conflict. For their parts, Kierkegaard and Nietzsche later weighed in with similarly gloomy, classically weighted variants on the theme. All saw this moral distress as inevitable when individuals are asked to behave in ways they believe wrong but society (or, more precisely, the powerful in a society) deems appropriate. Mostly, the classic tragedian lament focused on the grand hero or heroine, not the common person. The conflicts they presented were exceptional rather than mundane.
The problem, if ancient, is also very modern, a constant background ache for many. For some, the issues are more severe, distressful, and injurious. In 1986 the rocket engineer Bob Ebeling pleaded with his supervisors to delay the launch of the Challenger space shuttle because he believed the rubber O-rings would fail. “He collected data that illustrated the risks and spent hours arguing to postpone sending it and its seven astronauts into space.”3 Nobody listened, and the shuttle exploded soon after it was launched. Despite his warnings, Ebeling felt responsible. Somehow he should have done … more. On the day of the launch, his wife dissuaded him from taking a gun to work to force coworkers to halt the launch. Ebeling quit soon after what in retrospect was a preventable disaster. He then turned to nature conservation work as a kind of penance, “to try and make things right.”4 A sense of guilt, of unrequited responsibility, stayed with him for the rest of his life.
A similar distress contributed to the suicide of Canadian Armed Forces Corporal Shaun Collins. After he returned from his second tour in Afghanistan, he was haunted by nightmares and flashbacks from the war. “Shaun seen and did things over there that were against everything we taught our kids to respect,” his father told a coroner’s inquest.5 The morality he was raised with did not condone the realities Corporal Collins experienced; his sense of patriotism and duty came into irresolvable conflict with his sense of appropriately honorable behavior.
This kind of professionally grounded “moral injury” is now “one of the core topics in clinical ethics.”6 Its focus is typically military or paramilitary personnel, police, and other urban “first responders.” David Wood nicely described the problem in a 2014 article, “The Grunts: Damned If They Kill, Damned If They Don’t.”7 Soldiers like Corporal Collins, doing what they are told to do, afterward find the memory of their actions impossible to endure. The realities they experience in war intrude on their abilities to live as ethical persons in the civilian world to which they return. Nightmares and night sweats are the least of the symptoms that persons with this kind of post-traumatic stress endure.
Ethicists and moral philosophers tend to focus not on veterans or first responders but on doctors and especially nurses.8 Alan Cribb,9 among others, argues that medical personnel’s “own sense of self as a moral agent” is continually challenged when personal imperatives to care conflict with institutional strictures.10 From this perspective, moral distress is all about “scripts” (rules of behavior) we are instructed to follow as professionals. Moral distress results when, as Jonathan Haidt and Jesse Graham put it, “We cannot excuse ourselves from the ethical judgments the scripts embody or the consequences of script following.”11 This literature’s focus is always “the self as moral agent,” the person’s ability to accept the disjunction between personal morals and professional directives. It is rarely if ever the morality of the scripts themselves.
The problem is not limited to any single profession, however. We all like to think of ourselves as “moral agents.” Nor can it simply be sourced to a malignant “managerialism” creating hostile environments for otherwise honorable working professionals.12 The Canadian Broadcasting Corporation (CBC) reporter Curt Petrovich was training for a half marathon on the day he got the call to travel to the Philippines to cover the devastation of Typhoon Haiyan in 2013.13 Amid the overpowering stench of decomposing bodies and crowds of persons who had lost everything, “We weren’t bearing anything other than the odd bottle of water that we could give to them. … That’s a difficult position to be in.” No food, no water, no shelter: what type of person does nothing but record another’s misery without offering to help? After returning to Canada, Petrovich was diagnosed with post-traumatic stress. “I learned that this feeling of helplessness that I had,” he says, “was a conflict running pretty deeply in my conscious and unconscious brain.”
The circumstances of our mundane ethical quandaries may be less extreme, but the result is the same in kind, if not necessarily in degree. How can it be otherwise? We are all mired in irremediable conflict. On the one hand, we are told that self-determination and individual choice are principal virtues, that autonomy is the keystone of modern social democracy. We are enjoined to think for ourselves, to develop and then follow our own moral compasses. In doing so, we are told that each of us is responsible for his or her actions. More to the point, we are responsible for their consequences. It is the soldier who pulls the trigger, the doctor who orders the treatment, the engineer who is responsible for the launch.
And yet we are also expected to put aside our personal predilections and ignore deeply held values when they conflict with an employer’s demands, an insurer’s mandate, a superior’s injunction, a profession’s dictates, or a nation’s imperatives. We are ordered to do what is required even if we believe it is not what is needed or what is right. Our professional associations proclaim high-minded codes of ethics; our nations trumpet a set of high-minded ideals. Too often, however, what is proclaimed in the abstract we are directed to ignore in practice.
If we follow our own moral compass, the charge we face is reckless selfishness injurious to the common weal. Who are we to challenge a client’s directions, a patient’s wishes, a supervisor’s insistence, or a government’s mandate? In our intransigence, we are told, we promote harms greater than the good we seek to ensure. As a result, we are disciplined and perhaps fired. Some simply burn out and, like Bob Ebeling, quit work once loved to seek some other way to be. Others, like Corporal Collins, find the conflict more than they can bear. It is Hegel’s tragedy refitted for popular consumption, a problem broadly chronic rather than exceptional and specific. The question becomes: Is conscience to be reserved for the quixotic, vainglorious rebel? On what grounds might it be empowered and, perhaps most importantly, unleashed: where might it lead and what can it teach us all?
Nancy Berlinger, a medical ethicist at the Hastings Center, calls ethical distress an “avoidance problem” that occurs when there is “ethical uncertainty about how normal work should proceed in a complex system, one in which workers must continually adapt to changing conditions under pressures that include the need to keep themselves or others safe from harm.”14 For their part, Bruce Jennings and Fredrick Wertz describe it as a problem of “agency,” of the individual’s ability to respond to conditions of “ethical tension.”15 Among these and other writers, the central issue is a person’s ability to conform to a set of procedures or rules that seem to him or her inappropriate, inhuman, or somehow unethical. Implicit in their work, and that of most others, is the assumption that these problems can be resolved handily and individuals can make, if not a good choice, then a best choice “under the circumstances.”
But we cannot avoid injurious moral distress when there are no good choices, no way to reconcile the conflicting demands of integrity and outside direction. For engineers like Bob Ebeling, soldiers like Corporal Collins, or reporters like Curt Petrovich, there was no “fast thinking,” as Daniel Kahneman calls it, that could make the problem go away.16 It is the expectation of agency and its attendant responsibilities that lies at the heart of the irresolvable conflict. If agency is denied, then we are powerless. And if we are powerless, what does our individual integrity mean? The conflict is intransigent and unavoidable. This is not, as some suggest, about justifying one’s actions to others.17 Rather, the distress is located, festering, in the person him- or herself as he or she struggles to evaluate and understand his or her place as a moral being in a world revealed as, if not immoral, then amoral.
Moral distress and injury linger long after the causal event, long past any potential physical danger to the self. It is the opposite of what Alan Cribb calls a pervasive “ethical laziness,” in which one simply gets the job done and then moves on, or the “ethical arrogance” that results from ignoring the rules on the assumption that you know best.18 If we could simply ignore the claims and needs of others, then there would be no problem. If we could ignore the rules imposed on us and act without consequence solely as we see fit, then there would be no dissonance, or at least little inner conflict. But in all these cases it is the rules themselves that are in conflict—the person’s and another’s—and the result is consequential and can be life altering. In these cases, satisfactory, simple solutions are simply not possible.
At heart we are social beings hardwired to care for and think of others. “Human intelligence rests on a foundation of social cognition reflected in a unique capacity for empathy; for appreciating the perspective of others.”19 Our sense of self is bound to others in ways that make their relations with us integral; their denial is a kind of dehumanization that affects us and them together. At the same time, we are cultural beings enmeshed in a range of roles—familial, educational, national, professional, and religious—each defined by rules and their underlying, not-always-complementary rationales. To ignore them is to ignore our relation to those roles and the place they hold in the construction of our world. Ethical quandaries are thus rooted not simply in individual convictions but in the way we are to be in the world with others. Sometimes our varying roles—citizen, family member, neighbor, professional—conflict. The question then becomes: what do we do when the world we believe in and the one we live in seem out of joint?
The idea of “moral injury” entered modern literatures—medical and social—not as a problem of the person but instead as a descriptor of social harms by classes of individuals. In 1909, the earliest reference I have found,20 Dr. Willard Bullard urged fellow physicians to forsake the care of “undesirables” assumed to be congenitally disposed to low moral character.21 Attention to their continued well-being, he argued, perpetuated a burdensome, morally profligate population whose continuance would undermine both the social order and the economic fabric of the nation. The solution he recommended was a eugenic cleansing of society’s “idiots” and “morons” (persons with Down syndrome, hydrocephaly, spina bifida, etc.). Parents might object, and physicians might insist on their Hippocratic obligation to care for all, but acquiescing to either parental love or a sense of professional duty, Bullard argued, resulted in a moral if not mortal injury to the greater social good. In 1927 Supreme Court Justice Oliver Wendell Holmes invoked a similar, economically based morality in his famous opinion Buck v. Bell, authorizing the involuntary state sterilization of “idiot” women who presumably would in turn birth “idiot” children.22
A similar kind of social injury is argued today by those who insist allegiance to ideals like equality and solidarity constitute an injurious moral failure hampering the neoliberal mechanisms of supply-side economics.23 They advance instead an ethical standard grounded in market-smart efficiencies powered by the unfettered activity of individual entrepreneurs.24 Personal moralities grounded in antiquated values of mutuality are, from this perspective, no more than sentimental baggage damaging a nation’s bottom line and its future potential. A general moral injury thus becomes the inevitable result of an allegiance to the Enlightenment ideals that are the rationale for programs of communal care and support. Two very different ethical paradigms based on different moral perspectives—one economic and the other social democratic—seem at perpetual loggerheads.25
In the first half of the nineteenth century there was a brief period in which it seemed possible to make the health and general welfare of average citizens a defining criterion of prideful nationhood.26 The idea foundered by midcentury, however, as economic indices became the principal measure of national well-being and future health.27 In recent years, some have tried to reinstitute a kind of social progress index, or one of general well-being, as at the least complementary to economic measures like the gross domestic product. Those noneconomic indices remain eccentric, however, and are given relatively little official weight.28 It is therefore fair to say that, beginning at the latest in the mid-nineteenth century, practical ethics and practical morality became economic, their central metric the financial well-being of the nation defined by a utilitarian vision of the abstract greater good. We might proclaim allegiance to Enlightenment ideals, but in practice we apply an economic standard to our relations, one to another. It is along this fault line that many modern moral fractures exist.
To take one example: in bioethics, a field born in the second half of the last century, the physician’s imperative to care was devalued as financial growth and well-being came to dominate policy in many countries.29 Who were doctors to disagree with the demands of the state?30 Insurers paid the bill (in the United States) and thus should be permitted to set the health agenda. Public moralities of care foundered as a result. So, too, did the physician’s earlier right to organize the best possible care for a patient. At another scale, individual autonomy and choice were promoted as ethical goods. But that was done (as chap. 8 argues) largely without regard to the person’s ability to pay for care or necessary treatments. Treatment alternatives were the economic preserve of insurers or, in some countries, the state. Today similar clashes between ethical paradigms—personal, professional, political, and social—are rife across many divisions of society.31
“Moral distress manifests itself as the principal component of psychological responses to social forces.”32 It is there, rather than the moral declaration of the individual isolate, that the problem is grounded. It is not about us or “them” but about how we stand as people in a society that demands specific attitudes, and thus resulting behaviors, of its citizens. For Hegel, the problem was what to the hero or heroine seemed a “profound identification with a just and substantial position” at irresolvable odds with another, equally substantial and broadly accepted.33 Tragic failure is the only and inevitable outcome for the person whose moral perspective challenges what, at any given moment, is a generally accepted ethical worldview.
How can this broad class of chronic conflicts best be understood? To address that question requires, first, a distinction between ethics and morals, nouns and sometimes adjectives that are typically conflated in contemporary literatures.34 Most assume each to be a synonym for the other. Here, however, morals are defined as a declarative set of values, definitions of right and wrong that brook no discussion. Every culture has a small set on which its laws, professional injunctions, religious declarations, and social codes of behavior are based. “At the heart of every moral theory there lie what we might call explanatory moral judgments,” Judith Thomson wrote, “which explicitly say that such and such is good or bad, right or wrong … because it has feature F.”35
Ethics are the means by which we attempt to apply those values, the Fs, in our lives. “The ethical is presented as that realm in which principles [moral definitions] have authority over us,” not independent of our attitudes but as their basis.36 The result takes the form of a “practical syllogism” whose construction links a moral idea, the syllogism’s major premise, with the specifics of the minor premise to say that a particular action, or kind of action, is required ethically. “If this is good or ‘right’ [by definition], then I/we must do … that.”
The idea of a set of moral definitions, value statements that result in applicable ethical imperatives, is at least as ancient as Aristotle’s writings. He made justice the F value and defined it operatively as requiring that equals be treated as equals. But are we all really equal (slaves and slave owners, men and women, Christians and Moors)? And how is equality to be defined? The apparently simple Aristotelian idea seems as if it could require an endless process of definition and redefinition until the syllogism is so circumscribed it has no operative value.
Here R. G. Collingwood37 and J. L. Austin38 come to the rescue. In the 1940s, Collingwood argued a kind of linkage in which a proposition (the major premise of the syllogism) involves one or more suppositions. These assumed truths power the ethical proposition “If this is good (justice as equality), then I will do it.” But because the supposition is itself open to question (“What is justice?” “What is equal treatment?”), each leads to a set of prior presuppositions attempting to more clearly define the previously stated F.
The result is not endlessly recursive, however, like an M. C. Escher print. There is, Collingwood argued, a small constellation of presuppositions that do not answer to any question and are not open to debate. They are bedrock values whose definitions are accepted reflexively and without question as clear. In our discussion, these bedrock Fs ground ethical propositions that are based on or follow from them.
The point, as Austin makes clear, is that speech is laden with associations and meanings that call forth action (or reaction). Roland Barthes presented another, related way of saying much the same thing. His semiology described a system of “signs” used to present events or things that serve as “signifiers” carrying the values of an ultimately “signified” idea or ideal (patriotism, e.g.).39 For Barthes, anything—the Eiffel Tower, a map, a photograph, or a wrestling match40—can be “unpacked” (some would say “deconstructed”) to reveal meanings, values, and traditions underlying this or that prosaic event or thing.41 Where philosophers begin with the big idea and its construction, Barthes’s semiology and Collingwood’s linguistics rolled their analyses up from the mundane.
All of this sounds impossibly academic and, well … philosophical. The point is first that everything is grounded in an idea about the world, at once realized and often simultaneously hidden. Second, those ideas are shared, communicated in one medium or another. Third, ideas about what is good or bad, right or wrong, stand not alone but within a chain of ideas ultimately grounded in a set of bedrock presuppositions I call here “moral definitions.” In assessing the nature of distress, we can therefore track back from the initial proposition to the root idea that anchors our thinking in this or that situation. From there we can see mechanisms through which practical syllogisms are enacted, implicitly or explicitly.
Someone may hold to a personal, eccentric moral code, and thus a set of unique ethical propositions. But to be generally credited as grounding appropriate behavior, an individual’s “Well, I just believe” serves neither ethics nor law nor the community at large. It is those moral presuppositions we share, and the suppositions they give rise to, that carry communal ethical weight. Identifying a set of broadly subscribed-to moral presuppositions—the Fs—and their resulting suppositions is relatively easy. They are clearly stated in foundational documents (the US Declaration of Independence and various national constitutions), laws, and international conventions (e.g., the UN’s Universal Declaration of Human Rights). They are referred to explicitly or implicitly in the ethical codes of most religions and most professions (e.g., the American Association of Advertisers).42 All speak, to a greater or lesser degree, to ideals of community, care, honesty, truthfulness, and well-being for the person and society as a whole. They are about us, together, not simply each of us alone.
But here’s the rub: the process of syllogistic construction and resulting action is dynamic, not static. The underlying suppositions and presuppositions that give us meaning are clarified, changed, or undermined through their application. Everyone believes in “justice,” but one person’s just cause is another’s unjust action. It is in the actions we take that our ideas of justice are practically confirmed. By enacting an ethical proposition (we should do this because…), we affirm and simultaneously bring forth the moral definitions that seem, at the thin level of declarative principle, so clear. Refusing to act on that proposition, conversely, brings its moral grounding into question, if not potentially into disrepute.
This book is not about mapping but about how we talk (and think) about doing what is right or recognizing what is wrong. The map is just “a kind of talk,” another language in which we construct active arguments (syllogisms) based on presuppositions and suppositions.43 From this perspective, map-talk, you might call it, is a way to see all this philosophy in action. And from that perspective, maps are a rich field of ethical and moral investigation.
An advantage of the medium in this exercise is that maps are everywhere, a constant in the literatures of economics, demography, journalism, sciences (of all kinds), statistics, transportation, and so on. They are a daily presence in academic journals, popular magazines, and daily news reports (broadcast and print). All are first and foremost social documents arguing a particular view of the world. “A map is never just a map,” as Timothy Barney puts it, “but a confluence of social forces that constrain a culture’s sense of its relationship to, and in, the world.”44 In their construction, maps call forth what Daniel Callahan termed, in another context, a “vital background constellation of values,”45 enacted through a set of general ethical injunctions grounded in one or another moral perspective.
At its most basic level, a map is a collection of what Barthes called organized signs signifying a worldly event (disease, draft, poverty, war) and, in its presentation, our values. Every map brings into existence an idea about something through a set of symbols (points, lines, rectangles, etc.) organized in a generally accepted, easily understood (and thus visually legible) manner. The logical form of the map, like that of ethics (If F is good, then we do that), is propositional: If this is important, then here is what it looks like; here is where it is found.46 In other words, maps locate the subjects we believe to be important in a landscape we find familiar. What is signified, map to map, is this or that ideal and, implicitly or explicitly, a call to act on it. After all, if the mapped thing had no meaning, no ethical imperative, why make the map at all?
That a map is not an objective statement but an idea made to seem factual will be disconcerting to some. Until recently, the subjective nature of the map was simply ignored, where not positively derided, by most professionals. A discoverable, value-free reality was assumed to be the cartographic goal.47 The idea was,48 and for some continues to be, that “good” maps are “mostly” objective and thus truthful, although perhaps inevitably advancing an at-times “persuasive,” authorial point of view.49
The distinction between the “objective” and the “persuasive” is an old one. In his Enquiry concerning Human Understanding (1748), David Hume distinguished between objective facts collected for critical consideration and those “influenced by taste and sentiment.” He lauded the factual while condemning the persuasive to the degree it masqueraded as objective, choosing data to make an argument seem reasonable.50 This echoed Plato’s account of Socrates’ debate with the rhetorician Gorgias, who boasted he could argue every side of any issue, convincing people of its rightness, no matter how limited or false the position might be.51 Socrates saw that claim as mendacious, and thus dishonorable, because, for him, truths were things contributing to the social good (his presupposition). The alternative is the rhetorician’s false beliefs grounded in the self-interest of an employer; his truths thus served a base, and extremely limited, end.
Maps are active, practical propositions about the world as we see it based on a sometimes complex set of suppositions. In every map, “thick” experiential statements and “thin” principled ideas are entwined.52 The problem is that we want maps to be true. We want them to be objective because we want the world to be certain, its truths absolute and unbounded. Alas, knowledge isn’t like that. Truths turn out to be malleable, and knowledge, the accumulation of data structured in a certain way, is no more than a set of sometimes shaky truths buttressed by ideas and ideals.
This is neither a book about cartographic ethics nor a learned treatise on the limits of knowable truths. It is instead an attempt to investigate problems arising at the intersection between sets of conflicting expectations and standards governing personal practice, professional ideals, and social policy. But because maps are a principal medium in this investigation, it makes sense to take a few pages to consider the map and the means by which its ethical propositions and moral suppositions are revealed.
At the simplest level, mapping (like ethics) is about things together. It takes a set of individual cases, rows of data, and makes them into members of an event class located in relation to others in a more or less recognizable geographic field (political boundaries, streets, etc.) that we think of as approximating the real. The world thus brought forth is set at a scale, from very local to global, in which the map’s argument is situated. In the mapping, the mapmaker proposes relations between event classes located in place, disease incidence and income by county, for example. If this is there and that is there, then this and that are related in a space we know. Beneath that conjunction lies an often hidden value proposition: If x (say health) is important, then we must look at it in a certain way. If y (say poverty) is bad, then we must look at it in a certain way. If poverty leads to ill health, and if the map shows areas of ill health are areas of poverty, then we argue that the ill health related to poverty is … bad. Good and bad are moral conclusions implicit in the mapping of the potentially causal relationship between health, disease, and poverty.
Nothing surprising here. Data are always chosen on the basis of assumptions and definitions that determine their selection and the mode of their analysis. Whatever the medium, “people who formulate those facts have to use assumptions: patterns of expectation, within which they select, arrange, shape, and classify their data.”53 The map’s potential evidentiary power lies in this: it wraps a lasso around its facts, organizing them in groups of common characteristics, similarly symbolized, and then posts them in a space filled with other groups of things (landforms, jurisdictional boundaries, streets, etc.) in a manner insisting on their shared communal reality. It sets its constituents in a relational space whose elements synergistically interact to create a complex reality of things, together. In this way, maps concretize the abstract. The result invites a map’s users to see an ideal and its argument as a geography, a thing made concretely real with real-world consequences set at a specific resolution and attendant scale.
All of this is easier to see than to talk about. Figure 1.2 presents a vector version of a map by Dr. John Snow, the nineteenth-century British physician who famously argued that cholera was solely waterborne rather than primarily airborne, as many of his contemporaries believed.54 The basic proposition was “If cholera is waterborne, then a water source must be at the center of any cholera outbreak.” The first event class, whose members are symbolized here with small triangles, posted the location of the homes of almost six hundred persons who died from cholera in the first weeks of an outbreak in Snow’s London neighborhood. The mapped circles symbolize local wells that served as the principal water sources for residents of the affected area. Both exist together in what Snow called a “topology” of disease, a geography of streets and landmarks (a pub here, a poorhouse there) that constituted the context of the outbreak. Snow believed that the obvious centrality of a single pump nestled among a dense circle of deaths proved his proposition.
The map is an ecology in which the individual datum is sited in a class of similar events interacting with—defined by and defining—a complex environment (biogeographic, human built, political). In that dynamism is a propositional relationship between event classes: If we see x (cholera deaths), then it must be in proximity to y (water sources). If we see y (water sources), then we may expect to see x (cholera). If this is present, then that will occur. The meaning in the map grows with the complexity of the event classes it presents, the completeness of the geography to which they are indivisibly joined, and the propositions that argue their presumably causal relationships, one to the other.
In his map (fig. 1.3), Snow used data on reported deaths from cholera collected by and made publicly available by the Registrar General’s Office. Barthes would say that the diamonds (deaths), circles (water sources), and lines (streets) were signs that together signified the cholera outbreak in Snow’s Broad Street neighborhood. The real signified, however, was Snow’s idea. In the mapping, Snow left out the sewers whose foul-smelling airs (“miasmas”) many believed were the cause of the outbreak. And, too, in this map he neglected the old plague burial site that others believed was the likely origin of those airs (in another map, he included it, positioning it incorrectly). The map presented Snow’s idea of cholera as a waterborne disease and not the complex ecology (including sewers and old cemeteries) others believed cholera presented.55 Others who had different ideas about cholera mapped the neighborhood and its cholera differently.56
Implicit in Snow’s maps and explicit in his writings was an ethical syllogism: If we wish to save persons from cholera, and if its source is unclean water, then we are obliged to ensure clean water for our citizens. Famously, Snow argued this to the local parish committee when he recommended disabling the central Broad Street pump, the suspected source of the outbreak. To define cholera as waterborne thus implied a practical demand that clean water be provided to protect the lives of London citizens. Because the constituents of health in this case were thought to be broadly public rather than individually determined, Snow’s map argued the ethical insistence that officials, civil and religious, do everything in their power to tame the disease by attending to its environmental cause. Implicit in this argument was a moral presupposition defining human lives as something valuable to be protected. This moral-ethical definition-injunction of public health as a civil (and, in Snow’s day, parish) responsibility had been asserted ever since the Romans built aqueducts to ensure clean water for their citizens.57 So Snow’s map argued the ethical necessity that public officials provide clean water for citizens.
Not everyone agreed with Snow, of course. In 1831 a group of unnamed authors published in the Lancet a review of cholera’s progress from its origins in India through the Middle East to Russia and then western Europe.58 They argued against quarantines and other restrictive measures as not only likely to be ineffective but, more importantly, injurious to national trade. Better a bit of cholera, the authors concluded, than a set of policies that might harm the nation’s economy. From their perspective, the death of some from cholera was less important than the maintenance of industry, trade, and the profits that resulted from them.
We know a lot about John Snow, his ethical posture, and its origins from his writings.59 At a professional level, Snow was enjoined by a set of moral declarations and practical injunctions that instruct physicians not only to treat disease but, in the Hippocratic tradition, to prevent it where possible.60 This created a clear imperative to public action by physicians.61 In that tradition, preventable deaths are ethical “wrongs” violating the moral raison d’être of medicine, the preservation of life. Moreover, since governments had, in theory, developed for the care and advancement of citizens, there was a similar injunction toward health protection for nonmedical public officials.
Snow’s focus on cholera in the 1850s was also deeply personal, however. As an apprentice apothecary, he had served mining communities near Newcastle that in 1831 were decimated by cholera in its first British outbreak.62 His best efforts did little to save the miners. The result was a traumatic experience he described repeatedly in his writings on cholera from 1849 until his death in 1858. The deaths he could not prevent weighed on him in the same way that Bob Ebeling was tormented by the shuttle deaths and Corporal Collins by his wartime patriotic service.
Finally, Snow wished as a scientist to definitively describe the nature of cholera and the manner of its diffusion. The goal was not pure knowledge but applicable knowledge that could be used to save lives. And here he learned the essential fact: Science is not just about being right. It is about convincing others of the truth of an argument based on data that all accept and a methodology others find convincing. Snow didn’t do that, at least not to the satisfaction of his contemporaries. His failures—what was not mapped—were as important to them as the strengths of his argument were to him. Snow’s major premise (cholera is waterborne) was disputed, and so his proposition and its ethical conclusion (we must clean up water sources) lost much of their immediacy and power.
What was not contested was the ethics of his search, the moral goal of preventing cholera deaths. Everybody wanted to control its spread. Physicians worried about death and sickness; bureaucrats about the health of productive, tax-paying citizens. Merchants worried about their customers and their own health. So behind the map and its debate were both a broadly shared moral stance and a set of highly practical concerns. A similar argument can be found in figure 1.5, where the “signs” are children and adults with water pails, the pump from which they draw the water, and the image of death as its operator. Together these signs present the idea that contaminated water causes deaths (in this case, cholera). What is signified—the message—is the necessity of clean water to preserve the children by removing Death’s hand from the pump.
I am neither an ardent semiologist nor an analytic philosopher. Throughout the rest of the book’s chapters, I will only occasionally use the language of those professions: enough is enough. It is sufficient here to describe the means by which maps (and charts, graphs, tables, etc.) carry ethical propositions based on moral definitions of what is or should be seen as a good. The example of John Snow makes clear how one can be beholden to a set of moral definitions and ethical propositions that are consequential. He was, after all, at once a practicing physician, a superior researcher striving for recognition, and not coincidentally a citizen of the neighborhood and city whose cholera outbreak he mapped. In all that follows, it is important to remember that complexity lies not simply in the culture at large, or the structure of a particular discipline or profession, but also within the lived life. Like Snow, we all bring to our daily world an experiential history that is as deeply personal as it is professionally enabled and socially embedded. Moral stress lies not only in a simple conflict but more often in a conflicting set of ideals and principles we attempt to hold simultaneously.