
7. Contested Boundaries

Published on Apr 27, 2020

In the face-to-face world, one of the primary jobs of any architect or planner is to create boundaries. Physical barriers, such as buildings and walls, along with social and legal norms, determine who can gain admittance at each door. The exterior walls of a home separate a family’s private space from the external public world; its interior rooms bound different functions. Doors allow passage between one space and another, while locks on doors ensure that unwelcome visitors do not enter, yet permit the key-holders to come and go at will. Invited guests can enter at specific times; once in the house, they abide by other rules about which rooms they are welcome to visit and which are private, out of their view. The guest who is rifling through the upstairs bathroom’s medicine cabinet or a bedside table is out of bounds. The boundaries of the home also keep people in: small children are forbidden to venture forth on their own; freedom to come and go is part of the passage toward adulthood. The boundaries vary with time: a cat-sitter may let herself in at certain times, but would be an illegal invader if she entered at others.

Our sense of comfort and belonging within our society depends on whether we feel our home—our basic personal space—is safe from intrusion. If only the inhabitants and invited guests enter, we have sovereignty over our space. If others are likely to intrude, whether burglars or police armed with complaints or warrants, we live on edge. Barred windows and heavily dead-bolted doors signal society’s failure to maintain order. And to live in fear that authorities will enter one’s home, whether because one is a dissident in a repressive society or a criminal in a lawful one, is to be alienated from that society.

Hi, I really don’t mean to inconvenience you right now but i made a quick trip to London,UK this past weekend and had my bag stolen from me in which contains my passport and credit cards. It happened real fast leaving me document less & penniless right now. I’ve been to the US embassy and they’re willing to help me fly without passport but I just have to pay for the ticket and settle the Hotel bills i resided at. Right now I’m out of cash plus i can’t access my bank without my credit card here, I’ve made contact with them but they need more verification which could take more time for me wait. I was thinking of asking you to lend me some funds now and I’ll give back as soon as I get home. I need to get on the next available flight.

Please reply as soon as you can if you are alright with this so i can forward the details as to where to send the funds. You can reach me via email or May field hotel’s desk phone, the numbers are, 011447024043675 or 011447024051751 or instant msg me:

I await your response…

Infrastructural boundaries such as walls and doors neatly divide inside from outside, while complex and often unspoken norms create the social boundaries. Once at home, we are exempt from some (but not all) of the laws and mores of the public sphere. One may not walk naked down the street, but can do so at home—yet not when casual acquaintances are visiting, especially children. One may look to the government to provide order in public, but see it as an intrusion when it reaches into the home. Our daily interactions involve negotiating many invisible social boundaries. Some are constructed through politeness. In a restaurant, we act as if there were walls around the other tables, neither making eye contact with people seated elsewhere nor interjecting comments into their conversation. Here there is no physical wall, but a boundary nonetheless. These invisible boundaries govern our public spaces, regulating how we move, speak, and look. Everyone is allowed to be out in public, with the caveat that you know how to behave; those who transgress may be placed in jail, putting a physical boundary between them and the public world. Boundaries protect us, but they also impede us.

Where are the boundaries online, and what are they made of? Although the online world is not physical, it also has both infrastructure boundaries and socially constructed boundaries.1 The infrastructure boundaries, its gates and doors, are firewalls, encryption, and private sites. Passwords are the keys to the doors of cyberspace. Like doors, passwords vary from flimsy to fanatically secure. Also like doors, if what they are protecting is interesting enough, or the thieves are skillful enough, there will be break-ins. There is a continuous stream of news stories about corporate break-ins, malicious hacks, and secrets stolen from military networks. “Cyber war” threatens to be the next battle frontier. On the more mundane side, many of us have gotten emails purporting to be from friends, but which turn out to be spam; the friend’s account was broken into and the identity-thief sent email to lure unsuspecting acquaintances (see sidebar).
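The "passwords are keys" analogy can be made concrete. A careful site never stores the key itself: it stores only a salted, slowed-down hash of it, so even someone who steals the stored record does not thereby hold the key. A minimal sketch in Python's standard library (function names are hypothetical, and a real system would use a vetted authentication framework):

```python
import hashlib
import hmac
import os

def make_lock(password: str) -> tuple[bytes, bytes]:
    """Create a 'lock': a random salt plus a slow salted hash of the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def try_key(password: str, salt: bytes, digest: bytes) -> bool:
    """Test a candidate 'key' against the lock; the password itself is never stored."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = make_lock("correct horse battery staple")
assert try_key("correct horse battery staple", salt, digest)
assert not try_key("guess", salt, digest)
```

The many hash iterations are the digital analogue of a sturdier door: they do not make break-ins impossible, only slower and more expensive, which is all any lock ever does.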

In the physical realm, some boundaries have a person who checks each potential entrant. Guarding boundaries is the primary responsibility of many jobs: border patrollers, airport security agents, ticket takers, club bouncers, and receptionists in offices, schools, and libraries. Almost any semipublic space has people whose job is to maintain boundaries. Police do this at parades, in public squares, and for motorcades. One way of looking at communities is the extent to which their members feel entitled to and/or responsible for maintaining boundaries. Closed neighborhoods—from a trailer park to a gated suburb—can make outsiders wandering through feel very uncomfortable. The appeal of diverse urban neighborhoods, but occasionally their problem, is that no one feels entitled to turn others away.

Online, there are moderators. They are the doormen of discussions, vetting each comment to ensure that it is appropriate. Like the doormen of uptown apartment buildings, they are expensive. If a group that requires a moderator cannot find one, it may close. If the moderator is busy, the discussion flow might be halted as everyone waits days for their postings to pass through.

In general, the Net is a very open place. What makes it exciting—and often teetering at the edge of chaos—is that it connects a truly extraordinary number of people. An open discussion site can have participants from all over the globe, of different political, ethical, religious, and cultural beliefs, and with varied educational experiences. Some are seeking to learn, others to be entertained, still others to advance a cause. Some enjoy a constructive debate, some just want light humor, and some find fun in pulling pranks and upending discussions. There are also vast numbers of spammers, disruption professionals who will fill any unprotected space—email account, comment forum, or the like—with ads for cheap drugs and sexual aids, or worse, with stealth programs designed to infect one’s computer and turn it into an unwitting spam server itself.

Any discussion needs multiple levels of boundaries. First, there are the boundaries that protect against spam. But once these clearly unwanted bits are kept out (and that itself is a constantly evolving battle), there are still the boundaries of behavior: the ability to keep out individuals and messages that are outside the boundaries of the group’s norms. A group’s culture evolves as it defines these norms and develops strategies to maintain them. In an online community, the key issues are determining whether someone should have access to a space and preventing problem users from repeatedly appearing.

The history of online conversation is a history of changing boundaries. It is useful to know, because it turns out that finding the right balance between public and private, being open yet not anarchic, is very important. The history of Usenet in particular provides a dramatic example of how societal and technological changes affected the boundaries in one very large community. Online, boundaries are not a matter of putting up fencing, but of defining how we recognize who can participate, what material is acceptable, and how openly available it should be. Today, there are various approaches to this issue: some sites focus on content, employing teams of moderators to vet contributions; others endeavor to establish a reliable community by rewarding highly regarded participants. Some are self-governed; others are managed by the site operators. There are sites whose goal is to create a definitive knowledge repository, to motivate renegade activists, or to be a supportive healing environment. We will look at what constitutes boundaries online, and how the choice of different types of bounds shapes the community. Ultimately, the key to a thriving community is maintaining boundaries that let new people and ideas in while preserving a sustainable scale and focus.

Boundaries Online: The Rise and Fall of Usenet

Computers were invented to be calculating machines for performing mathematical feats. Yet, when people started sharing computers in the 1960s, they also started using them as a social medium, leaving notes and messages to each other on the shared file system. Time-shared computing resources became the social focus for groups of researchers who otherwise might not have met (Licklider and Taylor 1968). The fundamental boundary to online interaction is simply access to resources. Conflicts over who should have access to scarce computer time and file space, and what they should be permitted to do with it, shaped the early days of online mass communication.

The Defense Department’s ARPANET, the precursor to today’s Internet, was created in 1969 to make more widely available the scarce computing resources found at major research hubs such as MIT and Stanford. Although ARPANET was intended for resource sharing, communication quickly became its primary use. By the early 1970s, programmers had adapted FTP, the system for sending files from one site to another, into an email-like messaging system. A 1973 study of network resource usage found that three-quarters of all ARPANET traffic consisted of email, and this was at a time when only the most basic mail exchange software had been developed. There was no message handling, so the stream of mail had to be read linearly; there was no reply function and no addressing tools, so each user needed to figure out the complex path from his computer to the recipient’s (Hafner and Lyon 2000). Over the next few years, programmers added new features, such as the ability to read messages in any order, save them, and automatically address a reply message. By 1975, they had developed the key elements of the email interface that we still use today (Leiner et al. 2009).

Although using ARPANET for communication was controversial, the programmers who created email had the support of J. C. R. Licklider, an influential director at ARPA. In 1968, as ARPANET was still under construction, Licklider had set forth a vision for a universally networked communication system that covered the technical challenges, expressed tremendous optimism about the benefits of connectivity, and raised questions about access that resonate today. He wrote: “In a few years, men will be able to communicate more effectively through a machine than face to face” (Licklider and Taylor 1968, 21). Even more radically, he predicted that discussions would have images, information, and interactive models as their medium. He foresaw the development of geographically distributed online communities that would be united by interest rather than location.

Licklider was one of the original cyber-utopians: “Life will be happier for the on-line individual because the people with whom one interacts most strongly will be selected more by commonality of interests and goals than by accidents of proximity” (Licklider and Taylor 1968, 40). But he also cautioned that universal access was essential: “For the society, the impact will be good or bad, depending mainly on the question: Will ‘to be on line’ be a privilege or a right? If only a favored segment of the population gets a chance to enjoy the advantage of ‘intelligence amplification,’ the network may exaggerate the discontinuity in the spectrum of intellectual opportunity” (ibid.).2

Soon after email was developed, ARPANET was hosting a variety of group email lists—the earliest online discussion groups. But access to ARPANET was limited: only research laboratories with Defense Department funding were connected. The content, too, was restricted. Although Licklider envisioned universally accessible, wide-ranging interest groups, he alone did not shape the ARPANET, which also had many other more conservative administrators whose focus was on ARPA’s military and defense missions. They saw personal communication as frivolous, a distraction that interfered with, and potentially compromised the security of, the network’s central mission.

Arguably, the first widespread incarnation of an open community like the one Licklider envisioned was Usenet, a system of distributed news postings developed in 1979 by graduate students at Duke University. Usenet was created to be an open service to which any site with a Unix machine could connect, with users at that site able to read—and most importantly, respond to—news postings that others on the network had uploaded (Hauben and Hauben 1997). Comparing Usenet and ARPANET discussions illustrates how the goals for a system shape its technology, which in turn shapes the resulting social boundaries.

ARPANET’s mailing lists were centrally controlled and most were moderated, meaning that posts were not published until they were vetted for appropriate content, and users were warned about “non-official” use. ARPANET’s boundaries were guarded both to prevent outsiders from participating in their discussions and to ensure that information from their discussions did not seep out; in addition to the obvious concerns about securing sensitive technological and strategic data and avoiding copyright problems, they were also concerned about the reputation of the agency if news of frivolous conversations on this Defense Department–supported service became public. An ARPANET user at MIT recalled:

There is always the threat of official or public accusations of misuse of the network for certain mailing lists. This actually happened with a list called WINE-LOVERS and Datamation. … The fiasco nearly resulted in MIT being removed from the network, and cost us several months of research time while we fought legal battles to show why our machines should not be removed from the ARPAnet.3

Usenet, on the other hand, was designed by graduate students unhappy with being left out of the ARPANET community. The system they created was decentralized. Each machine kept a copy of the news database, and thus, while an individual system administrator could choose not to handle a particular newsgroup, no one could control the overall distribution. In practice, it was generally up to the reader to decide what to peruse. There was no central authority, and in the first years, no moderated groups. Posting was self-policed—if you had an account, you could post to any group. The discussions were far more free-wheeling than on ARPANET, where even discussions of science-fiction films pressed against the boundary of acceptable content.4 Within a year of its creation, Usenet included net.gdead (for Grateful Dead fans), net.rec.birds (for birdwatchers), net.vwrabbit (for all the owners of that car), and net.suicide (for those contemplating ending it all).

In the early 1980s, the two networks began to merge. At first, communication was one way: Usenet users could read the ARPANET messages, but could not respond to them. Even once replies were permitted, the ARPANET users could be unwelcoming to the Usenet-based outsiders. One early Usenet user recalls:

Adding the ARPANET lists to Usenet initially contributed to the sense of being poor cousins. It was initially very hard to contribute to those lists, and when you did you were more likely to get a response to your return address than to the content of your letter. It definitely felt second class to be in read-only mode on human-nets and sf-lovers.5

This clash, of an existing community feeling that it was being intruded upon by less knowledgeable and less well-disciplined outsiders, would subsequently be repeated several times in Usenet’s evolution, notably in 1993 when America Online, a popular Internet service provider marketed toward computer neophytes, gave Usenet access to its subscribers, and in 1997 when DejaNews provided an easy-to-use, Web-based interface to Usenet. Indeed, it is one of the fundamental design problems in the online world: when do we want a closed community, with carefully vetted membership, versus a more open environment, welcoming to newcomers and engaging a larger population?

In this case, the feeling among ARPANET users was mixed. One said:

I am beginning to wonder about USENET. I thought it was supposed to represent electronic mail and bulletins among a group of professionals with a common interest, thus representing fast communications about important technical topics. Instead it appears to be mutating into electronic graffiti. If the system did not cost anything, that would be fine, but for us here at Tektronix, at least, it is costing us better than $200 a month for 300-baud long distance to copy lists of people’s favorite movies, and recipes for goulash, and arguments about metaphysics and so on. Is this really appropriate to this type of system?6

Indeed, since ARPANET was a closed and tightly controlled system, what motivated its administrators to provide access to Usenet at all?

Some ARPANET users believed that opening up their network would provide access to valuable information and relationships: Usenet at the time was still an elite community, limited to researchers and graduate students in established universities and laboratories; it was far from the massive public system it would become later. Others felt that an open network was itself a worthwhile experiment. In practice, it simply came down to a few individual system administrators, first at Berkeley and then at other ARPA sites, who chose to create the ARPA/Usenet gateways. Although they were not representative of most ARPANET members, these gatekeepers’ belief in openness was enough to establish the flow of information, tenuous as it may have been at times.

The early Usenet group Human Nets focused on the unprecedented experience of being part of a vast communicative network. This post, written by the group’s moderator in response to a complaint that the discussion had gone off topic, both articulates the novelty of this enterprise and demonstrates the tension between maintaining the strict boundaries of a tightly focused group and the loose ones of a group that shifts in response to participants’ changing interests.

Even if we have shifted away from discussing human networks, we are getting a first hand EXPERIENCE of what they are through this mailing list. No amount of `a priori’ theorizing of their nature has as much explanatory power as personal experience. By observing what happens when connectivity is provided to a large mass of people in which they can FREELY voice their ideas, doubts, and opinions, a lot of insight is obtained into very important issues of mass intercommunication.

The fact that ... dissimilar ... topics have been discussed in our own instance of a human network says a lot about its nature and the interests and nature of its members and should not be considered as detracting from the quality of the discussion. A human network is a springboard for human interaction and thus for human action. Let’s view it as such and keep repression and censorship at a minimum. (Jorge Phillips to Human Nets Mailing List (ARPANET/Usenet), June 3, 1981: “administrivia”; quoted in Hauben and Hauben 1997, chapter 10)

A fundamental difference between ARPANET and Usenet was their approaches to internal policing—keeping the discussion on topic and preventing abuse. Within ARPANET, the conversational boundaries were patrolled by a rigid infrastructure of moderators, who determined whether each posting was suitable for publication. The open groups of Usenet had no such authority; a loose and evolving set of social norms and sanctions maintained conversational bounds. As the Usenet community expanded, the issue of how to maintain civility and order became a growing problem. As early as 1981, Usenet system administrators were grappling with the problem of protecting open discussions from disruption. In a discussion about the problem of “flames,” Mark Horton, the programmer who first connected Usenet and ARPANET at Berkeley, wrote that “peer pressure via direct electronic mail will, hopefully, prevent any further distasteful or offensive articles. Repeated violations can be grounds for removing a user or site from the network.”7

Usenet’s open design was based on the assumption that people would, on the whole, behave responsibly. One could post freely to existing newsgroups and, with little central coordination, create new ones.8 Inherent to this assumption was the fact that Usenet’s initial users were clearly identified (Donath 1998), not only with their name, but also with their work or university affiliation; these affiliations provided the computer accounts and network access. People also volunteered additional identifying information. Although the posting software automatically included one’s email address at the head of the message, many people also closed their postings with elaborate signatures that included their name, affiliation, interests, and humor.

This did not guarantee that all postings would be on-topic and civil. One problem was simply ignorance; the newsgroups had developed their own culture and rules of behavior (for example, don’t post content with nothing more than “me too,” and read past discussions to make sure that your question has not already been answered in earlier exchanges), but newcomers didn’t know these rules. Usenet was growing rapidly and there were a lot of newcomers, particularly each fall when the new students arrived on campus and got their first computer accounts. The main repercussion for poor behavior was social sanctioning from within the community: people would send emails or write angry responses to offending users. Institutional monitoring varied. Employees of conservative corporations or military centers were more constrained; their supervisors were likely to be intolerant of wasting time and resources, let alone poor behavior. Most universities, however, did not closely monitor students’ online activity, and did not act unless an offense was egregious enough to generate serious complaints; poor etiquette and hostile postings passed unnoticed.

Some examples of Usenet signatures from the mid-1990s. See Donath 1998 for more on the social meanings encoded in Usenet signatures.

Signature of an anonymous remailer user:

Matou can be reached at, or, if you prefer to send your email on vacation to Scandinavia.

The following are signatures from a motorcycle newsgroup. The DoD number is a signal of membership in the online, and somewhat parodic, motorcycle club, “Denizens of Doom”:

“Tuba” (Irwin) “I honk therefore I am” CompuTrac-Richardson,Tx DoD #0826(R75/6) (Ducati 900GTS)



DoD #5088

You talking at me? Are you talkin’ to me? Who are you talking to?


This signature is a joke accessible only to other advanced C (a computer language) coders:

main(v,c)char**c;{for(v[c++]="Rick Tait <>\n)";(!!c)[*c]&&(v--||--c&&execlp(*c,*c,c[!!c]+!!c,!c));**c=!c)write(!!*c,*c,!!**c);}

Limited access to the requisite computers and the network put a hard, infrastructural boundary around Usenet. In the early days, it excluded everyone except for researchers at select institutions. Increasingly, though, people gained access to Usenet via commercial accounts—network connections that they paid for independently, unattached to a valued affiliation. Customers of the first such services9 were still a technologically oriented group: personal computers in the early 1980s were a hobbyist’s specialty. But the significance for the Usenet community was that these users participated via services that, so long as no laws were broken, did not care how upstanding they were; at most, the service might terminate their account, but it could not jeopardize a job or a degree.

The phrase “Endless September” described the change in Usenet culture once novice consumer users gained access. The Jargon File, an open collection of hacker slang, defined it as follows:

All time since September 1993. One of the seasonal rhythms of the Usenet used to be the annual September influx of clueless newbies who, lacking any sense of netiquette, made a general nuisance of themselves. This coincided with people starting college, getting their first internet accounts, and plunging in without bothering to learn what was acceptable. These relatively small drafts of newbies could be assimilated within a few months. But in September 1993, AOL users became able to post to Usenet, nearly overwhelming the old-timers’ capacity to acculturate them; to those who nostalgically recall the period before, this triggered an inexorable decline in the quality of discussions on newsgroups. Syn. eternal September. See also AOL! (Raymond 2003)

In the 1980s and early 1990s, Usenet was thriving.10 There were hundreds of topics and lively discussions. There were heated flame wars, but it was a very viable interaction space. At the same time, the user base was changing, from a small community of people with much in common to a heterogeneous crowd divided into a quickly multiplying set of subcultures. The technical users still dominated, and the most popular topics included computer languages, network properties, and science fiction. But as the population diversified, so did the range of topics and style of discourse.

Several developments in the early 1990s radically changed Usenet. The biggest immediate effect was when the large consumer network services—first CompuServe, then Prodigy, then AOL—gave access to Usenet to their customers. As with other commercial network providers, people who accessed Usenet via these services were coming in with an account unaffiliated with their livelihood. These companies marketed themselves to novices, to people who weren’t interested in the computer as a machine, but simply wanted the services that it could provide. They were even more dissimilar to the computer professionals and researchers who dominated the early days.

Furthermore, they—especially AOL users—arrived with their own chat-room culture, in which identity was fluid. The AOL chat-rooms were often fantasy spaces, where users with sexy or silly names played games with each other, using an online patois filled with emoticons and abbreviations. AOL users were accustomed to fanciful and disposable screen-names: “sexymama69” and “BigGuyNumber1.” Embarrass yourself or offend too many people using one such guise? It was easy to simply take on another name, a clean new identity.11

This population made a big impact when it arrived in the Usenet discussion space. Usenet had a culture of expertise: people were expected to take time and learn a group’s mores before posting to it. The consumer services were marketed to newcomers and made their money by signing up more and more novice subscribers. To encourage inexperienced users, they created an environment where to be a naive user was fine; just try stuff, join in, say anything. Many users from these services saw Usenet as just a big continuation of their familiar, informal chat-rooms, and they brought with them their smiley faces, cheap identities, and naive ignorance of the conventions and mores of the established Usenet groups (Coates et al. 1994). The infrastructural boundaries of institution-based access and professional reputation dissolved into the much looser bounds of personal computer ownership and an inexpensive monthly subscription.

A second development at the same time—the creation of a public anonymous server by a Finnish programmer named Julf Helsingius—had a smaller immediate effect but was symbolically important. Usenet worked much like email: when you posted something, your username, including the service that provided your Internet access, was automatically included in the header. Even if your username had no resemblance to your real name, if someone wanted to know who you were, it would generally not be too difficult to trace your real identity. An anonymous server changed this. By routing your message through this server, it would appear under any name you chose there, and tracing it back to you would be beyond other users’ ordinary means. Thus, if you wished to harass someone, or post neo-Nazi propaganda, or otherwise behave in a way that was disruptive and likely to inspire others to want to retaliate against you, the shield of an anonymous account could be quite useful.

Many of the service’s users were not, in fact, engaging in antisocial behavior. Many were committed to the principle that anonymity had an important role in online life. Anonymity protects someone when he or she is doing something that invites reprisal, which might be malicious behavior, but could also be dissent in an oppressive regime or whistleblowing in a corrupt corporation. The conundrum of anonymity is that whereas it protects those resisting an oppressive ruling group, it also protects those who are destructively disruptive. The service was actually pseudonymous rather than anonymous. Helsingius kept a database that connected the users’ anonymous accounts to their regular email addresses. One could thus receive mail via such an account and establish a long-term history for a particular identity. But it also meant that Helsingius could be (and was) forced to reveal these identities.12 Though the service turned out to be insecure, it was not long before other more secure remailers were set up. Some routed mail through multiple servers in different countries, making it much more difficult for someone to secure all the legal subpoenas needed for tracing identity. Others were true anonymous servers, where each post you sent would appear under a different name, with no records tied to you. Though you could not receive incoming mail via an anonymous remailer, they would suit someone who simply wanted to send off disruptive missives.
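The difference between a pseudonymous and a true anonymous remailer comes down to one data structure: a table mapping invented names to real addresses. A minimal sketch (class and method names are hypothetical, and real remailers involved mail routing and encryption omitted here):

```python
import itertools

class PseudonymousRemailer:
    """Sketch of a penet-style service: it keeps a table linking each
    pseudonym to a real address, so replies can be delivered and a stable
    identity can accrue a history -- but that same table is exactly what
    a subpoena can force the operator to reveal."""
    def __init__(self):
        self.table = {}                 # pseudonym -> real address
        self._ids = itertools.count(1)  # sequential ids for the sketch

    def send(self, real_addr: str, body: str) -> tuple[str, str]:
        # Reuse the sender's existing pseudonym so their identity persists.
        for pseudonym, addr in self.table.items():
            if addr == real_addr:
                return pseudonym, body
        pseudonym = f"an{next(self._ids)}"
        self.table[pseudonym] = real_addr
        return pseudonym, body

class AnonymousRemailer:
    """A true anonymizer keeps no table: every message goes out under a
    fresh, unlinkable name, and no reply can ever find its way back."""
    def __init__(self):
        self._ids = itertools.count(1)

    def send(self, real_addr: str, body: str) -> tuple[str, str]:
        return f"nobody{next(self._ids)}", body
```

Two messages from the same sender come back under the same pseudonym from the first service, but under different names from the second; the first enables reputation and replies at the price of a seizable record, the second erases both.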

Soon there were floods of anonymous and untraceable postings to Usenet. Spam became a growing problem.13 It became clear that the concept of Usenet as an open public community had worked only because in the beginning, “public” had not been very public. Admittance had required having already passed the admissions barriers to a high-end university or elite technical job. It had meant being a named person responsible for and protective of one’s own reputation. Though Usenet had appeared to be an open public space, it had in fact been a protected garden set behind the walls of network access in the 1980s.

A third development eventually ended Usenet as a functioning community. In 1995 DejaNews, a Web-based archive and searchable index of Usenet postings, was started. It made it easy to search for messages by key word or author, but it also removed the postings from their original social context. Previously, you followed a discussion because you were interested in the general topic; conversations were the basic units, rather than individual postings. With the advent of a searchable Web interface, people started replying to posts with no idea of what came before or after. Easy searchability broke down the boundaries that kept newsgroups as coherent and semiprivate conversational spaces. Today, Usenet still exists, but it is an unsociable morass of spam, porn, and pirated software.

But elsewhere, online discussions are thriving. Some have strict boundaries: private mailing lists or discussions among pundits that the public is invited to read but not contribute to. Others are moderated by a trusted leader or through a system of meta-moderation, in which community members are given incentives for writing well and moderating fairly. Some require participants to use their real name and identify themselves within some community, such as their network of acquaintances. Because of the prevalence of spam, there are almost no wholly open sites; even the notorious anonymous boards of 4chan have moderators.14

Boundaries of Online Discussions

In the mid-1990s, when Usenet was an enormous but still thriving site, sociologists Peter Kollock and Marc Smith wrote a paper about it entitled “Managing the Virtual Commons.” A commons is a resource shared by a population. The challenge of the commons is to provide open access while preventing destructive exploitation. For a commons to survive, its users must take care of it; they cannot overexploit it or neglect it. Kollock and Smith drew on ecological and political theories of managing such communally shared resources to examine the rich possibilities—and looming difficulties—of a massive, online conversation space (Kollock and Smith 1996).

Elinor Ostrom’s design rules for successful community management of the commons are:

  1. Group boundaries are clearly defined.

  2. Rules about use of collective goods are adapted to local needs and conditions.

  3. Most people affected by the rules can participate in modifying the rules.

  4. External monitors that help reinforce the group’s rules are also accountable to the group.

  5. Sanctions in proportion to the offense are assessed by other members (or other authorities they authorize) on individuals who violate the rules.

  6. There are accessible and quick conflict resolution mechanisms.

  7. External authorities respect the group’s right to self-govern. (Ostrom 1999)

The notion of a commons, free and open to all, has great appeal.15 But is it sustainable? In “The Tragedy of the Commons,” ecologist Garrett Hardin (1968) argued that exploitation to the point of destruction is the inevitable fate of a publicly shared resource, since individuals acting in their own self-interest will use more and more of it, until it is ruined. For example, a village may have a common pasture on which all inhabitants may graze their sheep. So long as the pasture is healthy, everyone benefits. Yet since people individually accrue the benefit from feeding their animals there, while the whole community shares the cost of resources they consume, people will keep adding more and more animals until the pasture is overgrazed and barren, at which point everyone loses.16
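Hardin’s logic can be made concrete with a toy model. In this sketch, all numbers are illustrative assumptions: each herder captures the full gain from adding an animal but bears only a 1/n share of the overgrazing damage, so with many herders the individually rational stopping point overshoots the pasture’s capacity.

```python
CAPACITY = 100        # animals the pasture can sustain without damage
GAIN = 1.0            # private benefit of grazing one more animal

def marginal_payoff(total: int, n_herders: int) -> float:
    """One herder's payoff for adding the (total + 1)-th animal.

    The gain is private, but the overgrazing damage is split among
    all herders -- the asymmetry at the heart of Hardin's argument.
    """
    over = max(0, total - CAPACITY + 1)   # marginal damage once full
    return GAIN - over / n_herders

def stopping_point(n_herders: int) -> int:
    """Total herd size once adding an animal no longer pays off."""
    total = 0
    while marginal_payoff(total, n_herders) > 0:
        total += 1
    return total

print(stopping_point(1))    # a sole owner stops at carrying capacity: 100
print(stopping_point(10))   # ten herders dilute the cost and overgraze: 109
```

A sole owner internalizes all the damage and stops at capacity; as the damage is shared among more herders, the commons is pushed further past its limit.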

Is Hardin right? Are common resources inevitably overgrazed, overrun, and overused? Hardin claimed that only strict, external governance could salvage these resources. When we look at the many ecological disasters of the last century, his pessimism seems well founded. Yet there are counterexamples. Elinor Ostrom won the Nobel Prize in Economics for her studies of economic governance, particularly of commons. She showed that there are many examples of communities that have successfully self-managed common resources. Central to their success are well-maintained boundaries, defining the extent of the resource and identifying the population of legitimate users. Also essential are locally adapted rules about sharing, collective decision making, and monitoring and sanctioning those who break the rules.

Both Hardin and Ostrom agree, and history demonstrates, that poorly governed commons will be overexploited. They differ in the solutions they propose and the enthusiasm with which they assess the likelihood of success. Hardin claims that only private property and centralized, external government can save common resources (and his overall assessment is quite pessimistic). Ostrom champions self-governance. While she readily concedes that self-governance does not always work, she points out that external force has fundamental problems in compliance and a poor record of adapting to local circumstances. She emphasizes the importance of local and evolving rules: there is no universal recipe for successful self-management, and changing circumstances, such as a growing population, new demands, or new technologies, require adaptability.

This chart shows the rough breakdown of goods according to whether they are excludable (whether others can be prevented from using/obtaining) and rivalrous (whether one person’s use diminishes another’s). Few examples are clean cut: a polluting factory makes air rivalrous, and the air in a scuba diver’s tank is a private good.




Private goods (excludable, rivalrous)—cars, clothing

Club goods (excludable, nonrivalrous)—satellite TV, private parks

Common goods (nonexcludable, rivalrous)—public pastures, fishing grounds

Public goods (nonexcludable, nonrivalrous)—broadcast TV, air, knowledge

Thinking about public online conversations as a commons is useful for several reasons. It forces us to think about the resources that the participants use and the value they create, for themselves or others. Ostrom’s model of what makes a commons sustainable provides a guide for designing better interfaces for these interactions. Economists who study commons and their resources classify them along two axes (see sidebar). One is “excludability,” or how easy it is to limit access to the resource. Physical objects are excludable: storeowners hold onto cars, shoes, books, and so on until the customer pays for them, at which point ownership is transferred. Broadcast television is nonexcludable: once you send the signal into the air, anyone with the right receiver can view it. Technology can manage exclusion: if we encode the television signal so that you need to pay to get the key to decode it, we have turned it into an excludable good. The other axis is “rivalry”: does one person’s consumption of the good diminish what is available to others? A pie is rivalrous; television signals, whether free and over the air or limited by encoding and subscription, are not.

The “commons” in the “tragedy of the commons” refers to resources that are nonexcludable and rivalrous, meaning that it is hard to limit access to them and that one person’s consumption reduces what is available for the others. But it is important to note (and many sources on goods and commons omit this) that often the resources in a commons are renewable. Grass grows, fish spawn. If they are ruthlessly exploited, they will disappear, but if managed well, they are sustainable.

When we look at a commons such as a pasture for cows, it is easy to see what the resource is that needs protection: the grass. But what is it in a conversation? Kollock and Smith (1996) identify the virtual resource in danger of overexploitation as “bandwidth.”17 People have limited time and attention. A public online conversation provides a platform for expression; anyone can write, at any length and on any topic. Repetitive, off-topic, or offensive postings waste bandwidth. If many postings are dull, uninformative, or irrelevant, people will give up. If we see bandwidth as a rivalrous resource, where one person’s posting takes away from the attention available for another’s, then boundaries need to limit who can post, and some form of rules and repercussions need to enforce these bounds. Yet it is important to keep in mind that the real goal is not fewer postings—indeed, too few is as much a problem as too many—but more high-quality contributions.

A different approach is to focus on conversation spaces as places where public goods in the form of informative or entertaining contributions are produced rather than as a commons of bandwidth to be preserved. Joseph Stiglitz, another Nobel Prize winner, referred to knowledge as a “global public good,” and also good in the normative sense: something that benefits all. Public goods are nonrivalrous and nonexcludable. Being nonrivalrous, the key problem with public goods is not consumption (in the case of knowledge, my learning something does not impede your acquisition of that knowledge18) but in motivating their production. Much of Stiglitz’s writing is about finding ways to finance research. Here, in the realm of online conversation, the challenge is to find the social motivations for knowledge production. What motivates people to answer questions, share knowledge, and organize information, usually without pay, in the context of an online social space?

The relationship between consumers and producers is quite different in a discussion space than in a pasture or fishery. In the domain of cows and fish, other herders and fishermen are, for the most part, tolerated but not desired. One would be happy to have the whole pasture or pond to oneself (though the sustenance of community is also a valuable thing). But the ecology of a conversation is different. Reading and writing are symbiotic: writing has no value without readers, and reading is not possible if no one writes. Most participants take both roles. Only spammers, the least welcome of writers, write without reading. Conversations are diverse ecosystems: the motivation of different people—or the same people, at different times—varies. Some are primarily readers seeking entertainment or enlightenment. Some are primarily writers seeking to promote a cause. Some write to ask a question, hoping to generate useful responses. Some people seek community and support; they want a back-and-forth exchange of ideas.

How people perceive this ecosystem has a big impact on the type of activity they will want to promote, and thus the designs they will prefer. Kollock and Smith (1996) referred to people who read but do not write as lurkers, a common though somewhat pejorative-sounding term frequently used within the online community (Nonnecke and Preece 2003). Their disapproval of this behavior was clear; they listed lurking as a form of free-riding. It is true that the lurker is not producing more knowledge or other valued content. Yet the quiet reader is not using up bandwidth; most importantly, he is an audience, without which the writers’ efforts would be in vain.

The social ecosystem is in delicate balance. High-quality content needs to be encouraged, but poor-quality content is more harmful than nothing at all. Readers are valuable, and their presence motivates writing; but if the reader writes nothing, the writer has no way of knowing her work has been read.

Quality is contextual. Writing without adding significant content (the “me too” posting) is often considered to be poor-quality writing; it uses up bandwidth without adding information. Yet in conversations that are personal and supportive, phatic communications—messages that have purely social content, such as “you’re so right,” “we’re thinking of you”—are essential for expressing sympathy and understanding. And, such social discourse is important even in technical discussions that rely far less on social bonds, for it is through such commentary that people form reputations and learn mores within a community context; it is key to motivating them to be productive participants.

Ostrom emphasized that commons need boundaries in order to survive. Usenet began as a bounded community; though ostensibly open to anyone with a computer account, it was effectively a small bounded community of researchers and professionals. As computers became more widespread, the community grew. Some growth was good; it went from a limited community of very like-minded users to an unprecedentedly immense social space in which enormous numbers of diverse participants engaged in a wide variety of discussions. But then it grew too big and too open. Its loosened boundaries allowed in too many exploitative users (spammers), and new technologies (search) further broke down the boundaries that had sustained ongoing discussions. In a world of anonymous accounts, sophisticated bots, and massive search engines, how can we design boundaries to sustain community?

Identifying Who (or What) Is In or Out

Personal identity is essential for most boundaries. Without it, history cannot accrue and a reputation cannot be established. Should the individual break the community’s rules, identity enables social sanctioning and, if necessary, banning that person from using the community’s resources. This identity might be tied to one’s real name, or to a persistent pseudonym; these are discussed in detail in chapter 11, “Privacy and Public Space.”

Social identity, or type, can also be the bounding delimiter. Some resources or communities are open only to people of a certain type—those over age thirteen or eighteen, or those who are experts in particle physics. “Type” can refer to any number of characteristics: big social categories of age, gender, and race; institutional ones such as “employee of company X”; situational ones such as “sufferers of migraines”; and so on. Much of the controversy around exclusionary practices is about type-based boundaries; a society can find some boundaries to be quite acceptable, even imperative, while others are forbidden. Designing the boundaries for a closed group of known and identified individuals is a relatively straightforward issue of security (Harper 2006; Schneier 2011), but providing access to a whole class of people, without knowing in advance who the individuals are, can be more complex, socially if not technically. Social identity is the focus of chapter 9, “Constructing Identity.”

In closed communities, individuals are vetted to ensure that they are of an accepted type to participate. They subsequently need only verify their identity (whether with an ID card to gain access to a physical space, or an email, username, and password in a virtual one) to enter. A closed mailing list works this way; once you are accepted as a participant, your individual email address is your key.19 The list rejects mail sent from other addresses. Yet the great vitality and promise of online discussions is that they are open, providing support and camaraderie to and drawing from the knowledge of people from all over. The design problem is to figure out how tight or porous to make the boundary, how to protect the community while keeping it open to new people and new ideas.

The disadvantage of tighter boundaries is decreased access. If I greatly limit the number of people I hear from, I get less information; defining boundaries too narrowly or thoughtlessly loses valuable perspectives. Hearing differing viewpoints is difficult. We often like to hear only what reaffirms our preexisting beliefs. But much of the benefit of being in dialogue is learning about what people different from us think, and perhaps changing our own views or influencing theirs. Heterogeneity can be a valuable, though unpopular, goal.

Boundaries protect against two different types of outsiders. Aspirants to the group want to be part of it, to have access to its resources and the status it confers. In an open and mobile society, today’s aspirant may well be tomorrow’s insider. The country club membership, once denied, is granted when the applicant becomes partner at her firm; the newcomer to the city is invited to underground parties once he learns the nuances of how to dress and talk. The other type, predators to the group, such as thieves, rival street gangs, or spammers, do not want acceptance by its legitimate members; they seek its resources or benefit by harming it. Many of the most visible barriers—from the barred windows and steel doors on houses in poor neighborhoods (and celebrities’ mansions) to the requirement that comments be held until a moderator approves them in an online forum—are there to protect against predatory outsiders.

Malignant outsiders, whether deliberately malicious or simply in pursuit of their own incompatible goals, destroy the common resource. There may be plenty of fish in the pond, but not if a factory sets up on the shore and dumps toxic sludge into the water. On Usenet, spam was the equivalent of pollution, and the technologies that let it cross the boundaries of the discussion space were anonymous access, automated scripts, and computer viruses. One of the first widely disruptive spam campaigns on Usenet appeared in 1994. In numerous newsgroups, any mention of Turkey was answered by a user who called himself “Serdar Argic” and whose postings consisted of long diatribes denying the Armenian genocide committed by the Turks, and claiming instead that Turks were slaughtered by the Armenians. Argic posted prolifically, averaging one hundred posts a day.

Eventually, many people came to believe that “Serdar Argic” was actually a bot, a scripting program that automatically generated the messages; in this case, one that searched newsgroups for mentions of Turkey or Armenia and then pieced together a response by grabbing a line or two from the original letter and appending its typical tirade. This suspicion was strengthened by the poster’s indifference to whether a mention of Turkey referred to the country or to Thanksgiving turkeys and other culinary, avian, and nonpolitical uses of the word.

Serdar Argic’s histrionic messages were designed to provoke, using strategies such as taking the original poster’s name, adding an Armenian ending to it and referring to the writer’s “criminal Armenian grandparents” (DeVoto 1994). The reflexive response to an offensive post is to respond, whether calmly explaining why it was inappropriate or angrily demanding an apology. In a newsgroup, numerous people may respond this way. If the offending writer intends to be a participant in the discussion, he or she may learn from this response; it is a way of articulating and enforcing the group’s social norms, and although it temporarily distracts from the main discussion, it serves a useful purpose. But posters deliberately inciting discord are only encouraged by such responses. Serdar Argic—and, over time, many other “trolls”20—was able to thoroughly disrupt several groups by hijacking the conversation to be about him; rather than the ostensible topic of the group, most posts would be Argic’s rants, responses to his rants, or people telling others not to respond to him. Social sanctions work only when the person being sanctioned cares about being in good standing with the group (and is actually a person). People who intend to be disruptive cannot be stopped by sanctioning; they don’t care if their reputation in the group is lowered.

Concern about unwanted intrusion influences design. In a safe neighborhood in carefree times, there may be many open windows and doors, few obvious locks, and building designs that maximize sunlight and views. In a high-crime neighborhood, the houses look like fortresses, with heavy steel doors and bars on the windows. Online, the barriers are complex registration processes and tests to prove that you are human, not a bot.

Knowledge Borders

In the physical world, access based on social type is often determined by physical characteristics, since they are easy for us to assess. A nightclub’s doorman admits only stylishly dressed patrons; customers can try on clothing in the ladies’ dressing room if they appear to be female; decades of civil rights legislation still struggle against the legacy of racial segregation. Such boundaries can be essential to society or highly destructive, depending on whom they admit and why. Online, where we have information, not bodies, potential entrants can be assessed based on their knowledge (or lack of it). Such knowledge boundaries have everyday precedents, such as the tests of knowledge that must be passed to gain admittance to college, to the police academy, or to obtain a driver’s license.

A simple but good example has been in use since the early days of personal homepages. Someone puts up a collection of family photographs. They want their friends to be able to see them, but not the general public. They also do not want to enumerate everyone who should have access; it is too much work and they may omit someone. Instead, they protect the page with a password, and provide the clue that it is, say, the name of their middle son or the family dog. People who are close enough friends will know this information and be granted access to the photographs. And, unless they are famous or have posted this information elsewhere, it is the sort of data that online research will not find (at least not for now). Such “group knowledge” identifiers work best for small communities, whose members share knowledge that is obscure to outsiders. Sconex, an early social network site aimed at high school students, used a series of questions such as “What floor is the library on?” or “Who is the substitute gym teacher?” to assess whether applicants actually attended the schools they claimed to (Laraqui 2007). Some online sites for ultra-Orthodox Jews limit membership to followers of a particular rabbi; the organizer asks potential participants to name the topics discussed at recent gatherings (Campbell and Golan 2011). While the biggest design challenge is devising a test that works, other considerations further complicate design. A lengthy examination will be too onerous for most sites; it could discourage the legitimate users, too. So any “test” needs to be lightweight: the effort to gain access must be in proportion to the participants’ desire to enter.
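The family-photo gate above can be sketched in a few lines. This is an illustrative model, not any particular site’s implementation; the question and answer are invented, and only a hash of the expected answer is stored so that the secret itself never sits on the server.

```python
import hashlib
import unicodedata

def normalize(answer: str) -> str:
    # Friends should get in whether they type "Rex", "rex ", or "REX"
    return unicodedata.normalize("NFKC", answer).strip().lower()

def make_gate(expected_answer: str) -> str:
    """Store a hash of the normalized answer, not the answer itself."""
    return hashlib.sha256(normalize(expected_answer).encode()).hexdigest()

def knows_the_answer(gate: str, guess: str) -> bool:
    return hashlib.sha256(normalize(guess).encode()).hexdigest() == gate

# "What is the family dog's name?" (hypothetical question and answer)
gate = make_gate("Rex")
print(knows_the_answer(gate, "  rex "))   # True: a friend gets in
print(knows_the_answer(gate, "Fido"))     # False: a stranger guessing
```

The effort here is deliberately lightweight, matching the principle in the text: the cost of entry is a single piece of shared knowledge, proportionate to how much a casual visitor wants to see vacation photos.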


Figure 7.1

A text CAPTCHA, circa 2012.

Since the days of Serdar Argic, arguably the most important boundary online is the one distinguishing humans from machines. The vast quantities of spam that clog mailboxes and message sites are machine generated, sent by programs of ever increasing sophistication. What is needed is a test that humans can pass and computers cannot, so that sites can grant access to the former while barring the latter. A CAPTCHA is a “completely automated public Turing test to tell computers and humans apart” (von Ahn et al. 2003). The challenge in making a CAPTCHA is not only to devise a test that only humans can pass, but also to make it so that both judging and generating new instances of it can be done automatically. A popular version today (circa 2012) uses pictures of distorted letters (see figure 7.1). Computers can generate these images algorithmically so that an endless quantity of them can be created as needed, and they can easily assess the correctness of an answer, which is simply a string of letters. Contemporary computers have difficulty deciphering them, whereas people can do so with relative ease. But computers are getting better at this task, and will soon be able to pass it. Other CAPTCHAs, perhaps also harder for humans to solve, will be needed.21
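The two requirements described above, automatic generation and automatic grading, can be sketched as server-side bookkeeping. This is a minimal illustration under assumed names; the image-distortion step that actually defeats machines is left as a placeholder comment.

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits
_pending: dict[str, str] = {}   # challenge id -> expected answer

def new_challenge(length: int = 6) -> tuple[str, str]:
    """Generate an endless supply of fresh challenges algorithmically."""
    answer = "".join(secrets.choice(ALPHABET) for _ in range(length))
    challenge_id = secrets.token_hex(8)
    _pending[challenge_id] = answer
    # In a real system, `answer` would be rendered here as a distorted
    # image -- the part that is easy for humans and hard for machines.
    return challenge_id, answer

def verify(challenge_id: str, typed: str) -> bool:
    """Grading is trivial: compare strings. Challenges are single-use."""
    expected = _pending.pop(challenge_id, None)
    return expected is not None and typed.strip().upper() == expected

cid, answer = new_challenge()
print(verify(cid, answer.lower()))   # True: case and whitespace forgiven
print(verify(cid, answer))           # False: the challenge was spent
```

Making challenges single-use prevents an attacker from solving one instance (or paying a human to solve it) and replaying the answer indefinitely.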

Moderated Borders

A different approach to bounding discussion focuses on the content of messages, rather than or in addition to identity-based boundaries. Moderated groups, in which a single person or small group of people read each message before it is posted, date back to the earliest days of ARPANET. Moderation ensures that only messages of sufficiently high quality become part of the discussion. But they are limited by size; if the discussion is too popular the moderators will be overwhelmed, resulting in long delays before messages are posted. They are also subject to the whims of the moderators, who may censor views contrary to their own.

From the Slashdot Q&A:

Most of the trolls and useless stuff comes from “Anonymous Coward” posters. Why not eliminate anonymous posting? We’ve thought about it. We think the ability to post anonymously is important, though. Sometimes people have important information they want to post, but wouldn’t do so if they could be linked to it. Anonymous posting will continue for the foreseeable future. (Miscellaneous—FAQ—Slashdot)

Distributed moderation, in which many or all of the participants act as moderators, alleviates these problems. Commonly, participants can vote on the postings, and ones that receive higher ratings are given greater prominence; often there is a connected reputation system, and participants whose posts are highly rated receive corresponding reputation points. One of the earliest sites to successfully use distributed moderation is Slashdot (Lampe and Resnick 2004). It is a news discussion site focused on technological topics, where the discussions occur via comments on individual stories. The stories are suggested by the community, but the editorial staff decides which ones to feature on the site. The participants comment on them, and they also rate the comments according to a series of criteria including funny, insightful, redundant, flamebait, and so on. Not everyone can moderate all the time; it is a privilege that is granted at random for a limited amount of time. Slashdot also features meta-moderation; the ratings are themselves moderated by a rotating group of participants with more established accounts. The moderator’s ratings on a comment raise or lower its score, which affects how prominently it is displayed. Ratings also affect the karma (Slashdot’s version of reputation) of the users who wrote the comment. With higher karma, one is more likely to be chosen to moderate; thus, over time, the people whose contributions to the site align with the community’s tastes gain greater influence over the site. Slashdot does allow for anonymous participation (see sidebar), though it encourages people to establish an identity; only logged-in users can moderate, and contributions from identified users start with a score of one, whereas anonymous contributions start with zero. Anonymous commenters must solve a CAPTCHA to prove that they are at least human.
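The scoring rules just described can be sketched as a small model: identified comments start at one, anonymous ones at zero, moderator votes move the score, and ratings feed back into the author’s karma. The clamping range and the karma arithmetic here are illustrative assumptions, not Slashdot’s actual implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    author: Optional[str]       # None means an "Anonymous Coward" post
    score: int = 0

    def __post_init__(self):
        # identified users start at 1, anonymous contributions at 0
        self.score = 1 if self.author else 0

karma: dict[str, int] = {}      # accumulated reputation per user

def moderate(comment: Comment, delta: int) -> None:
    """Apply one moderator's +1/-1 vote (score clamped, assumed -1..5)."""
    comment.score = max(-1, min(5, comment.score + delta))
    if comment.author:
        karma[comment.author] = karma.get(comment.author, 0) + delta

post = Comment(author="alice")
for _ in range(3):
    moderate(post, +1)          # three moderators find it insightful
print(post.score, karma["alice"])   # 4 3
```

The feedback loop is the interesting design choice: a highly rated comment raises its author’s karma, and higher karma makes the author more likely to be handed moderation privileges, so influence accrues to those the community already approves of.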

Slashdot’s multilayered boundary sustains a robustly active (as of 2012) news-gathering and discussion site. The boundary includes the infrastructural layer of the CAPTCHA, keeping out vast quantities of automated spam. It includes the community itself, moderating and meta-moderating the content, a constant distributed sorting process that highlights the most important stories and comments and conveys increasing influence to its most respected members. And it includes the staff—the gatekeepers who determine which stories to feature, which moderators to reward, and which names or IP addresses to ban.

There are now a number of sites that use a variation of this multilayered boundary and incentive system to give prominence to high-quality content, filter out spam, and create a community of known users while still allowing pseudonymous and even anonymous participation. They differ in their technical implementation: Do users rate others? Re-edit their words? What are the privileges granted to trusted participants, and how does one gain them? They also differ considerably in their overall goals. Slashdot is used to quickly publish breaking news stories. Other moderated sites include Wikipedia, for collaboratively creating neutral, authoritative encyclopedia entries; the Stack Exchange network, for questions and answers on focused topics; and 4chan/b/, for the fast exchange of outrageous images.22

Stack Overflow is a site devoted to questions and answers about programming. It has a carefully planned economy of reputation points. You lose points if your postings are down-voted; you gain 5 points if your question is up-voted and 10 points if an answer you give is up-voted. All constructive (as defined by the community) participation is rewarded, but the more generous act of helping to solve someone else’s problem is rewarded more highly; question askers, presumably, are sufficiently rewarded by the possibility of receiving a solution to their problem. Privileges are intricately engineered, with 15 levels of permissions: to create a post, you need one reputation point; 15 points will allow you to vote a comment up, but you need 125 to down-vote (and down-voting costs you a bit in reputation points, too); with 20,000 points you become a trusted user, with numerous privileges, including the ability to delete answers that have negative scores. As with Slashdot, you can participate anonymously, but with reduced capabilities.
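The point values and thresholds quoted above can be read as a small rule table. The +5/+10 rewards and the 1/15/125/20,000 privilege thresholds are the figures given in the text; the exact down-vote penalties are illustrative assumptions.

```python
# Reputation events: answering is rewarded more than asking.
# The down-vote penalties below are assumed values for illustration.
POINTS = {
    "question_upvoted": 5,
    "answer_upvoted": 10,
    "post_downvoted": -2,    # your post was voted down
    "cast_downvote": -1,     # down-voting costs the voter a bit, too
}

PRIVILEGES = [               # (minimum reputation, privilege)
    (1, "create posts"),
    (15, "vote up"),
    (125, "vote down"),
    (20_000, "trusted user"),
]

reputation: dict[str, int] = {}

def record(user: str, event: str) -> int:
    """Apply one reputation event and return the user's new total."""
    reputation[user] = reputation.get(user, 0) + POINTS[event]
    return reputation[user]

def privileges(user: str) -> list[str]:
    rep = reputation.get(user, 0)
    return [name for threshold, name in PRIVILEGES if rep >= threshold]

record("bob", "answer_upvoted")      # 10 points for a helpful answer
record("bob", "answer_upvoted")      # 20 points total
print(privileges("bob"))             # ['create posts', 'vote up']
```

The asymmetry is the point of the design: generous acts (answering) pay more than self-interested ones (asking), and destructive acts (down-voting) carry a small cost to discourage casual negativity.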

Like the blueprints for an empire of carefully planned suburban developments, Stack Overflow’s code has been reproduced to create the Stack Exchange network of Q&A sites, each of which has a different topical focus, though all share the same social infrastructure. One’s identity works across sites, and experienced users with a high reputation on one site start off with 100 points on any new site they join. The designers emphasize that each site is designed to eventually be fully self-governing, though they are launched with moderators picked by the Stack Exchange staff.

Conversation on the Stack Exchange sites is tightly controlled. Clear directives stipulate exactly what sort of question is appropriate, and each site has parallel spaces for discussing issues of site governance, and what precisely is a prototypical question for that topic. Reputation and social approbation are central, but their operation has been made almost entirely infrastructural; the Stack Exchange model provides a prefabricated social structure for topical discussion. Indeed, even the concept of “discussion” is denigrated: when one answers a question, a note pops up saying: “Please make sure you answer the question; this is a Q&A site, not a discussion forum. Provide details and share your research. Avoid statements based solely on opinion; only make statements you can back up with an appropriate reference, or personal experiences.”

This focus makes the Stack Exchange technical sites exceptionally useful. In the fast-moving world of programming languages and system administration, they provide a place to get help for problems like: “I bumped into this strange macro code in /usr/include/linux/kernel.h [paragraphs of code] … What does :-!! do?” A related site, MathOverflow, focuses on concrete, research-level math questions with definite answers;23 it has attracted a remarkable community ranging from graduate students to Fields Medalists. But definitive questions and answers are only one of the myriad forms that social interactions can take. For other topics addressed in the Stack Exchange network, such as science fiction or parenting tips, the regimented rigor that brings clarity to the technical discussion may be too stiffly reserved for these more inherently social discussions.











Figures 7.2–7.6: Lolcats are pictures of cats with added captions. The captions themselves are in-jokes. They are easy to make, but one must understand the swiftly evolving idiom to create one that fits. Figures 7.2 and 7.3, for example, are plays on the phrase “im in ur base killin ur d00dz,” a popular gaming meme. Such in-jokes can function as cultural boundaries when only people who know the underlying pattern can understand the joke. A picture of the economist Adam Smith captioned with his famous phrase, “invisible hand,” does not seem funny at first glance. But in context with other “invisible [noun]” images (figures 7.4 and 7.5), it becomes humorous.

At the opposite end of the discursive spectrum from Stack Exchange’s constructive regulation is 4chan. It is an image board site, where participants, usually anonymous, post and comment on pictures. It consists of various boards, some devoted to topics such as anime or food and cooking. The most popular and infamous is the random board, /b/. Obscene and destructive, it is popularly described as “the id of the Internet.” At first glance, /b/ seems anarchic. Images are posted so quickly—4chan gets 1,000,000 posts a day24—that each screen refresh yields a new crop of raunchy, puerile, and offensive postings. The comments are unintelligible to a newcomer, filled with repeated references to “ponies,” “newfags,” “rolling,” “bumping,” and other in-jokes. Yet out of this chaos have emerged numerous memes that spread widely across the Internet, such as lolcats (pictures of cats with funny captions—see figures 7.2–7.6) and rickrolling (tricking readers into clicking on what they think is a relevant link only to get, instead, a video of Rick Astley singing “Never Gonna Give You Up”). It has also launched anonymous activist actions including cyber-attacks on the Church of Scientology. Users of the board have participated in large distributed denial of service attacks (DDOS), taking down websites of organizations, including antipiracy sites and news media sites that criticize them.25 Though it has no formal organization or recognized leaders, the sheer size of the anonymous crowd that assembles on /b/ gives it power. Understanding the order in its seeming chaos can help us understand the source of its incisive creativity.

Almost all postings on 4chan are anonymous, including on the more innocuous boards, such as paper-crafting or science and math. Even if you do post with a name, anyone else can use that name, too. Though unique identities can be created using a technique called tripcodes (see figures 7.7 and 7.8), in this looking-glass discussion world creating an identity is perceived as unpleasantly attention-seeking.26 On 4chan, although anonymity lowers inhibitions and allows more flaming and antisocial behavior, it can also foster stronger communal identity and encourage equity in participation (Bernstein et al. 2011).
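The text does not detail how tripcodes work. Roughly, the poster appends a secret password to the name field, and the server derives a short public code from that secret with a one-way hash; anyone can reuse the bare name, but only the holder of the secret can reproduce the code. A minimal sketch of the idea (illustrative only—4chan’s actual algorithm is based on the Unix crypt function, not SHA-256):

```python
import hashlib

def tripcode(name_field: str) -> str:
    """Derive a display name plus public code from 'name#secret'.

    Illustrative sketch: real tripcodes use a DES-based crypt(3)
    hash with a salt drawn from the password; SHA-256 is used here
    only for portability.
    """
    if "#" not in name_field:
        return name_field  # no secret given: a plain, forgeable name
    name, secret = name_field.split("#", 1)
    digest = hashlib.sha256(secret.encode("utf-8")).hexdigest()
    return f"{name} !{digest[:10]}"  # short public code from the secret

# Anyone can post as "Anonymous", but only someone who knows the
# secret can reproduce the code after the "!".
print(tripcode("Anonymous#hunter2"))
```

The design choice matters: identity on 4chan is opt-in and rests on a shared secret rather than an account, so it can be verified without any registration system.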

Though anonymous, 4chan still has boundaries. To keep out automated spam, you must solve a CAPTCHA for every posting.27 There are moderators and “janitors,” hand-picked from the participant community; they are also anonymous, and forbidden to reveal their role on the boards. The moderators can delete postings and ban users; though participation is anonymous, users who break the rules have their IP address banned from the site for an arbitrary time. Rules vary from board to board—some are meant to be “work-safe,” while /b/ is a (nearly) anything-goes board—but posting material that is illegal in the United States, being under eighteen, posting personal information, or calls to invasion will still get you banned from it.28 Underlying the free-wheeling chaos of 4chan is a cryptic and sometimes arbitrary autocracy. For all boards except /b/, Rule 8 is “Complaining about 4chan (its policies, moderation, etc.) on the imageboards can result in post deletion and banishment.” Many rules do not apply in /b/—and its own first rule is “ZOMG NONE!!!” (i.e., “Oh My God, None!!!”), with a note stating that this anarchic freedom applies to the moderators also.


Figure 7.7

Correctly formed triforce.

Figure 7.8

Incorrect triforce. One way of displaying status on 4chan is through the display of esoteric knowledge, such as knowing how to display a “triforce,” a triangle composed of three smaller triangles. If you copy and paste one that someone else has posted, it will appear incorrectly (see figure 7.8); only if you know the correct ASCII code will yours appear properly formed (see figure 7.7). A poster may challenge another to make this picture, and if he cannot, he is derided as a newcomer (cant-triforce).
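Why copy-and-paste fails is itself part of the in-joke. A plausible sketch of the mechanics, under the common account that the correct triforce indents its top triangle with non-breaking spaces (U+00A0) while pasted copies end up with ordinary spaces, which the board’s HTML rendering collapses:

```python
import re

NBSP = "\u00a0"  # non-breaking space (reportedly typed as Alt+255)

# A well-formed triforce indents the top triangle with NBSPs:
correct = f"{NBSP}{NBSP}\u25b2\n\u25b2{NBSP}\u25b2"

def render(post: str) -> str:
    """Mimic a renderer that collapses runs of ordinary spaces
    (as HTML does) but leaves non-breaking spaces intact."""
    return re.sub(r" {2,}", " ", post)

# Copy-and-paste often converts NBSP into a plain space, so the
# newcomer's repost loses its indentation when rendered:
pasted = correct.replace(NBSP, " ")
aligned = render(correct)   # triangles stay aligned
broken = render(pasted)     # top triangle slides left
```

The exact behavior of the site has varied over time; the point of the sketch is only that the boundary is enforced by an invisible character—knowledge that cannot be acquired by imitation alone.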

For the first-time visitor, /b/ can be nearly unintelligible, filled with in-jokes, obscure references, and games whose goals and rules seem impenetrable. Such jokes and rules are emergent, bubbling up from the community; like the latest fashion, once too many people are familiar with them, they cease to be of use to the insiders. This is a knowledge boundary, separating the insiders from newcomers and curious observers. Status is signaled through complex codes and one needs to participate for a while to gain the knowledge—the communicative competence—required to compose an insider’s well-received contribution.

Although reputation does not exist in this anonymous space, the dynamics of up-voting interesting posts still do. The board is designed so that the threads with the newest content are displayed at the top of the screen. Only a limited number of threads are kept on the board: once a thread slips below a certain position it is pruned, permanently deleted. This ephemerality—a rarity online, where permanent archiving is the norm—helps hone the community’s ability to create imaginative and contagious memes. Given how quickly new postings appear on /b/, participants must keep actively commenting on a thread they like in order to keep it alive; a comment that is itself witty or provocative will help inspire more people to keep it going.29
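The bump-and-prune mechanism described above can be modeled as a bounded list: new threads enter at the top, a reply moves its thread back to the top, and whatever falls below the limit is deleted. A toy sketch (the actual limits and rules on 4chan differ and have changed over time):

```python
from dataclasses import dataclass, field

@dataclass
class Thread:
    title: str
    replies: list = field(default_factory=list)

class Board:
    """Toy model of /b/-style thread ordering."""

    def __init__(self, max_threads: int = 5):
        self.max_threads = max_threads
        self.threads: list[Thread] = []  # index 0 = top of the board

    def post(self, title: str) -> Thread:
        t = Thread(title)
        self.threads.insert(0, t)        # new threads start at the top
        self._prune()
        return t

    def reply(self, thread: Thread, text: str) -> None:
        # Replying "bumps" the thread back to the top of the board.
        thread.replies.append(text)
        self.threads.remove(thread)
        self.threads.insert(0, thread)

    def _prune(self) -> None:
        # Threads pushed below the limit are permanently deleted.
        del self.threads[self.max_threads:]
```

The consequence is the selection pressure described in the text: only threads that keep attracting replies survive, so the surviving content skews toward whatever the crowd finds worth bumping.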

Its ephemerality also gives this written space some similarity to oral narratives, with their memory-aiding repeated formulaic themes.30 Because written texts do not rely on memory, they can discuss complex, abstract themes; their characters and details can be more subtle and mundane than the archetypes of oral narrative. They can be rewritten and perfected. Oral narratives, by contrast, are composed of regular and memorable sequences that are repeated in various forms. They are often redundant, ensuring that audience members who missed hearing a section will still get all the information. The oral narrative that is not repeated disappears quickly into the past.

Walter Ong, an authority on the historical transition from oral to written culture, notes that oral cultures are often combative, engaging in stylized verbal hostility:

Proverbs and riddles are not used simply to store knowledge but to engage others in verbal and intellectual combat: utterance of one proverb or riddle challenges hearers to top it with a more apposite or a contradictory one. Bragging about one’s own prowess and/or verbal tongue-lashings of an opponent figure regularly in encounters between characters in narrative. (Ong 2002, 43)

At times, /b/ has the feel of a crowd, a faceless, anonymous mob. Ephemerality gives it an insistence on speedy action; act now or it will be too late. The transgression of viewing its hostile and shocking images gives the adrenaline rush that comes from taking part in a forbidden activity.

Though Stack Overflow and 4chan/b/ could hardly be more different, they share some key structures. They both use knowledge boundaries (CAPTCHAs) as protection against spam; they both allow anonymous postings (though they are the norm in 4chan and discouraged elsewhere); they both rely on moderation to eliminate poor-quality postings, and draw from user response to highlight good ones. To varying degrees, these communities police themselves. Participants who have been deemed by other participants to be exemplary contributors gain additional abilities to rate content and establish other users’ reputations.

In their very different ways, these systems have all implemented the infrastructures that Ostrom observed were characteristic of successfully managed commons. They have boundaries and rules about proper usage, and the community itself has the ability to enforce those rules. Their rules are “adapted to local needs and conditions,” resulting in the generation of knowledge on the Stack Exchange sites, the fast exchange of news on Slashdot, and the creation of a bizarre and trenchant culture on 4chan.

It will be interesting to see how these sites will evolve in the ever-changing context of the Internet (had we studied Usenet in 1990, we would have made a very different assessment about its long-term viability). 4chan, for example, is financially unsupportable. Although it is one of the most heavily trafficked sites on the Web, it takes in very little advertising revenue; few companies want to be associated with its offensive content. The Stack Exchange model seems to work very well for technical areas, but as it has expanded into more social topics, the results are uneven. Some topics, such as parenting or personal productivity, lend themselves to a more social format than this model’s rigid voting and removal of questions that “will likely solicit opinion, debate, arguments, polling, or extended discussion.”

None of these sites requires participants to use their real-world identity. Slashdot uses permanent pseudonyms; /b/ discourages even pseudonymous identities (but then again it embraces much of the behavior that real-world identity is lauded for hindering). Even the most serious of the Stack Exchange sites do not require real names. MathOverflow’s introductory page says:

Using real names reminds everybody that they are corresponding with real people, and it demonstrates a certain level of personal investment in your MathOverflow identity. If you use a pseudonym and you get into some kind of trouble (e.g. fights in comment threads or spammy-looking posts), the moderators are much less likely to give you the benefit of the doubt.31

At the same time, they recognize that someone might have good reasons to use a pseudonym, ranging from insecurity (is my question so naive that I should be embarrassed asking it?) to confidentiality (my question is about a paper that I am reviewing anonymously).


The “human net” of electronically connected people that fascinated the early ARPANET and Usenet participants goes “beyond being there” by dissolving the spatial and temporal boundaries that limit face-to-face interactions to currently present, nearby people. It also dissolves the boundaries of conversational and organizational scale. Online, we can converse and create with multitudes.

But boundaries are not merely constraining limits, forbidding people from accessing what they want. They are also essential for establishing community. Without boundaries, communication becomes chaotic. The key challenge for online communities is to figure out how to structure the new boundaries—how to make them porous enough so that new people and new ideas can enter, yet impervious against those who would disrupt the community.

It is easy for us to see the need for boundaries in physical spaces, to see how a commons of fish stocks or pasturage can be wasted by excessive use. But it was not obvious, at first, that this would always be true in the online world. Full openness seemed like a good idea: the more people who have access to the public good of knowledge, the better. But online communities are complex ecosystems. Not all contributions to a conversation are of equal value, readers’ time and interest are limited, and there are destructive forces the community must guard against. For the designer of tomorrow’s communities (and thus of their boundaries), the history of Usenet vividly illustrates how boundaries evolve in response to changing circumstances, and what happens when they fail.

The concept of boundaries draws heavily on the metaphor of walls and containers. But online boundaries are made of information—codes one must know, points one must acquire, levels of status one must attain. The challenge for designers is to understand the social impact of these various types of knowledge walls and data gates: how do they influence how people contribute? Who and what do they exclude?

We have looked at several online boundary strategies in this chapter. Identity is often—but not always—the foundation. Identity enables reputation, so a community can welcome or reject someone based on his or her past behavior. This identity may be pseudonymous, but it must be persistent. In general, people act more conservatively the more that is known about them, and the more they risk if their actions are deemed inappropriate. There are also boundary strategies that do not rely on identity, assessing instead the value of the contribution, without necessarily knowing who the contributor is. Here the burden of assessing the material is greater, but the range of contributions is wider (and wilder).

In the physical world, our bodies anchor our identity. We recognize people by their faces; this is the basis of reputation. We hear a person’s words coming from their body: it is easy to associate actions with an individual. We create boundaries that limit where bodies can go, some porous and easily surmounted, such as sidewalks or the rooms within houses, others much stronger, like the prison walls that confine those who transgress society’s laws.

Online, there are no bodies, only information. Identity becomes amorphous and the participants in a discussion often blur into an inchoate form. A person banned for misbehavior reappears in easily assumed new guises (Dibbell 1993). However, we can construct information bodies through accumulated data and interaction history. The design of these bodies shapes how identity manifests in the online world. What are the cues we want to use to recognize each other? What are the boundaries we want to enforce?

In later chapters, we will examine two related issues. Chapter 9, “Constructing Identity,” looks closely at how we know who others are, both as particular individuals and as social types. Both are relevant for understanding social boundaries: individual identity enables reputation, and social identity allows for including or excluding groups based on a few easily assessed cues—something that can be essential for bounding communities, but which is also the basis of harmful prejudice. One of the great early beliefs about the social transformation that the Net would bring was that it would eliminate the boundaries of race and gender; “Constructing Identity” looks at whether this has happened, how desirable it is, and what alternative social framework we may wish to create. In this chapter, we have mostly assumed that the material created in these large conversation spaces is public. Yet not all information is. Chapter 11, “Privacy and Public Space,” looks at the social and technical boundaries that separate private from public space. Here the focus is on how we perceive this boundary, for technology can make it porous and invisible.

