
3. Interfaces Make Meaning


In July 2012, physicists at CERN discovered a subatomic particle they believe to be the long-sought Higgs boson, a key piece in confirming the Standard Model of theoretical physics. “Today we have witnessed a discovery which gives unique insight into our understanding of the universe and the origin of the masses of fundamental particles” (Professor Stefan Söldner-Rembold, Professor of Particle Physics at the University of Manchester, quoted in Davies 2012). The presentation announcing it was witnessed by a standing-room-only audience in Geneva, including many of the world’s top physicists, as well as a large international audience online. Yet many people noted an incongruity: the presentation (see figure 3.1) was written in “Comic Sans,” a cartoonish font that seems more suitable for elementary school bulletins than for groundbreaking discoveries in particle physics (Randall 2012). “CERN’s Higgs presentation just added weight to the theory that Comic Sans is a terrible font,” said popular Twitter user @Hal9000.

Part of what you say is how you say it, and in writing, fonts and color take the place that tone of voice has in speech. These paralinguistic (i.e., nonverbal) elements of communication convey emotion, level of seriousness, and cultural references. Features such as accent and aesthetic choices provide cues about the speaker’s (or writer’s) identity, which the listener (or reader) uses to decide whether to trust and believe the message.

So, why did the physicists at CERN choose Comic Sans for their presentation of such extraordinarily important experimental results? It is possible that although the choice seemed incongruous to viewers who expected the presentation to convey the significance of the event, the physicists had a different message. They were cautious about overstating claims, and the slides were meant to educate and explain the results, with the goal of making the complex mathematics as understandable as possible. Hence the choice of a friendly, school-like font. It is also possible that the inappropriate font and garish colors were the PowerPoint equivalent of the stereotypical physicist’s rumpled clothes and uncombed hair: simply careless design, made by people focused on the content of the slides, with little concern—perhaps even disdain—for appearance.

Figure 3.1

Fabiola Gianotti, slide from the Atlas presentation at CERN, July 4, 2012 (Gianotti 2012).

Here we see how even the mere choice of a font can shape impressions. The design of an online interface involves many such choices. Its words present information, while its fonts, lines, and colors convey mood and provide a setting for the information. As a computational medium, online interfaces have the added complexity of interaction. Interactive interfaces can seem to be sentient; the scale and timing of their reactions can evoke a personality that may be sluggish or alert, businesslike or subtly humorous.

Understanding how an interface’s design shapes the impression it conveys is useful for any application, but it is especially crucial for communication media. Interfaces set the scene for online sociability. The features of a social space not only facilitate discussion, they also set the tone for it, affecting how the users perceive each other and conveying cues about how to act. Representations of people, even if abstract, should portray the individual vividly and evocatively.

Metaphors are one way that interface design creates meaning. We use metaphors to give shape to the inherently and incomprehensibly abstract world of data. Some metaphors are subtle, almost unnoticeable; but they are also pervasive, occurring in all our thinking about nonphysical concepts. When we plot data so that positive values are shown as higher, we are using the metaphor of growth. Other metaphors are deliberate and literal, such as the recycling bin on the computer desktop where we dump unwanted files. Metaphors, whether subtle or conspicuous, shape how we think; in the case of interface design, they control what we can do. They provide legibility by letting us see abstract concepts in concrete terms. But they must be used skillfully, or they will constrain the electronic world unnecessarily to mimicking the physical one.

Another way that interfaces shape meaning is through sensory elements, including color and motion. We humans are physical beings, interacting with the outside world through our senses. In the physical world, we are surrounded by vibrant hues and shades, objects in motion, and responsive, animate beings. The world on today’s computer screen is often comparatively lifeless; or, alternatively, it is excessively bright, with flashy ads and quizzes grabbing at your attention. A sensory interface need not be garish, but should instead contribute to our ability to communicate.

Interfaces also shape meaning through interactivity, the distinguishing feature of online (as opposed to traditional) media. This responsiveness adds a new dimension of expressivity: does it react quickly or slowly, in an expected or surprising way? Different response styles can give the impression of an entity that is alert and accommodating, or one that is shy or sly. Even simple activities, such as what happens when you move the mouse across a page, can be imbued with expression: imagine a page where words grew immense or disappeared when the mouse passed over them, or seemed to be drawn to or repelled by it, as if magnetic. Though not appropriate for business memos, such expressive interactions can help set the tone of a game or social space, or enliven the experience of exploring a social dataset.

This chapter will examine how interfaces make meaning using metaphors, sensory elements, and interactivity. It is but an introduction: interfaces convey meaning in many other ways, including fonts, layout, sound, and the like; and of the topics we do cover, whole books can be (and are) devoted to each one.1

The goal here is not to provide in-depth instruction on how to use these elements in design, but to raise awareness of their use and capabilities. As designers, we are sometimes oblivious to the messages design choices carry, from graphical elements to the fundamental metaphors that shape the interface; as users, too, we are often unaware of their effects, responding to them subconsciously.

Metaphor

But he that dares not grasp the thorn
Should never crave the rose.

—Anne Brontë

The world of information is inherently abstract, and we use metaphors drawn from our everyday physical experience to bring sense and structure to it. Metaphors ground our thinking, allowing us to understand novel and abstract concepts in familiar terms. They take the knowledge and beliefs we have about one thing and let us apply them to make sense of something new and unfamiliar. When a metaphor works well, this framing is enlightening. Comparing a lover to a rose evokes beauty, bright colors, and a sweet scent, yet also impermanence and hidden prickly thorns. This is a complex image, and when it is appropriate, the rose metaphor is a powerful shorthand way of communicating this intricate set of properties. Yet if some of the properties are not relevant—if you do not want to bring up the issue of thorns, for instance—that particular metaphor may not be right. No metaphor is perfect. The art of applying them comes from understanding which are good enough, as when the power of the image overcomes the inconsistencies or even when the inconsistencies are ironic and desirable.

Verbal metaphors help make our language more colorful and expressive. They influence how we think about something, but they do not change the thing itself. Interface metaphors play a more fundamental role in how we experience and interact with the technological world, affecting function as well as feeling. The metaphor that is chosen for an interface shapes how it can be used. When we put computer “files” into “folders,” these metaphoric constructs help us think about the way information is organized in our machine, but they also constrain what we can do with it. Interface metaphors also influence the feel of the experience, the emotional and aesthetic response we have to our interactions with and via the machine. The desktop metaphor evokes office work: secretaries, bosses, quarterly plans, and cubicles. It was developed in the late 1970s and early 1980s, when office work was seen as the primary use for personal computers (Johnson et al. 1989; Perkins, Keller, and Ludolph 1997). The desktop image certainly is appropriate for that setting. However, it is less appropriate when we use the computer as an entertainment center or as the locus of our social life.2 Interface metaphors need to fit both the feel and function of the application.

When the computer is a social medium, its primary purpose is not organizing documents. Instead, it is a machine for playing games, making friends, reading news, and watching movies amid a virtual crowd of other viewers. People use it to keep up with a wide range of acquaintances, to see what others are doing, to participate in discussions, and to present a particular view of themselves to close friends and distant strangers. Making an intuitively usable interface for this world requires making the information accessible and navigable, delineating public areas versus private space, and enabling various channels of communication. At the same time, it needs to capture the feeling of being in a social space, not a filing cabinet.

Metaphorically Thinking

In Metaphors We Live By, George Lakoff and Mark Johnson (1980) show how our abstract thoughts are built metaphorically on more concrete foundations (for example, this sentence uses the metaphor of thought as construction). These cognitive metaphors allow us to use what we know about an easily understood domain in order to comprehend a more abstract one. For example, we sometimes think about money by using the “money as liquid” metaphor: “his assets were frozen”; “they have good cash flow”; “it’s like pouring money down the drain”; “their cash source dried up.” Like water, money flows according to certain rules, and we like it to be plentiful. Metaphors can also build on each other, so that we think of one abstract concept in terms of another. Thus, money can be the basis for understanding time: “I like spending time with you”; “we are wasting time”; “this will save time.”

The choice of metaphor influences our beliefs about abstract things. When we speak about arguments using the metaphor of war (“I lost that argument”; “he shot down all her points”; “you caught me off-guard in that discussion”), the goal is to win, to crush the opponent. Yet there are other metaphors for argument, such as construction (“his argument rests on a weak foundation”) or fabrication (“she wove all the strands of the discussion together”). With these, the goal is to construct a solid and compelling position; such metaphors encompass working cooperatively together.

Metaphors help us understand the abstract, but it is an imperfect understanding—and we may find ourselves relying on other metaphors to figure out what went wrong. We often think about information as if it were a solid object: “I can grasp that idea”; “give me all the info about her”; “throw away that thought”; intellectual property law is the legal formalization of the information-as-object metaphor. Yet laws based on the information-as-object metaphor are difficult to enforce, as stories that “go viral” or “spread like wildfire” vividly illustrate with their metaphoric casting of information as contagion and conflagration.

When we are oblivious of the metaphors we use to make sense of the world, they simply seem like the way things naturally are. For people who think of argument as war, building consensus with others seems foolish; to them this appears to be an inherent feature of arguments, not an effect of their own conceptual framework. Metaphors help us grasp abstract ideas, but they also constrain how we think about them. By becoming more aware of the metaphors we use, we can better understand the assumptions we make about a topic or situation, and we can choose to try other conceptual frameworks, to see what insights may come from a fresh perspective.

Computer Interfaces Are Metaphors

Computer interfaces use metaphoric structures to shape and define the way we think about data, interactions, and computation; without them, it would be very difficult to understand these concepts. In fact, metaphors permeate our computer interactions. We put our email (a postal metaphor) into folders (a physical document metaphor), and we read stories on Web pages (physical documents again, mixed with a spidery biological metaphor).

Much of what you see on a typical computer screen is visual metaphor. There aren’t really “buttons” or “scroll-bars” embedded there, but visual cues that remind you of these objects, and an interface that reacts consistently with that metaphor. A physical button is a familiar object. You know that when you press a button on a machine, something happens; you also know that it doesn’t matter where on the button you put your finger, or for how long you hold it down. The screen button behaves in a similar way.

Because the screen button is a virtual object, the designers could have given it all kinds of behaviors: to respond differently when clicked in different spots, to cause something to increase the longer it is held, to disappear when you press it, or to grow or seem to explode. In general, however, screen buttons are made to act as much as possible like ordinary physical buttons. This makes them legible; we don’t have to think very hard about how we expect them to behave, and they do what we expect.

One can, of course, make exploding or disappearing or otherwise bizarrely acting screen buttons for dramatic effect; viewers perceive these to be strange because they have preexisting expectations of normal behavior that are subverted in this situation. Cognitive metaphors can be (mis)used poetically.

While metaphors can borrow the cultural meaning of the thing that they reference, they do not import its full significance. A jewel-encrusted gold locket in the physical world has real properties of rarity and expense; but while decorating a picture of yourself (or your avatar) with a rendering of such a locket may attest to your aspirations, the picture itself is no more valuable than any other carefully chosen array of pixels. The locket’s meaning, deeply rooted in its physical state, becomes only a “cheap” reference when used online.

Metaphors help us make sense of abstractions, but they also limit what we do with them. Email is an example: sorting the formless stream of messages into folders makes them easier to manage, but it also imposes the limits of physical folders on the more versatile electronic form. Whereas a physical letter can only be in one folder at a time, it is technically possible for electronic messages to be in more than one virtual folder at a time. However, since such multiplexed existence is inconsistent with the folder metaphor, most programs make email conform to paper’s limits in order to maintain consistency. This makes email function less well than it could.3 If you have a folder for email from friends and another for financial information, where should you put the note from a friend with useful investment information? People spend a lot of time trying to decide where to file a piece of email, and spend even more time in retrieval, searching through multiple plausible folders to find the one they chose some time ago (Mackay 1988a; Venolia et al. 2001; Whittaker and Sidner 1996).

We could build an interface that allows for filing a single email simultaneously within multiple folders, but doing so within the existing folder metaphor would be confusing. The users would not know whether a single email is somehow visible in many places at once or if there were multiple copies of the email. They would then be unable to predict whether deleting the email in one folder would delete all of them (as would happen if it were a single email) or if the other copies would remain (as would happen if there were multiple copies). This confusion arises from breaking the folder-as-container metaphor.

Instead, we need to take a fresh look at which metaphor is best suited to the problem. Rethinking the purpose of the folder makes it clear that what we really want is something else entirely, something like a label, many of which can apply to a single piece of email, rather than a container, only one of which can hold that mail.4 With this interface metaphor, users no longer need to decide if a note with stock tips from a colleague who is also a friend goes under “work,” “friends,” or “finance.” They can label it with all the relevant tags and would intuitively understand that it is a single item with multiple labels.5

Beyond understanding how to apply metaphors, the art of interface design requires knowing how to stretch them so that they are more useful. For instance, we can give electronic labels capabilities beyond those of ordinary physical ones, such as instantly searching for every message featuring a combination of one or more labels; although this cannot be done instantaneously in real life, it is not outside our established model of what it is possible to do with labels. It bends the metaphor but does not break it.
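As a rough illustration of the label model (the field and function names here are assumptions, not any particular mail program’s design), a few lines of Python are enough to show how one message can carry several labels and be found through any combination of them:

```python
# A minimal sketch of the label model; names are illustrative assumptions,
# not any particular mail program's design.
from dataclasses import dataclass, field

@dataclass
class Message:
    subject: str
    labels: set[str] = field(default_factory=set)  # many labels per message

inbox = [
    Message("Stock tips", {"friends", "finance"}),
    Message("Quarterly report", {"work", "finance"}),
    Message("Beach weekend?", {"friends"}),
]

def with_labels(messages, *wanted):
    """Return every message carrying all of the requested labels."""
    wanted = set(wanted)
    return [m for m in messages if wanted <= m.labels]

# One message, several labels: no need to choose a single folder.
print([m.subject for m in with_labels(inbox, "friends", "finance")])
# -> ['Stock tips']
```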

Interface metaphors shape people’s understanding of online social situations, including their notion of how private or public is the space they are in. In the early days of “chat-rooms,” many novice users were coming online. The “room” metaphor was a useful way to help people understand, with little visual assistance, that multiple separate conversational threads were available, that one could participate in only one at a time, and that only those who chose a particular one would be privy to it.

The downside is that features with no counterpart in physical rooms break the metaphor. In a physical building with different rooms, walls block sound. Thus, calling a conversation interface a “chat-room” leads people to expect that others in that shared space can hear the discussions in it, but those outside cannot. This is good so long as the virtual experience stays close to the physical model. Yet, for example, keeping a publicly available archive of the conversation violates the privacy expectations set up by the room metaphor. The problem is not the existence of the archive itself, which may serve a useful purpose, but the mismatch between the permanence and openness of the discussion archive and the expectation of privacy established by the room metaphor (in chapter 11, “Privacy and Public Space,” we will look more closely at how people understand privacy online).

Concrete metaphors make interfaces legible, but excessive use of them constricts functionality. Ted Nelson, one of the pioneers of computer interface design, noted: “We are using the computer as a paper simulator, which is like tearing the wings off a 747 and driving it as a bus on the highway” (quoted in Freiberger and Swaine 2000). Since metaphors limit as well as empower, it behooves the designer to choose them carefully. The challenge is to design interfaces that go beyond copying the everyday physical world, yet remain intuitively comprehensible.

More abstract metaphors are often more versatile. Instead of using a room, which is a physical structure with many well-defined properties, one might instead depict conversations occurring in different generic “containers,” which could have different properties; for example, one could be anonymous, another could have archives, and another could be publicly broadcast. Visual cues, such as text labels, transparency or murkiness, color, and borders, could help users understand the properties of the different containers (Harry and Donath 2008). We are still using metaphors here, but they are less concrete. In general, using metaphors that are abstract and general, yet still convey the necessary meaning, provides the most flexibility to make interfaces that go beyond being there.
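A minimal sketch of this container idea might look like the following; the property names and the cue mappings are assumptions made for illustration, not details from the cited work:

```python
# A rough sketch of the "generic container" idea; property names and cue
# mappings are assumptions for illustration, not taken from the cited work.
from dataclasses import dataclass

@dataclass
class Container:
    name: str
    anonymous: bool = False   # are participants unidentified?
    archived: bool = False    # is the conversation kept after it ends?
    broadcast: bool = False   # is it visible outside the container?

def visual_cues(c: Container) -> dict:
    """Derive the kinds of cues the text suggests: transparency, borders, labels."""
    return {
        "opacity": 0.4 if c.broadcast else 1.0,          # translucent walls: outsiders can see in
        "border": "dashed" if c.anonymous else "solid",  # dashed: identities are not fixed
        "label": c.name + (" (archived)" if c.archived else ""),
    }

print(visual_cues(Container("support circle", anonymous=True)))
print(visual_cues(Container("town hall", archived=True, broadcast=True)))
```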

“Information Physics”

Metaphors such as the computer folder are deliberate and obvious, but interfaces can be more subtle and conceptual yet still legible. Ben Bederson and Jim Hollan used the term “information physics” to describe interfaces that were consistent and believable yet did not rely on high-level metaphors such as desktops and files (Bederson and Hollan 1994). This “physics” is still metaphorical, but it is much more abstract and ultimately flexible. Growing, shrinking, coming together, pushing apart—when an interface implements these behaviors consistently, the underlying model will seem invisible to the user: it will simply seem to work and make sense. Many familiar designs make use of such metaphors. For example, we are accustomed to things growing upward: saplings and children are small, whereas grown trees and people are tall. Thus, though it is possible for the axes in graphs to reach in any arbitrary direction, it is intuitive to put the origin at the bottom of the screen and have amounts rise as they get bigger.

The most fundamental and ubiquitous metaphors are spatial. They derive from our basic relationships with things around us: up and down, near and far, in and out, in front and behind. We conceive of time using spatial metaphors: “the holidays are approaching”; “I’m glad that week is behind us.” We structure our own activities spatially: “I’m going to go ahead with that plan”; “I’m falling behind on my work.” Spatial metaphors are inescapable and they ground our thinking about abstract concepts (indeed, being “grounded” is itself a spatial metaphor) and serve as building blocks for creating more complex concepts (Harnad 1990; Lakoff and Johnson 1980).6

Space is abstract but not necessarily neutral; we ascribe values to space. When we make a stock market graph, we put the bigger numbers up and the smaller ones down. This could be a direct representation of how things accumulate in piles: the more there is, the higher the pile. But height also connotes deeper meanings. Emotionally, up is happy (upbeat, rising spirits), whereas down is sad (low energy, depressed). When we feel happy, we stand taller; when we’re sad, we slouch and look at the ground. I would rather feel upbeat than down on my luck; I would like to have the upper hand.

Our metaphoric interpretation of “up” also embodies ethics. Lakoff and Johnson note that we think of virtue as up, as in “an upstanding citizen” versus being “down and out.” People are quicker to comprehend words representing power relations (e.g., professor–student) when the powerful one is shown on top (Schubert 2005). These metaphoric associations shade how we interpret the vertical location of data.

This is fine when up is good, strong, or powerful, but what about when more is bad? A chart in which everything is rising looks encouraging, even if the measured quantity is one we would prefer to see go down. For example, a chart showing how gas prices are rising has that optimistic upward slope; to make it visibly convey that this is an unfortunate trend one could instead graph an equivalent downward statistic, such as how much gas $10 will buy. Being aware of metaphoric meaning helps us ensure that the intuitive impression a graph makes matches the meaning of the data.
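A small worked example, with invented prices, shows how simple the inversion is: a rising dollars-per-gallon series becomes a falling gallons-per-ten-dollars series, and the plotted line now slopes downward as conditions worsen.

```python
# A worked example of inverting a "more is bad" statistic so the plotted line
# slopes downward as conditions worsen; the prices are made up for illustration.
prices = [3.00, 3.50, 4.25, 5.00]            # dollars per gallon, rising over time
gallons_per_ten = [10 / p for p in prices]   # the equivalent downward statistic

for p, g in zip(prices, gallons_per_ten):
    print(f"${p:.2f}/gal -> {g:.2f} gallons per $10")
# $3.00/gal -> 3.33 ... $5.00/gal -> 2.00: the same trend, now visibly a decline
```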

Metaphors have the power even to change how we perceive the physical world around us. For example, the spatial orientation on maps affects people’s understanding of distance. Going uphill is harder than going downhill. Maps depict north as “up.” Studies show that people perceive traveling north (which is “up” on a map, but not physically) to be harder than going south, even for short distances (Nelson and Simmons 2009). If metaphoric constructs can reshape our perception of the solid physical world around us, imagine how strongly they can influence our understanding of the abstract and inherently formless virtual world.

Spatial metaphors are so basic and common that often they are nearly invisible. Yet, like the poetic metaphor of the rose with its sweet scent and prickly thorns, spatial metaphors can encompass a complex mix of meanings that designers need to be cognizant of in order to create coherent and intuitive interfaces.

Let’s look at the seemingly simple task of making a circle on the screen larger. The circle could become larger because the circle itself is growing (scaling), because the circle and viewer are moving closer to each other (translation), or because the viewer’s eye, acting like a camera’s lens, is changing its focal length (zooming). The result of each of these transformations is a larger circle, but each changes the scene in different ways.

If the object itself scales, then it will take up more space, while other things around it stay their original size. A pen that had originally been too thick to write inside the circle would then be proportionately smaller and able to do so. If the viewer moves closer to the object (translation), he will be able to see more detail on the circle than had been previously visible. His relationship to all other objects will also be different: some objects that had been in front of him might now be behind him, out of sight. Scaling and translation involve actual changes to the things in the scene. Zooming, on the other hand, is a change in perception only; things just appear larger to the viewer, and objects that had been in the periphery of his vision would now be out of sight.7
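The difference can be sketched with an idealized pinhole projection (an assumption made for illustration), in which the circle’s apparent size is the focal length times its radius divided by its distance; each transformation enlarges the circle by changing a different term:

```python
# A sketch using an idealized pinhole projection, in which the circle's
# apparent size is focal_length * radius / distance; the numbers are arbitrary.

def apparent_size(radius, distance, focal_length):
    return focal_length * radius / distance

r, d, f = 1.0, 10.0, 2.0
base       = apparent_size(r, d, f)       # 0.2
scaled     = apparent_size(r * 2, d, f)   # 0.4 -- the circle itself grows
translated = apparent_size(r, d / 2, f)   # 0.4 -- viewer and circle move closer
zoomed     = apparent_size(r, d, f * 2)   # 0.4 -- only the "lens" changes

print(base, scaled, translated, zoomed)
# All three double the apparent size, but only scaling changes the circle,
# only translation changes what lies in front of or behind the viewer,
# and only zooming leaves the scene itself untouched.
```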

How is making circles grow and shrink part of designing sociable media? Later in this book, we will encounter problems such as how to explore the dense interconnections of social networks or interact with the vast archives of a large-scale, long-term conversation. To do so, we may want some parts of the interface to expand and reveal more information while other parts are still visible but less detailed. The designer needs to be aware of the different ways of doing this and their significance. We might want to scale some participants in a forum to be larger than others, because they have been active longer or more prolific. We might use translation to bring them closer, because the viewer is engaged in a direct discussion with them. We might use zooming to get a detailed view in order to be a neutral observer, looking more closely but not changing the underlying representation.

Turning Time into Space

In our everyday, unmediated existence, we live in the present. Our words and the activity around us are ephemeral, disappearing into the past as soon as they occur. Media, on the other hand, accumulate over time, allowing us to contemplate and analyze them at leisure. From the time that the first primitive people drew oxen on the walls of caves, we have been taming time by recording the events of a moment. Books record narratives that can span centuries, and photographs freeze a singular instant.

Online, there are vast stores of history: conversation archives, Web browsing records, and accumulated years of status updates. Yet in their raw form, these are of limited use. We seldom want to look at history at the rate that it occurred. Instead, we want to compress time, to see at a glance patterns that unfolded over days, months, or years (see figure 3.2).

Figure 3.2

Charles Minard, Carte figurative des pertes successives en hommes de l’Armée Française dans la campagne de Russie 1812–1813 (1869). Quintessential depiction of action over time. This graph shows Napoleon’s losses in Russia from 1812 to 1813. The thickness of the band depicts the size of the army as they marched to Moscow (beige) and retreated (black) (Tufte 1982).

A key problem—and one that we will return to throughout this book—is how to represent history, how to show time as space. We cannot think of time without using metaphors (Lakoff and Johnson 1999). Mostly, we think of it as motion in space. We think of time as linear: the past stretches out behind us, the future is before us, and we are at an ever-changing present. We think of time as being extended in space (“the meeting took a long time”); we think of it as having boundaries (“he did it in the allotted time”).

We may view ourselves as moving along this timeline (“we’ve passed the deadline”) or we may perceive ourselves as static while the timeline is shifting (“the weekend flew by”). This makes phrases such as “let’s move the meeting forward a week” ambiguous. If I think of myself as moving through time, moving a meeting ahead pushes it further into the future; if I think of time as coming toward me, moving it ahead brings it closer, that is, less far into the future (Boroditsky, Ramscar, and Frank 2001).8

There are visualizations of time in nature (see figure 3.3). One of our most familiar representations of time, the (analog) clock, comes from the pattern a stick’s shadow traces as the sun makes its daily trip across the sky. Tree rings depict local conditions over the centuries. The accumulation of layers in the geological record shows the passage of millennia. These are natural metaphors, and when the data fit, they can inspire intuitive and beautiful interfaces.

Figure 3.3

Zuni Douglas fir, © Henri D. Grissino-Mayer, The University of Tennessee–Knoxville. Tree rings can be the inspiration for visualizing history. They show seasonal changes, so are appropriate for situations where there are variations in a repeating temporal cycle. They are directional, showing greater growth in one area or scars in another. What significance might you apply to different compass points of the ring?

On the eve of their move to a new building, the Institute of Contemporary Art in Boston commissioned the Sociable Media Group to create an artwork to commemorate their original gallery space. Inspired by the geological accumulation visible in a canyon’s wall, we designed an installation, Artifacts of the Presence Era, that built a growing wall of images of gallery activity (Viégas, Perry, Howe, and Donath 2004). Visitors were photographed as they entered the gallery, and irregular slices of these images, algorithmically chosen according to the sound and activity level in the gallery (analogous to wind and rain shaping deposits in the natural world), became the new layers of video sediment (see figure 3.4). These layers told a story of past events; they revealed long-term patterns—the rhythm of night and day, periods of great activity or empty silence—while retaining occasionally serendipitous but often arbitrary and mundane samples of the passage of life. In addition, the growing accumulation of events weighed down and compressed the distant past. This visible distortion referred both to metamorphic rocks in the geological metaphor and to the distortion of the distant past in human memory.

Figure 3.4

Judith Donath, Fernanda Viégas, Ethan Perry, and Ethan Howe, Artifacts of the Presence Era (2004).
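The sketch below is not the installation’s code; it is only a rough illustration of the general sediment idea, with assumed parameters: each interval deposits a layer sized by the activity level, and the layers beneath are squeezed a little further each time.

```python
# Not the installation's code: a rough sketch of the sediment idea under
# assumed parameters. Each interval deposits a layer sized by activity, and
# the layers beneath are compressed a little further each time.
import random

layers = []  # oldest first; each entry is a layer thickness in pixels

def deposit(activity, base_thickness=20.0, compression=0.98):
    for i in range(len(layers)):
        layers[i] *= compression                       # the past compacts under new weight
    layers.append(base_thickness * (0.5 + activity))   # activity in [0, 1]

for hour in range(24):
    deposit(activity=random.random())

# Older layers have been squeezed repeatedly; the newest keep their full size.
print([round(t, 1) for t in layers])
```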

A still image can encompass activity occurring over a period of time or capture an instant. In a single panel, ancient reliefs told tales of complex battles (see figure 3.5) and medieval altarpieces depicted people and events from disparate places and times (see figure 8.1). It was not until the Renaissance, with its emerging scientific mindset, that reproducing the experience of seeing—to depict what something looked like at a given moment—became the goal of painting. Today, both approaches are common. At one extreme, stroboscopic photography shows us events too fleeting for our eyes to perceive, reminding us that what we do see most of the time is a mental construct, a canonical view of a world in constant motion.9 At the other, graphs that show data such as housing prices, birthrates, or the Earth’s temperature depict patterns that occurred over weeks, decades, or eons in a single image.

In addition to representing the passage of time, design can help us manage time. How we talk about time and coordinate with others profoundly affects social organization and the development of industry, travel, and worldwide communication. Until the mid-nineteenth century, most interactions took place with neighbors, and local events—a church bell, the rising of the moon over a landmark—could coordinate action. Even the most industrialized cities still ran on local solar time (mechanical clocks were set by the sun’s meridian), and time varied significantly from one neighboring town to another. With the advent of trains, telephones, and other transportation and communication technologies, interactions occurred over greater distances—and timekeeping needed to be standardized and coordinated at a global scale.10

Figure 3.5

The Battle of Tullîz. Assyrian. Drawn by Boudier after original in British Museum (Maspero 2005).

Today we take for granted the ability to coordinate time with distant people. We can schedule a conference call for a specific time on a particular date, and all the participants will share an understanding of exactly when the call is to take place. Yet meshing local and global time still presents challenges. Time has social meaning, and scheduling involves negotiating among different participants’ customs and convenience. A group may decide to hold their weekly conference call at 3:00 p.m. Eastern time; for the participant in Japan, this is an inconvenient 4:00 a.m. Designs that bring people from distant places together online may find it useful to establish a common time, or to highlight awareness of participants’ local times.
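Such coordination is easy to sketch with Python’s standard zoneinfo module; the date chosen here is illustrative.

```python
# Coordinating the call from the example above with Python's standard
# zoneinfo module (available since Python 3.9); the date is illustrative.
from datetime import datetime
from zoneinfo import ZoneInfo

call = datetime(2024, 7, 10, 15, 0, tzinfo=ZoneInfo("America/New_York"))  # 3:00 p.m. Eastern

for zone in ("America/New_York", "Europe/London", "Asia/Tokyo"):
    print(zone, call.astimezone(ZoneInfo(zone)).strftime("%a %H:%M"))
# Asia/Tokyo prints Thu 04:00 -- the same moment, but the next morning there.
```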

Time unfolds in a series of rhythms: there are days and nights, the seasons, the year. Most days we sleep, eat breakfast, make use of some kind of transportation, go in and out of a door. Time is marked by events and anomalies. You get older, the mountains get older, the bread on the counter gets older. A tree grows, a species evolves. Depicting time and the accumulation of history requires distinguishing among repeated cycles, progressions, accumulations, and discrete events, and highlighting those that trigger a change in normal growth and cycles. Depicting time is an intrinsic part of social media design, whether facilitating communication across time zones, showing the temporal patterns in a conversation, or making visible the persistence of information into the unknowable future.

Sensory Design

We are sensory creatures, living in a world with colors, movement, sounds, tastes, and smells. In our face-to-face social world, we are very aware of how other people look, the sound of their voices, and the smell of their perfume. We enjoy sensory experiences with other people; we go out to dinner or for a walk on the beach. We may participate in rallies with colorful posters and anthemic music, cheer with the crowd at a baseball game, or talk quietly with a friend, listening not only to her words but to the tone of her voice. When we are with others, both the people and the surroundings engage our senses and shape the meaning of our experience.

Our surroundings also provide cues about the sort of social situation we are in. A dark smoky bar and a sunny health-food restaurant are conducive to different conversations; we relate differently to people dressed in business suits versus bathing suits. Social visualizations can provide these sorts of contextual cues for online encounters. Yet many online social interfaces have the warmth and sensuality of a financial report. They may have some decorative elements, but these tend to serve as background graphics that are not incorporated into the interaction space. Design aesthetics should be a fundamental part of conveying meaning, for the style and appearance of a setting shapes our impression of the people in it.

Color

Yellow, if steadily gazed at in any geometrical form, has a disturbing influence and reveals in the color an insistent, aggressive character. … Blue is the typical heavenly color. The ultimate feeling it creates is rest. When it sinks almost to black, it echoes a grief that is hardly human.

—Wassily Kandinsky, Concerning the Spiritual in Art (2009)

Color is everywhere we look. Ordinary in its ubiquity, it has also puzzled and fascinated philosophers, scientists, and artists throughout history. What is the relationship of one color to another? How can we systematically understand how two colors combine to produce a third? Why do they appear different in different lights, and do they appear different to different people (Land 1977; Sloane 1989)?

Colors have had symbolic significance since prehistoric times, perhaps the earliest example of this being the use of red ochre in burials 92,000 years ago during the Middle Paleolithic era (Hovers et al. 2003). In American culture today, red can stand for attention and danger (stop signs, alarms, and fire trucks), for sexuality (red lipstick, “red-hot mama”), and for allegiance to particular teams (Republicans, the Boston Red Sox). We associate blue with calmness, the cold, and Democrats; green with the environment, safety, and Ireland. Colors derive these meanings from the biology of the human visual system, from our experience of nature, and from cultural use. The meanings that derive from our biology are universal, whereas those that arise from cultural use vary greatly from place to place and time to time.

Our perception of color is a function of both the spectral quality of light—the color that is “out there,” its wavelength measurable by a spectrometer—and the response of our visual system to these different wavelengths. We notice reds and yellows much more than blues because our visual anatomy causes us to see them more vividly: we have numerous cones in the fovea, our central vision, that are receptive to reds, yellows, and greens, while the few blue-receiving cones are more peripheral.11

Although any account of why our vision evolved as it did is necessarily speculative, many scientists believe that primate color vision evolved at least in part because it was helpful in foraging for brightly colored fruit and/or fresh young leaves. Once primate vision evolved to perceive these colors, color-based signals could then evolve, such as the red tinge that is a sexual signal in baboons and chimpanzees. These biologically established correlations would subsequently be the basis for the human cultural evolution of symbolic color meaning, such as the use of red lipstick and blush (Surridge, Osorio, and Mundy 2003).

The biology of our vision and our natural surroundings intertwine in their shaping of our reactions to colors. This helps us understand why a red mark is so attention grabbing, whereas a blue one is not. Red stands out: our visual system is highly attuned to it. The associations we have with red are intense and varied. It is the color of many fruits and berries (which may be the genesis of our evolved ability to see it so well) and thus associated with celebration and plenty, but also, in a culture that is suspicious of bodily pleasure, with temptation. Red is the color of blood, associating it with danger. Red cosmetics and body paint, symbolizing sexual availability and power, decorate the faces and hands of people in cultures ranging from hunter-gatherers to contemporary jet-setters. Blue, on the other hand, is the color of the sky and ocean. We see them, but we need not pay the same attention to these distant backgrounds as to our activities. Blue is a calming color—but on the face or body it evokes cold and illness.

Although biological and environmentally based color associations are universal, there are significant differences between cultures in the meanings they ascribe to different hues and shades. White symbolizes purity in American and many European cultures, but is associated with death in Eastern ones. We can see natural roots for both: a light object shows stains easily, so one that is all white must be pure and clean; our hair turns white in old age and white snow blankets the world in winter, the season of death and hunger. The symbolic interpretation of a color depends on its metaphoric context. Yellow hair turns white as it ages (white as symbol of death), but white paper yellows as it ages.12

Figures 3.6, 3.7

Adjacent colors change the way we see a color. The centers of these squares are an identical shade of gray; the centers of the blotches are identical purples. (Adapted from Albers 1975.)

Colors interact with each other. The way we see a color changes as the colors around it change. Although we cannot distinguish between two very similar colors when viewed separately, when they are adjacent to each other not only do we see the difference between them, but we perceive a line drawn between them. The interaction among colors also changes their emotional impact. Purple can seem neutral against a gray background, but can look harsh and glaring in a field of yellow and red. Every color that is added to a composition changes its overall feel (see figures 3.6, 3.7).

A common error in visualization design is to use hue to represent a sequence. The visible spectrum and the computer’s representation of color treat hue as a linear series of values, but it is not so perceptually.13 Rather, we see the hues as different categories, with red, blue, yellow, and green as the major ones and then branching out to orange, purple, and cyan. Hue thus is best at demarcating a small number of categories. For representing a sequence, using a range of lightness values is better, as is a progression along the saturation/brightness scale from black or white to a solid color. Saturation can also provide an intuitive depiction of activity: the brightness of and contrast among richly saturated colors looks lively, whereas dull, gray colors appear inactive.
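The distinction can be illustrated with a short sketch using only the standard library; the particular hue and lightness values are arbitrary choices:

```python
# A small illustration of the distinction drawn above, using only the
# standard library; the hue and lightness values are arbitrary choices.
import colorsys

def sequential(hue, steps):
    """One hue, lightness running from light to dark: good for ordered values."""
    return [colorsys.hls_to_rgb(hue, 0.9 - 0.7 * i / (steps - 1), 0.8)
            for i in range(steps)]

def categorical(n):
    """A few well-separated hues at fixed lightness: good for distinct categories."""
    return [colorsys.hls_to_rgb(i / n, 0.5, 0.8) for i in range(n)]

blues = sequential(hue=0.6, steps=5)
kinds = categorical(4)
print([tuple(round(c, 2) for c in rgb) for rgb in blues])
print([tuple(round(c, 2) for c in rgb) for rgb in kinds])
```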

In designing an interface, legibility is often the primary concern, especially with text and charts. Black text on a light background is legible; bright orange text on a bright blue background is not. To achieve greatest clarity and readability, colors should be muted and information presented in a dark color against a light background. Bright colors can highlight key points, but are not good for long blocks of text. In general, high luminance (brightness) contrast between text and background makes things more legible, and low color saturation is good for long blocks of text (Hall and Hanna 2004; Jacobson and Bender 1996; MacDonald 1999; Tufte 1990).
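One widely used way to quantify luminance contrast is the WCAG 2 contrast ratio; the sketch below implements that standard formula, which is a general accessibility measure rather than one proposed in this chapter.

```python
# The WCAG 2 contrast ratio: a standard accessibility measure of luminance
# contrast between two colors, not a formula proposed in this chapter.

def relative_luminance(rgb):
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))    # black on white: 21.0
print(round(contrast_ratio((255, 140, 0), (0, 120, 255)), 1))  # bright orange on bright blue: far lower
```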

Yet legibility is not always the primary concern. Sociable media design is as much about creating the feel of a place as it is about conveying information. Ensuring that a design is legible is important, but creating the right experience is also essential. Sometimes this means minimizing the use of color to remain neutral. Or, a site may choose to be less legible but more edgy if that better conveys its message. Breaking the rules of legibility should be done only for a reason.

Motion

Motion is the strongest visual appeal to attention. … It is understandable that a strong and automatic response to motion should have developed in animal and man. Motion implies a change in the conditions of the environment and change may require reaction. It may mean the approach of danger, the appearance of a friend or of desirable prey.

—Rudolf Arnheim, Art and Visual Perception (1974)

Figure 3.8

A praxinoscope, 1879. Experiments with “persistence of vision”-based technologies, the precursors to film, were increasingly common in the nineteenth century. The zoetrope, originally invented in China (ca. AD 180), and its refined version, the praxinoscope (invented 1877), showed a sequence of images on a spinning cylinder (Needham and Wang 1972; Turner 1983).

Artists have been using color since the days of cave painting, but the ability to use motion easily as a design element is much more recent.

Motion in the interface can grab attention, alerting you to some event that you must attend to immediately. This is helpful in some cases: a message you have been waiting for, a caller you had hoped to hear from, an emergency weather bulletin, a notice that you need to leave now for an important meeting, or a flight update. Yet the attention-grabbing flashes of animated advertisements make many sites clamorously distracting. When you are attempting to concentrate, a constant stream of popping alerts is annoying and unhelpful.

Motion can also be more subtle. Animation on the desktop can make the “objects” seem more real, though this stylistic realism need not conform to physical reality. For example, clicking on a button at the bottom of the screen opens a file. As this occurs, a very quick animation shows the “file” expanding out of the button. We do not notice this animation because it is fast, almost to the point of being subliminal. The animation helps us connect the button at the bottom of the screen to the expanded file; it makes the action more legible, even though in our everyday experience, files do not magically emerge from buttons.14

Motion in the interface can create the impression of vitality: living things are never completely still. Even when one is sleeping, the gentle rhythm of breathing is a visible sign of life. For interfaces that represent people and presence, we can incorporate subtle movements that invoke a living metabolism. Such motions can depict a functioning, changing online social world—not the sudden motions of pop-up notifications, but a constant, smooth growing and shrinking, a slow change of color and brightness. These movements indicate vitality and dynamism, without demanding an instant response.

For example, I may have many sources of frequently updated information—my email accounts, some news organizations, a few online discussions, and the like. Without a visual representation, I have no visceral sense of this lively activity when I look at my computer. Notifications that demand attention create a distracting interface; what I would like instead is a design that provides awareness of change without inciting reaction. Here we want neither the clamor of pop-ups, nor the sterile stillness of typical desktop interfaces.

One approach is to use ambient motion.15 This is different from the narrative motion that shows progression through time. Leaves rustling on a tree or waves lapping at the shore do occur over time, but what we see as most salient is not the progression of a story but the rhythm of repetition. We can convey information using ambient motion by changing the frequency, rhythm, and duration of the pattern (Lam and Donath 2005). Many patterns can be the basis for such animation, the key being that they are slow but rhythmic. Repetitive motions such as ocean waves, a fire flickering in a fireplace, or the flow of traffic as seen from many stories above can inspire interfaces that inform without distracting. A pattern can pulse slowly from dark to light, or waves can smoothly ripple across it. The fundamental rhythm can be set to reflect, for instance, the typical number of messages and updates one receives over a certain period; and current variations from the normal pattern can be represented as changes in that rhythm. Multiple rhythms can concurrently show patterns at different timescales, such as what is typical for a day and what is typical for an hour.16
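A minimal sketch of such an ambient signal follows; the mapping from message volume to pulse period and the constants used are placeholder assumptions, not recommendations.

```python
# A sketch of an ambient signal whose pulse period reflects message volume;
# the mapping and constants are placeholder assumptions, not recommendations.
import math

def brightness(t, messages_per_hour, base_period=60.0):
    """Brightness in [0.3, 1.0]; the pulse quickens as activity rises."""
    period = base_period / (1 + messages_per_hour / 20)   # about 60 s when idle
    return 0.65 + 0.35 * math.sin(2 * math.pi * t / period)

# Sample one minute at two activity levels: the busy rhythm is noticeably faster.
for label, rate in (("quiet", 2), ("busy", 40)):
    print(label, [round(brightness(t, rate), 2) for t in range(0, 60, 7)])
```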

Aesthetics

When representing social information, the look and feel of the interface—the subtextual messages that its visual style conveys—can be as influential as the actual data. In designing visualizations, legibility, accuracy, and aesthetics are all important. Traditional scientific visualization has focused primarily on the first two, legibility and accuracy. I will argue here that aesthetics, particularly for social visualizations, is also important.17


Art historian E. H. Gombrich writes:

The geometrical structure of a visual design can never, by itself, allow us to predict the effect it will have on the beholder. … [In addition to perceptual elements such as scale and color] the visual effect of any design must also depend on such factors as familiarity or taste. … It is this subjective element in the visual effect of pattern which seems to me largely to vitiate the attempts to establish the aesthetics of design on a psychological basis. (Gombrich 1981, 117)


Aesthetic judgment is a combination of cultural taste and personal response. It is a subjective assessment, drawing from both our sensory responses and our learned interpretations. Let’s take the example of a page with a border of intertwined flowers. Part of our sensory response comes from our biological reactions to colors: if they are blue and purple, they will seem calmer than a border of bright reds and yellows; this is inherent to the structure of our visual system. Personal taste determines whether they seem delightfully bright or unattractively garish. The meaning we ascribe to the border comes from both its innate properties—living flowers are signs of life and of pleasant weather—and from learned cultural associations. If I see a flowery-bordered Web page, my immediate guess is that it belongs to an older woman who is interested in homey crafts, possibly religious and conservative, and probably not very technical. Whether I find the design attractive depends on whether I find the associated traits appealing. Aesthetic judgment often has a class component: things we associate with higher status and admired social affiliations appear attractive. The unattractive appearance of things associated with social affiliations we disdain or dislike feels like an inherent property of the thing, and we thus see those who like them as having poor taste, without recognizing the extent to which our aesthetic judgment is a learned and cultural response.

Aesthetics affects the impression that a visualization makes about the social phenomenon it depicts, and it influences the social behavior an interface promotes. In the physical world, noteworthy buildings are more than just functional; their architecture and interior provide us with cues about the tone and purpose of the space. We see this in restaurants, where the lighting, colors, and materials tell us whether an establishment is formal, romantic, or a great place to bring small children. People communicate this way in their homes, consciously or not, conveying not only an impression of who they are or aspire to be, but also of their expectations for visitors. How your guests sit, stand, and what they discuss with you may be different when in a room with formally arranged antiques versus brightly colored beanbag chairs. Magazines’ layouts and fonts provide us with clues about their content and editorial policy.

3.9 USA’s Most Wanted (dishwasher size). 3.10 USA’s Least Wanted (paperback size). 3.11 Holland’s Most Wanted (magazine size). 3.12 Holland’s Least Wanted (wall size).

Figures 3.9, 3.10, 3.11, 3.12: Vitaly Komar and Alex Melamid, People’s Choice (1995). Which fonts, colors, shapes, and motions are attractive, and what they signify, varies from culture to culture and from person to person. The artists Vitaly Komar and Alex Melamid, in their “Most and Least Favorite Painting” project, surveyed hundreds of people around the world about what they liked in a painting—colors, sizes, subject, and so on—and painted the results, country by country. The results show intriguing differences in national taste: Americans, overall, like traditional paintings with a historical figure in an outdoor scene, whereas the majority of Dutch people surveyed preferred abstract art with blended colors. “Most and Least Favorite Paintings” is a brilliant critique of opinion-survey-driven design, and a celebration of the power of visualization. Whereas reading the survey results, country by country, is dull and meaningless, seeing them rendered as paintings is fascinating (Komar and Melamid 2011; Komar, Wypijewski, and Melamid 1997).

Aesthetics also affects the visitor’s or viewer’s emotional response and enjoyment of an experience. Around 2010, Apple’s iPhone became the most popular cell phone in the United States. People found the way it looked and responded so enjoyable that they preferred using it over a phone that had better service. Viewers will spend time with a visualization that they find attractive (calm, soothing images attract some, whereas bright animation appeals to others). When something seems dull or irritating, it is not easy, even if you are quite motivated, to look at it for very long. A dry graph may be accurate, but if the viewer’s eyes glaze over, the information will not be conveyed. Yet when something is appealing, you can look at it for quite a while. You can take your time and think about the patterns you see; you can watch it long enough to formulate impressions, to wonder about certain anomalies or correlations. The aesthetic appeal of the visualization is important in motivating people to spend time with the material, to contemplate it, and to think deeply about it.

In 2005, Martin Wattenberg launched the NameVoyager, a Web applet that lets people explore the popularity, since 1900, of about 6,000 names. It was immediately popular, with over 500,000 visitors in its first two weeks, and it was still heavily visited years later. It is a graceful and enticing visualization. You can type a name to see the arc of its popularity. As you type, it instantly shows the graphs for all names beginning with the letters you have entered (figure 3.13 shows names beginning with “N,” with “Nicole” highlighted). Expectant parents are interested in baby names, but this site attracts a broad audience, many of whom spend quite a bit of time delving into a dataset that would normally be of only passing interest. People discussed it on blogs and discussion sites, commenting on the changing fashions in names, speculating why a name had a peak of popularity at a particular time, and looking for ethnic and cultural patterns in the names. The intriguing graphical form and intuitive interface inspired hundreds of thousands of people to explore this data (Wattenberg and Kriss 2006; Wattenberg 2005).18

Figure 3.13

Martin and Laura Wattenberg, Name Voyager (2005).

There is no sharp division between legibility and aesthetics, between form and function. A casual and curvy chair that reveals itself, when actually sat on, to be uncomfortable has a (lack of) functionality that belies its initial appeal. An attractive visualization whose meaning is hard to decipher will eventually prove more frustrating than fascinating.

The influential graphic designer and writer Edward Tufte strongly advocates a minimalist approach to visualization. He has waged battle against what he calls “chart-junk”—embellishments and decorations that do not convey the focal statistics—and recommends a high “data-ink ratio.” Tufte’s work features graphs that he has made sparely elegant by removing outlines, extra markers, and even pieces of bars from bar graphs (Tufte 1986, 1990).

Yet, decorative graphs can be more memorable than simpler ones, and are often as legible (Bateman et al. 2010). They may draw the viewer’s attention and keep it longer. A graph’s style can provide cues about the objectivity and completeness of the data. Minimalist statistical graphs convey seriousness and exactitude. They imply that the data are solid and significant. Decorated graphs usually convey some editorial position about the data, which can also draw viewers’ attention; they present an argument, take a stand, rather than just offering dry statistics. Deciding on the right approach depends on the type of data and the goal for the depiction.

Ambiguity

For a visualization to be accurate, it should appropriately display the degree of exactness of the data it renders. In many cases, the ideal depiction is a rendering of ambiguity.

Social data are often inexact, and rendering them too clearly can misleadingly imply accurate precision. For example, one can make social network diagrams that show the connections among a group of people, as inferred by their common interests. Different ways of depicting these connections give viewers different impressions of how solid each tie is. Drawing a line connecting two people implies there is a palpable connection, that they know each other personally. But their tie may be weaker; the connection between two people may only be that they have used many words in common in their postings or expressed interest in the same movies, books, and music. Tenuous connections should be drawn with the appropriate ambiguity—for example, by putting the people near each other, but not connecting them with an unambiguous line. (The next chapter, “Mapping Social Networks,” looks more closely at how to depict the often complex relationships between people.)

There are many ways to visually render inexactness and ambiguity, including oscillation, blending, blurring, and waviness (Pang, Wittenbrink, and Lodha 1997; Zuk and Carpendale 2006). Visual Who (discussed more fully in chapter 1) showed connections among people inferred from common mailing-list membership. As it animated the clusters, rather than snapping each new configuration decisively into position, it rendered the names as if they were attached by invisible rubber bands, so they oscillated gently and settled into place only slowly. This conveyed the imprecision of the associations (Donath 1995).
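The motion described here is essentially a damped spring. The sketch below, with invented constants and not drawn from the Visual Who source, shows how such a spring update produces the overshooting, slowly settling movement that signals a placement is approximate rather than exact.

```python
# A minimal sketch of damped "rubber band" motion (not the Visual Who
# code). Each name is pulled toward its target position by a spring and
# slowed by damping, so it overshoots, oscillates, and settles only
# gradually. The constants are invented.
STIFFNESS = 0.08   # spring constant: how strongly the name is pulled
DAMPING   = 0.92   # < 1.0: a little energy is lost each frame

def step(position: float, velocity: float, target: float) -> tuple[float, float]:
    """Advance one animation frame of a 1-D spring toward `target`."""
    force = STIFFNESS * (target - position)
    velocity = (velocity + force) * DAMPING
    return position + velocity, velocity

# A name starting at x = 0 drifting toward a cluster at x = 100:
x, v = 0.0, 0.0
for frame in range(60):
    x, v = step(x, v, target=100.0)
print(round(x, 1))   # near 100, but still wobbling around it
```

Because the damping factor is close to 1, the oscillation decays only gradually; the names keep wobbling for a while, and the viewer never sees a falsely decisive snap into place.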

Interactivity

Interactivity sets computational media apart from other media. The viewer provides input and the interface responds; it is a dialogue between person and machine, and the designer defines the machine’s role in that dialogue. That role may simply be that of a tool: click on this button and a picture appears; click on that button and the cursor becomes a pen, and moving it around leaves marks on an image. Seemingly simple and straightforward, this human–computer dialogue is carefully crafted to seem intuitive (Card, Moran, and Newell 1983; Preece, Rogers, and Sharp 2002).

When we interact with something online, we expect it to react. How it reacts provides our impression of what it is and shows the effect of our action. Interaction may be exploratory, where what changes is the user’s perspective; the underlying data remain intact. Or interaction may change something in the online space. Interface objects such as the button that maximizes and minimizes a window have the power to act on other things. Part of the design challenge is helping the user understand what these powers are.

When something reacts to our actions as we expect, we perceive it to be working; if it reacts in an unexpected way, it may seem broken, or, if cleverly designed, funny or thought provoking. Many online interactions are metaphoric constructions drawn from familiar objects, like the buttons mentioned earlier. Most of the time, they behave as a physical button does: press or click anywhere on it and it sets some other action in motion. We are not surprised if they change in some way to show the mode they are in; physical buttons do this when they stay depressed or light up. However, we do not expect a button to do different things depending on where on it we press. We expect the online button, like its physical model, to be a solid object; but online, that solidity is a design choice rather than a physical constraint. The cursor usually moves freely across the screen, floating above the interface objects—buttons, file icons, and the like. Yet we can make interfaces where things push against each other, deform their shapes, or attract and repel each other like magnets. Used carefully, interactions can convey a wide range of expressions.
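To make the point concrete, here is a minimal sketch, written as plain functions rather than in any real toolkit's API, of two hit-testing rules: a conventional button that treats every point on its surface identically, and a variant whose response depends on where it is pressed. Both are equally easy to build; the uniformity of the first is a convention, not a necessity.

```python
# Two hypothetical hit-testing rules for a 100-by-40 button whose
# top-left corner is at (0, 0). The first is the familiar "solid"
# button; the second responds differently by position, which online
# is just as easy to implement.
def solid_button(x: float, y: float, w: float = 100, h: float = 40) -> str:
    """Any click inside the rectangle triggers the same action."""
    return "pressed" if 0 <= x <= w and 0 <= y <= h else "ignored"

def position_sensitive_button(x: float, y: float, w: float = 100, h: float = 40) -> str:
    """A click on the left half does one thing, on the right half another."""
    if not (0 <= x <= w and 0 <= y <= h):
        return "ignored"
    return "pressed_left" if x < w / 2 else "pressed_right"

print(solid_button(10, 10), solid_button(90, 10))                             # same response
print(position_sensitive_button(10, 10), position_sensitive_button(90, 10))   # different responses
```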

The Illusion of Sentience

When we interact with something that behaves in a very simple and predictable way, we think of it as a mechanistic object, something that we can manipulate, that may in turn cause something else to happen; a switch turns on a light, for example. If it behaves in a more complex way, perhaps with unexpected responses, we start to see it as sentient. The new car that starts up flawlessly with the turn of a key is a machine; the old one that must be coaxed with just the right amount of pedal pumping and rest between tries seems to have a will and personality.

An important distinction exists between interactions that give the impression that a sentient being is responding to you and those that feel as if you are controlling a puppet. Many experiments with interactive portraits (Cleland 2004) and other artworks, intended to create a sense of dialogue, instead feel as if one is controlling an object. The cause is often an overly simple script: if I move closer, it does X; if I move away, it does Y; if I speak, it does Z. The setup may be complex and the visuals elaborate, but the actual interaction is made up of a series of discrete actions with predictable responses.
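A sketch of such a script makes the problem visible; the event and response names below are hypothetical, not taken from any particular artwork. Each input maps to one fixed output, with no memory or internal state, so the piece answers like a puppet being operated rather than a being responding.

```python
# A sketch of an overly simple stimulus-response script. Every input
# maps to exactly one predictable output, so the interaction has no
# history, no internal state, and no capacity to surprise.
RESPONSES = {
    "viewer_approaches": "portrait_leans_forward",
    "viewer_retreats":   "portrait_looks_away",
    "viewer_speaks":     "portrait_nods",
}

def react(event: str) -> str:
    return RESPONSES.get(event, "portrait_stays_still")

print(react("viewer_approaches"))   # always the same answer,
print(react("viewer_approaches"))   # no matter how often it is asked
```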

Figure 3.14

Tamagotchi, Bandai Corporation. This virtual pet requires continual care in order to thrive.

“Keychain pets” show how a very simple interactive object can still provide a strong impression of sentience (see figure 3.14). These interactive toys were first released in Japan in the mid-1990s. The owner’s job is to keep the pet happy and healthy by feeding it, playing with it, disciplining it, cleaning up after it, and so on. If the owner assiduously attends to its needs, the pet will thrive and behave well; if ignored, the pet will sicken and die. All these actions are carried out through an interface of a few buttons and simple screen graphics; the pets are metaphorical creatures, created out of hints and references to real animals. They are interesting because the interaction between owner and “pet,” built from a combination of autonomous behavior, dependency, intensive interaction, and ongoing development, engenders deep devotion (Donath 2004a; Kaplan 2000).

An artificial pet acts—or, more precisely, appears to act—autonomously. Its actions seem to be internally motivated; it appears to have its own goals, feelings, and desires. It does not necessarily obey human commands but instead makes its own demands on its owner. When machines work exactly as we expect them to and do what we request of them, we think of them as simply machines. It is when they do not work as expected that they appear to have a will of their own and we ascribe intelligence to them. Most artificial pets start as “infants.” This elicits nurturing and affection: we instinctively take care of the young. The pets are designed to require their owner’s help throughout their life span in order to thrive and survive. If the owner does not “feed” or “entertain” them, they become ill or even die. The pet’s dependence makes the owner feel responsible for it. Feeding, cleaning, and playing with the pet all involve interacting with it, and the pet becomes integrated into the owner’s daily routine. Having spent a considerable amount of time and energy on the pet, the owner is invested in its well-being, a feeling that is enhanced by the way the artificial pets are designed to develop in response to the owner’s treatment of them: a pet that is well cared for will be healthier and more tractable. The owner is thus encouraged to take pride in his or her pet’s well-being.
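Very little machinery is needed to produce this sense of dependency. The sketch below is a rough illustration with invented numbers, not Bandai's actual logic: a need that grows on its own, health that erodes when the need is neglected, and an owner action that resets it.

```python
# A minimal sketch of the internal state that makes a keychain pet seem
# dependent on its owner (the thresholds and values are invented).
# Hunger rises on its own; if it is not relieved by feeding, health
# erodes, and eventually the pet "dies."
class Pet:
    def __init__(self) -> None:
        self.hunger = 0
        self.health = 10

    def tick(self) -> None:
        """One unit of time passing with no owner attention."""
        self.hunger += 1
        if self.hunger > 5:          # neglect starts to hurt
            self.health -= 1

    def feed(self) -> None:
        self.hunger = 0              # the owner's care resets the need

    @property
    def alive(self) -> bool:
        return self.health > 0

pet = Pet()
for hour in range(12):               # half a day of neglect
    pet.tick()
print(pet.alive, pet.health)         # still alive, but visibly worse for wear
```

Because the need keeps returning, attending to the pet becomes a recurring obligation, and neglect has visible consequences; that recurring demand is what weaves the pet into the owner's daily routine.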

Artificial pets also demonstrate how metaphorical thinking influences our sense of ethics. If we think of them as games, the time spent playing with them is entertainment and somewhat self-indulgent; if we think of them as animals, time spent playing with them is caretaking, an act of responsibility and altruism. It is obsessive to leave a meeting or dinner because a game requires attention, but it is reasonable to do so if a pet is in need; indeed, it is heartless not to. This is a vivid example of the power of metaphor: the metaphor we use to think about something can change how we interpret it, act toward it, and judge how others behave toward similar things (Donath 2004a).

The metaphors that we use to think about other people online similarly affect our sense of responsibility toward them. If I ask a question of a search engine, I am pleased if I get a useful answer and annoyed if I do not. But I (correctly) do not feel thankful to the search engine for the time and effort it has put into helping me; it is an information machine, not a sentient being. If I ask the same question of an online forum, the process of typing words into a box is quite similar. Yet the person who answers me has donated time and effort. Ideally, I recognize this work and acknowledge the person behind it. Online interactions, however, can suffer from “depersonalization,” in which we fail to think of the others as fully human, which lowers the barrier to angry responses and other antisocial behavior. A social interface should promote the view of the other participants as human, creating a sense of community and responsibility.19 Understanding phenomena such as the nurturing response that artificial pets trigger helps us see the complex ways that interface and interaction shape our perception of others.

Affordances and Perceived Affordances

The psychologist James J. Gibson coined the word “affordance” to describe the properties of the environment relative to a particular animal. “The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill” (Gibson 1986, 127). For a human (and other animals), the ground, for example, affords support. For a lightweight insect, so does the surface of a pond—but not for heavier creatures; we sink. For a small child, a toddler’s tiny chair affords sitting, but not for a stiff and heavy older person. A book affords reading, but only to someone who is literate in its language. Affordances describe our potential relationship with any external entity, from the basic elements of water and air to the social interactions we have with other people.

The other animals afford, above all, a rich and complex set of interactions, sexual, predatory, nurturing, fighting, playing, cooperating, and communicating. What other persons afford comprises the whole realm of social significance for human beings. (Gibson 1986, 128)

An affordance is something you can actually do with a given element of your world. A perceived affordance is what you believe you can do with it.20 One can perceive an affordance that is not there: I can walk across what I think is solid ground that affords support, but if it is actually thin ice and I fall into the wintery pond, then it did not, for me, have the actual affordance of support.

Gibson notes that one might not always be aware of an affordance, but stresses the independence of affordances from the concerns or goals of the animal: “The observer may or may not perceive or attend to the affordance, according to his needs, but the affordance, being invariant, is always there to be perceived” (Gibson 1986, 139).

An affordance is different from a function because the latter implies an intended purpose. A fallen tree and a chair both afford sitting (for people), but it is the intended function only of the chair. Affordances are about what someone actually can do with some other thing or being; they are independent of intention. For a thief, a tourist with a wallet in his back pocket affords pickpocketing, but that is not the tourist’s intention. Gibson’s point, that affordances are invariant, means that the pocket affords picking for all of us with dexterous hands, though few of us are inclined to do so.

Designers fashion cities, houses, furniture, and interfaces with an intended function in mind. Yet people often use them in ways very different from those the designers had envisioned. We put matchbooks under a leg of a tippy table and store paintbrushes in old coffee cans. Houses are turned into schools and schools into houses. The artist David Byrne created visual poetry using the corporate presentation software PowerPoint (Byrne 2003; see figure 3.15). Homemaking magazines offer tips to thrifty readers about using rain gutters to keep computer cables in order and turning egg cartons into jewelry boxes. In West Africa, plastic-fiber rice sacks are unwoven and then rebraided into strong new ropes (Steen, Komissar, and Birkeland). Such repurposing is about discovering the affordances of the item beyond its stated uses.

Some designers work very hard to maintain control of their creations, making them specialized and difficult to convert to other uses.21 Others intentionally create flexible systems. For social communication, more flexible, adaptable technologies are generally the most successful. An interface that enforces a strict protocol of behavior is not only more limited in its uses than a general-purpose one; it also provides fewer opportunities for a group to develop its own communication mores (Sproull and Kiesler 1991a, 1991b). For example, if you are trying to schedule a dinner party, you could discuss possible dates with several friends via email, a general-purpose tool, or you could ask them to check off possible dates using a scheduling tool. Neither is better than the other, but we should be cognizant of the trade-off between sociability and efficiency. The scheduler makes gathering the information easy and organized, but you do not get the stories about where one unavailable friend will be for the month, nor the social cues that people exchange about the importance of the event, and so on.

Figure 3.15

David Byrne, Sea of Possibilities (2003). PowerPoint presentation software repurposed as art medium.

Most environments contain innumerable affordances, potential relations that a being in that space could have with the other things in it. We become aware of them only when we have some need, simple or complex as it may be, or when we are faced with a novel goal. If there is a sudden leak, for instance, we may look around for things that afford catching water. Stories of unexpected affordances fascinate people. A recent news story about a woman who fended off a ferocious black bear by throwing a zucchini at it made headlines around the world. With that story in mind, I see my coffee cup—which I usually think of as simply a container for hot beverages—and all the other small yet heavy objects in reach as potential projectiles, a stash of desktop weapons.

Lost affordances also capture our attention. The nightmarish edge of surrealism is a place where objects appear nearly normal, but some distortion has eradicated their common and expected function: Meret Oppenheim’s Object, a fur-lined coffee cup, or Man Ray’s Gift, an iron with a tidy line of spikes on the bottom. The Sociable Media Group’s Cheiro chair was designed as a physical avatar standing in for a distant person (see figure 3.16 and the discussion in chapter 12, “Social Catalysts”). The chair itself is a surrealistic object, a chair that does not afford sitting. It was sculpted to capture a moment in a chair’s transition from furniture to sentient object, its arms changing from rests to limbs, lying in its lap. And while the arms that make it impossible to sit subvert the customary purpose of a chair, in this case that subversion is deliberate and useful: Cheiro is an art object, and we wanted people to look at it, not sit on it.

A legible object is one whose affordances are clearly perceivable. Some basic ones are instinctive—air affords breathing, unless you are a fish—but most of our understanding of what we can do with the world around us comes from learning. Babies crawl, taste, and touch their way into understanding that flat surfaces afford support and that blocks afford stacking, whereas balls do not. Much of our social communication is about providing cues to each other about our “social affordances.” We do this with words: “Call me any time, I’d really like to help you”; “I’d love to, but I’m terribly busy.” And we do it with gestures, with how long we hold eye contact, how close we stand to another.

Figure 3.16

Francis Lam, Scott Weaver, and Judith Donath, Cheiro (2006).

In designing social media, a key issue in legibility is how one’s actions will appear to others. Communicating via computer media takes a leap of faith. You trust that the words or images you wish to send out will go to the people you want them to (and only to them) and in the form you intend. With an unfamiliar medium, you may not know whether your typing is instantly visible to your correspondent, or whether it appears only after you type a carriage return, press a send button, or perform some other action. It may be unclear who will see your writing, where it will appear in the context of an ongoing discussion, and whether you can subsequently withdraw it. The role of design is to make the perceived affordances match the real ones.
