My heart’s in Accra: Ethan Zuckerman’s online home, since 2003

June 10, 2011

Martin Nowak and the mathematics of cooperation

Filed under: Berkman,hyperpublic — Ethan @ 4:48 pm

Mathematical biologist Martin Nowak talks to us about the evolution of cooperation. Cooperation is a puzzle for biologists because it doesn’t make obvious evolutionary sense. In cooperation, the donor pays a cost and the recipient gets a benefit, as measured in terms of reproductive success. That reproduction can be either cultural or biological, and the challenge of explaining it remains either way.

It may be simplest to consider this in mathematical terms. In game theory, the prisoner’s dilemma makes the problem clear to us. Given a set of outcomes where we’re individually better off defecting, it’s incredibly hard to understand how we get to a cooperative state, where we both benefit more. Biologists see the same problem, even removing rationality from the equation. If you let different populations compete, the defectors win out against the cooperators and eventually extinguish them. Again, it’s hard to understand why people cooperate.

There are five major mechanisms that biologists have proposed to explain the evolution of cooperation:
– kin selection
– direct reciprocity
– indirect reciprocity
– spatial selection
– group selection

Nowak walks us through the middle three in some detail.

In direct reciprocity, I help you and you help me. This is what we see in the repeated prisoner’s dilemma. It’s no longer best to defect. As originally discovered by Robert Axelrod in a computerized tournament, the three-line program “Tit for Tat” wins:

At first, cooperate.
If you cooperate, continue to cooperate.
If you defect, defect.

While it’s a powerful strategy, it’s very unforgiving: a single mistake triggers an endless cycle of retaliation. Nowak wondered what would happen if natural selection designed a strategy. He created an environment to allow this, permitting random errors to make the game harder. If the other party plays randomly, the best strategy is to defect every time. When tit for tat is introduced, it doesn’t last long, but it does lead to rapid evolution. You’ll see “generous tit for tat” – if you cooperate, I will. If you defect, I will still cooperate with a certain probability. Nowak suggests that this is a good strategy for remaining married, and a step towards the evolution of forgiveness.
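
To see why generosity pays under noise, here’s a minimal simulation sketch of such an environment – my own illustration, not Nowak’s or Axelrod’s actual tournament code. The payoff values and the one-third forgiveness probability are standard textbook choices for this game:

import random

# Prisoner's dilemma payoffs, (my move, their move) -> my score.
# Standard illustrative values: temptation 5 > reward 3 > punishment 1 > sucker 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_moves):
    # Cooperate first, then copy the opponent's last move.
    return "C" if not opponent_moves else opponent_moves[-1]

def generous_tit_for_tat(opponent_moves, generosity=1 / 3):
    # Like tit for tat, but forgive a defection with probability `generosity`
    # (1/3 is the textbook value for these payoffs).
    if not opponent_moves or opponent_moves[-1] == "C":
        return "C"
    return "C" if random.random() < generosity else "D"

def always_defect(opponent_moves):
    return "D"

def play(strategy_a, strategy_b, rounds=1000, error=0.05):
    # Noisy repeated game: each intended move is flipped with probability `error`.
    score_a = score_b = 0
    seen_by_a, seen_by_b = [], []  # each side's record of the other's moves
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        if random.random() < error:
            move_a = "D" if move_a == "C" else "C"
        if random.random() < error:
            move_b = "D" if move_b == "C" else "C"
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

random.seed(1)
for name, strategy in [("tit for tat", tit_for_tat),
                       ("generous tit for tat", generous_tit_for_tat),
                       ("always defect", always_defect)]:
    a, b = play(strategy, tit_for_tat)
    print(f"{name:>21} vs tit for tat: {a} vs {b}")

Run it and two tit-for-tat players fall into vendettas after the first error, while generous tit for tat recovers and scores better against the same opponent.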

In a natural selection system, you’ll eventually reach a state where everyone cooperates, always. A biological trait needs to be under selection pressure to remain – we can lose our ability to defect and become extremely susceptible to invasion by an always-defect strategy. Cooperation is never stable, he tells us – it’s about how long you can hold onto it and how quickly you can rebuild it. Mathematically, direct reciprocity can come about if the probability of playing another round exceeds the cost-to-benefit ratio of the altruistic act.
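
In the notation of Nowak’s “Five Rules for the Evolution of Cooperation” (Science, 2006) – b for benefit, c for cost, w for the probability of another round – the rule is:

w > c/b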

Indirect reciprocity is a bit more complex. The good samaritan wasn’t thinking about direct repayment. Instead, he was thinking “if I help you, someone will help me.” This only happens when we have reputation. If A helps B, the reputation of A increases. The web is very good at reputation systems, but we’ve got simple offline systems as well. We use gossip to develop reputation. “For direct reciprocity, you need a face. And for indirect reciprocity, you need a name and the ability to talk about others.” In indirectly reciprocal systems, cooperation is possible if the probability of knowing someone’s reputation exceeds the cost-to-benefit ratio of the altruistic act. And this only works if the reputation system – the gossip – is conducted honestly.
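
The corresponding rule from the same paper, with q as the probability of knowing someone’s reputation:

q > c/b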

In spatial selection, cooperation happens among people who are close – geographically, or in terms of graph theory. Graph selection favors cooperation when individuals have a few close neighbors – it’s much harder with lots of loose collaborators. A graph where you’re loosely and equally connected to many people doesn’t tend towards cooperation.
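
And the matching rule for graphs, with k as the average number of neighbors: cooperation can evolve if

b/c > k

– the benefit-to-cost ratio has to exceed the number of neighbors, which is why a few close connections beat many loose ones.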

Charlie Nesson and a new vision of the public domain

Filed under: Berkman,hyperpublic — Ethan @ 4:28 pm

Charlie Nesson, one of the founders of the Berkman Center, asks us to consider who we are, and what is our public space. The query that informed the early life of the Berkman Center was whether we, on the internet, were capable of governing ourselves. To address this question, we need to ask what our domain as a people is. He offers, “We are the people of the net, and our domain is the public domain.”

If you want an orderly world of real property, you should build a registry. It’s the same in the world of bits. Charlie is now working on a directory of the public domain, starting with the Petrucci collection and IMSLP – the International Music Score Library Project. Charlie doesn’t mean public domain in the strict legalistic sense. Instead, he asks us to think of the public domain as the bits you can reach through the net. We can then separate the space into the free and the not-free, as constrained by copyright and by market.

To ensure we can be the people of the public domain, we need to build our domain on a foundation that is solid in law. We’re going to build based on collections organized by registrars. The problem with that strategy is that registries can be the focus of litigation risk. So the goal is to work with a reputable law firm to protect the registrar, the registry and users of the registry. That helps us positively define the public domain and defend it.

How does this relate to privacy? It’s worth thinking about the key actors involved. Which actors value individual privacy? Governments are interested in surveillance. Corporations are interested in data acquisition. Look at the librarians and we’ll find allies. They are connected to powerful institutions that share the value of privacy.

Judith asks Charlie to strengthen the connection to privacy. He responds, “I don’t like privacy. It tends to be too closely associated with fear, and it always seems like a rear-guard action against technology.” Instead, we should work on the architecture of the public space and ensure we architect for private space.

Herbert Burkert – moving beyond the metaphor

Filed under: Berkman,hyperpublic — Ethan @ 3:47 pm

Herbert Burkert of the University of St. Gallen in Switzerland teaches internet law and heads a center at St. Gallen that parallels the work we do at Berkman. He suggests we consider the space between beauty and coercion. There are only a few occasions when an audience takes pity on a lawyer, and it’s when a lawyer ventures into the sphere of aesthetics. There’s such a thing as legal creativity, but it usually leaves you facing the ethics board and quickly turns pity into self-pity. So he wants to move from a presentation on “criteria” to one about “comments”.

His comments are structured around two names. One is Johann Peter Willebrand, a German writer about public security who encouraged registration of foreigners in towns. But he also encouraged the pledge to treat citizens and foreigners politely, which you can read as you wait for hours to pass through immigration in Boston. He’s become something of a hero to Burkert, as someone who’s tried to change the relationship between beauty and coercion, coercing people into beauty.

Burkert’s point – design and architecture talk is dangerous talk. Le Corbusier wanted to design not just buildings, but how people live. Totalitarian designers used certain architectures to control people. And today’s contemporary suggestions on public safety, walkability, and security need to be considered in this light. When you consider criteria of design, ask whether you’re designing for people, and whose interest you’re designing for. How much space for opportunities to live are you prepared to leave for others?

This leads us to Lina Bo Bardi, an Italian architect who worked in Brazil. She was asked to turn a factory in São Paulo into a recreation area. The city is a remarkable and challenging place: so crowded that it has the most helicopters per capita of any city, because flying is the only way to beat traffic, and it has a serious problem with crime. She built a tower and bridges that connected to the factory, suggesting a dialog between work and play. It’s a very striking building – the windows look more like holes made by grenades than designed openings.

How is this relevant? Bo Bardi was designing to create opportunities for social gatherings, and for cross-generational communication. Burkert suggests that cross-generational communication is quite rare in social media. So is cross-cultural communication. And these spaces encourage opportunity for variety, and opportunity for protected openness.

Perhaps the low walls that appear in her design are metaphors for scaled privacy. Or maybe we need to stop using these kinds of physical metaphors, at least from architecture, in these virtual spaces?

Data, the city and the public object

Filed under: Berkman,hyperpublic — Ethan @ 2:59 pm

Adam Greenfield is the principal designer of Urbanscale, a design firm that focuses on design for networked cities and citizens. He’s interested in the challenge of building spaces that support civic life, public debates, and the use of public space.

The networked city isn’t a proximate future, it’s now. We’ve got a pervasively, comprehensively instrumented population through mobile phones. We have widespread adoption of locative and declarative media through tools like Foursquare and systems of sentiment analysis. And we’re starting to see “declarative objects”, items in public spaces like the London Bridge, which now tweets in its own voice using data scraped from a website. Objects start having informational shadows, like a building in Tokyo literally clad in a QR code – you can “click” on the building and read more about it.

We’re starting to see cities that have objects, buildings and spaces that are gathering, processing, displaying, transmitting, and taking action on information. We’re subject to new modes of surveillance which aren’t always visual. Tens of millions of people are already exposed to this, which suggests we may need a new theory and jurisprudence around public objects.

Offering a taxonomy of public objects, Adam starts with the example of the Välkky traffic sensor. This detects the movement of people and bikes in a crosswalk and triggers a bright LED light to warn motorists. This is very important in Finland, which is very dark 20 hours a day, 10 months of the year. He describes this as “prima facie unobjectionable, because the data is not uploaded, not archived, and because there’s a clear public good.”

Another example is an ad in the subway system in Seoul. There’s a red carpet in front of a billboard. Walk on it, and the paparazzi in an animated billboard will swivel and photograph you. It’s mildly disruptive and disrespectful, and there’s no consensus public good. On the plus side, it’s purely commercial – there’s no red herring of benefit. And it probably doesn’t rise to the threshold of harm.

And then there’s the soda machine. Adam shows us the Accure touch screen beverage machine in Tokyo, which uses a high resolution display to show you what beverages are available. Each customer is offered different consumables – an embedded camera guesses at age and gender and delivers beverage options to you based on that model. It’s prescriptive and insidiously normative. And it compares information with other vending machines. If you’re a bit abnormal – a man who likes beverages common in the female model, for instance – these systems leave you out of luck. And while they’re commercially viable, there’s no public good associated with this information gathering. We might put this into the same category as interactive billboards with analytics packages, like the Quividi VidiReports, which detects age, gender, and even gaze. There is no opt out – you’re a data point even if you turn away from the ad.

How do we think about these systems when power resides in a network? Adam gives the example of an access-control bollard in Barcelona, a metal post that rises out of the ground to block access to a street unless you present an RFID tag that gives you permission to pass. The system relies on an embedded sensor grid, an RFID system, signage, and traffic law all interacting together. It’s a complex, networked system that we largely interact with through that one bollard. These systems are harder still to grasp when they exist solely in code.

There’s a class of public objects that we need to define and have a conversation about. Adam proposes that they include any discrete object in the common spatial domain that is intended for general use, located on a public right of way, or de facto shared with the public. When we build these systems, Adam says, we should design so that the data is open and available. That means offering an API, and making data accessible in a way that’s nonrivalrous and nonexcludable.
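
Adam doesn’t spell out an interface, but a minimal sketch of what “offering an API” for such an object might look like – every name here is hypothetical, modeled loosely on the crosswalk sensor above – could be as simple as a read-only JSON endpoint:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical state for a public object, loosely modeled on the crosswalk
# sensor above. Serving it openly is what makes the data nonrivalrous and
# nonexcludable: anyone can query it, and no reader crowds out another.
CURRENT_STATE = {
    "object_id": "crosswalk-sensor-42",  # hypothetical identifier
    "kind": "pedestrian-detector",
    "location": {"lat": 60.1699, "lon": 24.9384},
    "pedestrian_detected": False,
    "data_retained": False,  # not uploaded, not archived
}

class PublicObjectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/state":
            body = json.dumps(CURRENT_STATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

HTTPServer(("", 8000), PublicObjectHandler).serve_forever()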

An open city necessarily has a more open attack surface. It’s more open to griefing and hacking. We need a great deal of affirmative value to run this risk. And we need to develop protocols and procedures to establish precedence and deconfliction around these objects. We’re roughly a century into the motor car in cities and we still don’t handle cars well, never mind these public objects.

Adam advocates a move against the capture of public space by private interest and towards a fabric of freely discoverable, addressable, queryable and scriptable resources. We need to head towards a place where “right to the city” is underwritten by the technology of the space.


Jeffrey Huang of the Berkman Center and EPFL Media x Design Laboratory has been involved with the design of a “hyperpublic” campus in the deserts of Ras Al Khaimah, one of the seven Emirates of the UAE. The Sheik of the state has agreed to fund the joint development of the campus with Huang’s institution in Switzerland, and his design students have been focused on building a university campus that’s deeply public, both in terms of physical and architectural space.

One of the major constraints for the design is lowering water and energy usage. The goal is to use data to make the buildings environmentally sensible. They’ve mapped the building site and located natural low points where water accumulates. The design makes use of these points as “micro-oases”. The design for the building is large, open spaces around these points, an echo of the EPFL learning center in Lausanne, Switzerland.

Within the building, a network of sensors can greet people by name and offer them personal services. You can interact with people through data shadows that physically track them through the building – a shadow cast on the wall showing someone’s name, identity and interests.

He acknowledges the dangers of this system, making reference to Mark Shepard’s Sentient City Survival Kit and its umbrella whose visual pattern scrubs your data from surveillance. But he notes that there’s less need to design the private if hyperpublicness is adequately designed. We should build fully transparent systems in which everyone and no one owns the data.


Betsy Masiello from Google works on public policy issues and offers us a practitioner’s perspective on the topic of the hyperpublic. She tells us she originally misread the title of our session – “The risks and beauty of the Hyper-public life” – and skipped over the risk part. She worried we might be celebrating a “Paris Hilton-like existence of life streaming,” making your identifiable behavior available to anyone who chooses to watch.

There’s a better way of thinking about data-driven lives and existences. Systems like Google Flu Trends use lots of discrete points of information to make predictions about health issues – which becomes quite important when it helps us target outbreaks of diseases like dengue fever. Unlike the pure performance of a public life, we get a public good that comes from big-data analysis.

She offers a frame for analysis: predictive analytics based on your behavior, which use your data and make it clear how it’s used, versus systems that are predictive based on other people’s behaviors, like Google’s search, Flu Trends, and perhaps the soda machine Adam talked about. Both kinds of systems can be very valuable. But the risk is the collapse of contexts that happens in a hyperpublic life – the idea that data can be reidentified and attached to your identity.

She recalls Jonathan Franzen’s essay, “The Imperial Bedroom”, from 1998 about the Monica Lewinsky scandal. Franzen suggests that without shame, there’s no distinction between public and private. The more identifiable you are, the more likely you are to feel that shame.

The current challenge we face is constructing and managing multiple identities. Ideally, we’d have ways to manage an identity that includes a form of anonymity. It’s becoming trivial to reidentify people within sets of data. We may need policy interventions that put requirements on data holders, punishing those who release information that allows people to be reidentified.


There’s an interesting argument that arises around privacy and transparency. Adam offers his frustration that Amazon continues recommending Harry Potter to him despite having 15 years of purchasing behavior data, none of which should indicate his desire to read fantasy. Jef sees this as a problem of too little data, not too much. Jeff Jarvis, moderating, criticizes Adam for asking for too much privacy and tells us he doesn’t want a world in which we can’t customize, and where we’re forced away from targeted data when it’s useful.

Latanya Sweeney and rethinking transparency

Filed under: Berkman,hyperpublic — Ethan @ 11:43 am

Latanya Sweeney urges us to rethink the challenges of privacy. She’s worked in the space for ten years and tells us that thinking about privacy in terms of the design of public spaces is a helpful and useful conceptual shift. We tend to look at the digital world in terms of physical spaces. In digital spaces, though, we can often look at someone from different perspectives in parallel spaces, and learn things about you that might be considered “private”, hidden behind some sort of wall.

She prefers to talk about semi-public and semi-private spaces, and to consider the tension between privacy and utility. It’s not one or the other, but the sweet spot between the two. She’s rethinking privacy, particularly around the topic of big data: pharmacogenomics, computational social science, national health databases. This movement towards the analysis of huge data sets forces us to rethink within legacy environments. How do we de-identify data? What do informed consent and notice mean in these spaces? And we’re rethinking at architectural levels, too – moving towards a realm of open consent and privacy-protecting marketplaces.
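
The talk doesn’t go into mechanics here, but Sweeney’s own best-known answer to the de-identification question is k-anonymity: a release is k-anonymous if every combination of quasi-identifiers (her classic example: ZIP code, birth date, sex) is shared by at least k records. A minimal sketch of the check, with a toy dataset of my own invention:

from collections import Counter

def k_anonymity(records, quasi_identifiers):
    # The k of a release: size of the smallest group of records that share
    # the same values on every quasi-identifier column.
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Toy data of my own invention; Sweeney famously showed that ZIP code,
# birth date and sex together identify most Americans.
records = [
    {"zip": "02138", "birth_year": 1965, "sex": "F", "diagnosis": "flu"},
    {"zip": "02138", "birth_year": 1965, "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth_year": 1972, "sex": "M", "diagnosis": "flu"},
]

print(k_anonymity(records, ["zip", "birth_year", "sex"]))  # 1: someone is unique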

Open consent has been popularized by George Church at Harvard Medical School. Rather than asking for consent or making promises or guarantees, he gives you a contract in which you sign away liability, because considering future risks is simply too hard. It sounds kooky, but a thousand people have signed up. Another model is a trade-secret model – what if I treat your genomic data as a trade secret? As long as I keep it private, you’re exempt from liability – release it and all bets are off. We might also think of data-sharing marketplaces where we insulate participants from harm and compensate them when it occurs.

We need to think through these components:
– Data subjects – we need to think through the possibility of economic harm to these actors, in part because humans tend to discount risks around privacy
– Technology developers – some of these developers are her students, and she urges them to think about the power they exert through privacy and technology decisions. Video recorders capture sound as well as video, and sound is hard to mute; as a result, videotaping often runs up against wiretapping laws… and this could have been mitigated by a one-cent design decision
– Policymakers
– Belief systems
– Benefit structures
– Legacy environments

Zeynep Tufekci asks Sweeney to talk through the question of belief systems and false tradeoffs. Sweeney suggests these debates rest on a false belief that you must maximize either privacy or utility – the key is the relationship between the two.

Walls and thresholds – physical metaphors at Hyper-public

Filed under: Berkman,hyperpublic — Ethan @ 10:50 am

Urs Gasser, the director of the Berkman Center, opens the Hyper-public conference with a discussion of a privacy ruling in his native Switzerland. Swiss data protection law has led to a ruling regarding Google’s Street View product. To operate in Switzerland, Google must blur the faces of anyone caught in a photograph as well as license plate numbers. But they have to go further and eliminate any identifying information, like the skin color of people standing in front of shelters, retirement homes, prisons, and schools. The ruling also prevents Google’s cameras from looking into gardens and back yards, or into any space it’s not possible for a pedestrian to see.

The ruling, Gasser argues, indicates the complexity of delineating between public and private. It points to the need for a nuanced definition of privacy, including privacy in a public space, like in streets or libraries.

The design choices we make have multiple effects. They’ve got enabling effects – there are many services built atop Google Street View. These services can have leveling effects – Google’s product lets people who are physically immobile explore cities. And design choices have constraining effects. As Larry Lessig has famously observed, code is law – the technical constraints can prevent you from taking certain actions.

We may need corrective mechanisms for design choices. There’s a role for social norms – we might consider the fact that a significant percentage of Swiss inhabitants use Street View, which might provide implicit support for the design choices. And we need to consider how norms are changing – are changes like the practice of offering a “public apology” for actions on Facebook an appropriate response to straying across legal or normative boundaries?

Law tends to expect perfection – Swiss law doesn’t consider it sufficient that 99% of faces are blurred by Google’s technology. The law requires 100%, whether or not it’s realistic. To handle the challenges of privacy and technology, we need a feedback system that incorporates tech, law and user behavior.

Judith Donath is the lead organizer of the conference, and she reminds us that there’s no shortage of examples of the tension between public and private brought forward by technology. We can consider Google Street View, or just Anthony Weiner’s involuntary public exposure on Twitter. Technology is having an effect on what’s public and private space, whether you opt into social media systems or simply walk down the street. Ephemeral behavior becomes a permanent record.

Societies evolve norms around privacy. We don’t join other people’s conversations at a restaurant, and if we listen in, we try to disguise our behavior. In traffic jams, people forget their cars are transparent – they get dressed and pick their noses, and we try to look away. Those norms allow for privacy in public spaces. In law, on the other hand, privacy is sometimes seen as a goal in itself, not just a means to an end.

We may be reaching a point of very high societal privacy in the US. We’ve got privacy in our dwellings to an unprecedented degree, and the possibility of extreme privacy that comes from moving away from where we grew up. Through a market economy, it’s possible to live in ways where you no longer rely on tight networks of people for essential services like child care – you can live your life in personal isolation and still be clothed and fed.

Our behavior in public has to do with who is watching us. Are we being watched by marketers who want to turn us into consumption machines? By a repressive government? That doesn’t make for an especially cooperative society. On the other hand, at the extremes of privacy, it’s hard to have society at all.

Jonathan Zittrain leads the first discussion. He suggests that this privacy conference is different from other privacy conferences because we’ve got a mix of people in the room – people who often think about these issues, and those who rarely encounter them. Our goal is to arrive at a language we can all understand.

His first panelist is computer scientist Paul Dourish of UC Irvine. He suggests we consider privacy not as something we have, but as something we do. We tidy up before the cleaning lady comes, we might re-stack our magazines, putting Oprah on the bottom and the Atlantic on the top before people come to visit.

Privacy is also a function of group identity. There’s information that’s public within a group, and being in a family or a group requires a compromise in terms of privacy. We might think of this in terms of Michael Warner’s concept of publics and counterpublics. There are multiple publics that emerge in terms of address and encounter with media objects – there’s media aimed at people like me and other people’s media, which leads to other publics.

And there are infrastructures, networks that provide new ways of connecting people. They’re reusable and interchangeable – if we can’t plug our computer into an outlet in a particular place, that place isn’t infrastructural. These infrastructures make new relationships with spaces possible.

He offers a provocative example: how sex offenders navigate in California while constrained by GPS-enabled tracking anklets. How do you think about moving through a space if you can’t come within 2000 feet of schools, parks or swimming pools? It turns out that you simply can’t navigate the world at this scale. Instead, you end up thinking in terms of safe towns you can be in, and safe areas you can wander around.

What does it mean to be connected to other people in online and offline spaces? It’s about accountability to each other. The movements of parolees aren’t just their responsibility, but those of parole officers who need to be accountable for the complex, detailed log of where parolees go. Those officers need to account both for their behaviors and the vagaries of the tracking system.

Zittrain wonders whether we might build an iPhone app that solves “the travelling sex offender problem”, even as an art piece that displays how difficult it is to move through real spaces.
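
As a rough illustration of why this fails at scale (a sketch of my own, not an actual app – the coordinates below are hypothetical), the per-point check is trivial; the problem is that in a dense city the 2,000-foot exclusion circles around schools, parks and pools overlap until almost no permitted space remains:

from math import asin, cos, radians, sin, sqrt

EXCLUSION_FEET = 2000
METERS_PER_FOOT = 0.3048

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two lat/lon points.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def is_permitted(position, restricted_sites):
    # True only if the position is at least 2,000 feet from every site.
    limit_m = EXCLUSION_FEET * METERS_PER_FOOT
    return all(haversine_m(*position, *site) > limit_m for site in restricted_sites)

# Hypothetical school/park coordinates a few blocks apart; their exclusion
# circles already overlap, leaving no permitted corridor between them.
sites = [(37.7749, -122.4194), (37.7760, -122.4150), (37.7735, -122.4230)]
print(is_permitted((37.7755, -122.4180), sites))  # False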

Laurent Stalder, an architecture professor at ETH Zurich, has recently been studying two topics: the emergence of the English House as it entered German culture in the 1890s, and the nature of the threshold. Privacy is associated with enclosed spaces, he tells us. The desire for intimacy and protection, enclosed on all sides, reached its apogee with the Victorian house.

Since then, we’ve seen a reconsideration of the wall as a limit between interior and exterior space. We can think of the “unprivate house”, like Philip Johnson’s Glass House in New Canaan, CT, a house in a state of permanent accessibility. On the one hand, we have open houses – thresholdless, seamless environments – and on the other, spaces that are inherently about control: airports, laboratories.

The traditional door was a clear boundary between public space and complete privacy. The emergence of different threshold devices has fractured that space. These devices are anthropomorphic – they shape our activities by prescribing certain behaviors. And we see rituals associated with thresholds: cleansing, absolution. We need to think through the difference between a border and a threshold – a border can be closed, while a threshold is a neutral space, and a contested one.

John Palfrey, vice-dean of Harvard Law School and librarian of the Law Library, suggests that it’s simply not true that young people have given up on privacy. We care about it in particular contexts, and understanding those contexts is critical to understanding our practices. Unfortunately, we may not be very good at figuring out how to navigate these new spaces correctly.

Palfrey suggests that the design of the fourth amendment – which determines the rulesets for when the state wants information about you – doesn’t always work well in the new spaces we’re building. When we build a space like Facebook, we haven’t done the hard work on those permissions and tradeoffs.

We need to consider some of the basic design notions behind systems of internet and social media. The “check in” applications like Foursquare that seem to thrive are those that are interoperable. We want to check in once and have it posted in all systems simultaneously. But given the free flow of that data, we need to consider breakwalls and safe harbors, situations where the data can be slowed or stopped. What do those breakwalls look like in those highly interoperable systems?

Zittrain suggests that both Palfrey and Stalder are considering thresholds, limits and interfaces between space. Palfrey points out that designers generally want lower walls, but there are costs associated with those low walls – we try to keep walls low at Berkman, to maximize participation, but there are literal costs associated with it.

Zittrain reminds us that our colleague Charlie Nesson used to lifestream, recording the conversations he had. This was a step towards a world of 24/7 streaming, wearing microphones and cameras at all times. He wonders what happens when we merge this data stream with a market economy that allows us to surveil the world by purchasing parts of people’s lifestreams.

Jeff Jarvis wonders how architectural innovation on the web is changing our understanding of public and private spaces, offering the analogy of the hall in the 19th-century home as introducing the possibility of privacy in a bedroom. Stalder suggests that private and public spaces aren’t changing much in the private home, but the ways we go inside and outside – potentially through the internet and through remote cameras – may be changing. Dourish reminds us that these aren’t just spatial notions (inside/outside, public/private), but social notions. Not everyone had privacy in English homes – it was very different upstairs than downstairs. He suggests that not only are we still trying to figure out boundaries in cyberspace – we’re figuring them out in real space as well.

Zittrain wonders whether the reformulation of ideas of public and private are changing more quickly now than in years past. We build buildings and they last for many years. Virtual spaces can change much more quickly. When we build a house for someone, we know who’s the customer. It’s far less clear who’s the customer for Facebook, the user who gets it for free, or the advertisers who want access to you. How do we think of privacy in these reconfigurable spaces?

Facebook today is not the same Facebook as yesterday, and not just because of Facebook’s decisions, suggests Dourish. We reshape the space as well. Reconfiguration, he argues, is both sociological and technological.

Nell Breyer asks Stalder to clarify the nature of a threshold in the context of cyberspace – what’s the purpose of the threshold in a virtual space? He explains that it’s about a double meaning, a unity of space between the public and private. Breyer pushes further and wonders what we lose with the ability to “apparate” in digital space, appearing deep within a space and avoiding the engineered transition. Zittrain wonders whether we might see a visual representation of where other people are entering a website from.

Dourish points out that webmasters actually have all this information. This might be a reminder that we need to be careful about overusing spatial metaphors – we maintain multiple windows, we’re in different places at the same time. We need to recognize that part of the power of digital spaces is their dehistoricizing nature. We need to consider the creative opportunities for reconfiguring space.

David Weinberger offers the observation that physical architecture is always local. The web is global, and the norms of privacy, which had been intensely local, are now being forced to interact with this truly public space. Is there any hope we’ll come to global privacy norms that we can rely on?

Kenneth Carson suggests we think about private and public spaces in terms of the creation of community. He wonders how we change the nature of community in public and private spaces.

An answer is offered to David Weinberger’s question: privacy actually begins with the invention of the chimney, which made it possible to have an enclosed, heated space. But this didn’t exist for the poorest people. In general, we’ve built spaces that eliminate privacy, like the factory, for those who are disadvantaged.

A woman who introduces herself as “a lowly intern” suggests we consider the spaces where people use technologies, not just the virtual spaces: cybercafes versus computers in a private home. That spatial aspect can shape how we encounter these spaces. How do we feel about using Facebook in the library? In a repressive nation where government officials might be looking over our shoulder?

Dourish tells us about work one of his students is doing on World of Warcraft in China. Many players come into public spaces to play together, and their discourse about the game, which they know is American, is very Chinese – they consider it a Chinese game because it places huge weight on Chinese values like teamwork. There won’t be global agreements, in part because our encounters are inherently local.
