
Martin Nowak and the mathematics of cooperation

Mathematical biologist Martin Nowak talks to us about the evolution of cooperation. Cooperation is a puzzle for biologists because it doesn’t make obvious evolutionary sense. In cooperation, the donor pays a cost and the recipient gets a benefit, as measured in terms of reproductive success. That reproduction can be either cultural or biological, and the challenge of explaining cooperation remains either way.

It may be simplest to consider this in mathematical terms. In game theory, the prisoner’s dilemma makes the problem clear. Given a set of outcomes where we’re individually better off defecting, it’s incredibly hard to understand how we get to a cooperative state, where we both benefit more. Biologists see the same problem, even after removing rationality from the equation: if you let different populations compete, the defectors win out against the cooperators and eventually extinguish them. Again, it’s hard to understand why people cooperate.
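For concreteness, here is the standard payoff matrix behind that claim (my addition, using the conventional labels, not anything from the talk). Defection strictly dominates, even though mutual cooperation pays better:

```latex
% Prisoner's dilemma payoffs for the row player, with T > R > P > S.
% Whatever the other player does, defecting pays more, so rational
% players end up at (P, P) rather than the better (R, R).
\[
\begin{array}{c|cc}
                 & \text{Cooperate}        & \text{Defect} \\ \hline
\text{Cooperate} & R \ (\text{reward})     & S \ (\text{sucker's payoff}) \\
\text{Defect}    & T \ (\text{temptation}) & P \ (\text{punishment})
\end{array}
\qquad T > R > P > S
\]
```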

There are five major mechanisms that biologists have proposed to explain the evolution of cooperation:
- kin selection
- direct reciprocity
- indirect reciprocity
- spatial selection
- group selection

Nowak walks us through the middle three in some detail.

In direct reciprocity, I help you and you help me. This is what we see in the repeated prisoner’s dilemma. It’s no longer best to defect. As originally discovered by Robert Axelrod in a computerized tournament, the three-line program “Tit for Tat” wins:

At first, cooperate.
If you cooperate, continue to cooperate.
If you defect, defect.

While it’s a powerful strategy, it’s very unforgiving: a single mistake triggers an endless cycle of retaliation. Nowak wondered what would happen if natural selection designed a strategy. He created an environment to allow this, permitting random errors to make the game harder. Against a party who plays randomly, the best strategy is to defect every time. When tit for tat is introduced, it doesn’t last long, but it does set off rapid evolution. You’ll see “generous tit for tat” – if you cooperate, I will; if you defect, I will still cooperate with a certain probability. Nowak suggests that this is a good strategy for remaining married, and a step towards the evolution of forgiveness.
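To see why strict tit for tat is so brittle, here’s a minimal Python sketch (my illustration, not Nowak’s code, with assumed values: benefit 3, cost 1, a 5% error rate, 30% generosity) that plays a noisy donation game:

```python
import random

B, C = 3.0, 1.0          # benefit and cost of cooperation (assumed values)
ERROR = 0.05             # chance an intended move is flipped by mistake
ROUNDS = 1000
GENEROSITY = 0.3         # probability GTFT forgives a defection

def tft(my_moves, their_moves):
    # Tit for Tat: cooperate first, then copy the opponent's last move.
    return their_moves[-1] if their_moves else "C"

def gtft(my_moves, their_moves):
    # Generous Tit for Tat: like TFT, but forgive a defection sometimes.
    if not their_moves or their_moves[-1] == "C":
        return "C"
    return "C" if random.random() < GENEROSITY else "D"

def noisy(move):
    # Implementation errors: the intended move is flipped with probability ERROR.
    if random.random() < ERROR:
        return "D" if move == "C" else "C"
    return move

def play(strat_a, strat_b, rounds=ROUNDS):
    a_hist, b_hist, a_score, b_score = [], [], 0.0, 0.0
    for _ in range(rounds):
        a = noisy(strat_a(a_hist, b_hist))
        b = noisy(strat_b(b_hist, a_hist))
        if a == "C":               # donation game: cooperating costs the donor C
            a_score -= C; b_score += B   # and gives the recipient B
        if b == "C":
            b_score -= C; a_score += B
        a_hist.append(a); b_hist.append(b)
    return a_score / rounds, b_score / rounds

print("TFT vs TFT:  ", play(tft, tft))    # noise locks TFT into retaliation cycles
print("GTFT vs GTFT:", play(gtft, gtft))  # forgiveness recovers cooperation
```

With errors in play, two tit for tat players fall into long retaliation cycles and their average payoffs sag; the generous pairing forgives occasional defections and stays near the fully cooperative payoff.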

In a natural selection system, you’ll eventually reach a state where everyone cooperates, always. But a biological trait needs to be under selection to be maintained – we can lose our ability to defect and become extremely susceptible to an always-defect strategy when it reappears. Cooperation is never stable, he tells us – it’s about how long you can hold onto it and how quickly you can rebuild it. Mathematically, direct reciprocity can come about if the probability of playing another round exceeds the cost-benefit ratio of the cooperative act.

Indirect reciprocity is a bit more complex. The Good Samaritan wasn’t thinking about direct repayment. Instead, he was thinking “if I help you, someone will help me.” This only happens when we have reputation. If A helps B, the reputation of A increases. The web is very good at reputation systems, but we’ve got simple offline systems as well: we use gossip to develop reputations. “For direct reciprocity, you need a face. And for indirect reciprocity, you need a name and the ability to talk about others.” In indirectly reciprocal systems, cooperation is possible if the probability of knowing someone’s reputation exceeds the cost-benefit ratio of cooperation. And this only works if the reputation system – the gossip – is conducted honestly.

In spatial selection, cooperation happens among people who are close – close in the sense of graph theory, not just geography. Graph selection favors cooperation when you have a few close neighbors; it’s much harder with lots of loose connections. A graph where you’re loosely and equally connected to many people doesn’t tend towards cooperation.
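A compact way to state these thresholds, following the formulation in Nowak’s published “five rules” paper (my gloss, not a transcript of the talk), with b the benefit and c the cost of the cooperative act:

```latex
% Cooperation is favored when (Nowak, "Five Rules for the Evolution
% of Cooperation", Science 2006):
\begin{align*}
  \text{direct reciprocity:}   &\quad w > c/b  &&  w = \text{probability of another round} \\
  \text{indirect reciprocity:} &\quad q > c/b  &&  q = \text{probability of knowing a reputation} \\
  \text{spatial selection:}    &\quad b/c > k  &&  k = \text{average number of neighbors}
\end{align*}
```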


Charlie Nesson and a new vision of the public domain

Charlie Nesson, one of the founders of the Berkman Center, asks us to consider who we are, and what is our public space. The query that informed the early life of the Berkman Center was whether we, on the internet, were capable of governing ourselves. To address this question, we need to ask what our domain as a people is. He offers, “We are the people of the net, and our domain is the public domain.”

If you want an orderly world of real property, you should build a registry. It’s the same in the world of bits. Charlie is now working on a directory of the public domain, starting with the Petrucci collection, the IMSLP (International Music Score Library Project). Charlie doesn’t mean public domain in the strict legal sense. Instead, he asks us to think of the public domain as the bits you can reach through the net. We can then separate that space into the free and the not-free, as constrained by copyright and by the market.

To ensure we can be the people of the public domain, we need to build our domain on a foundation that is solid in law. We’re going to build based on collections organized by registrars. The problem with that strategy is that registries can be the focus of litigation risk. So the goal is to work with a reputable law firm to protect the registrar, the registry and users of the registry. That helps us positively define the public domain and defend it.

How does this relate to privacy? It’s worth thinking about the key actors involved. Which actors appreciate individual privacy? Governments are interested in surveillance. Corporations are interested in data acquisition. Look to the librarians and we’ll find allies: they are connected to powerful institutions that share the value of privacy.

Judith asks Charlie to strengthen the connection to privacy. He responds, “I don’t like privacy. It tends to be too closely associated with fear, and it always seems like a rear-guard action against technology.” Instead, we should work on the architecture of the public space and ensure we architect for private space.


Herbert Burkert – moving beyond the metaphor

Herbert Burkert of the University of St. Gallen in Switzerland teaches internet law and heads a center at St. Gallen that parallels the work we do at Berkman. He suggests we consider the space between beauty and coercion. There are only a few occasions where an audience takes pity on a lawyer, and it’s when a lawyer ventures into the sphere of aesthetics. There’s such a thing as legal creativity, but it usually leaves you facing the ethics board, and pity quickly turns to self-pity. So he wants to move from a presentation on “criteria” to one about “comments”.

His comments are structured around two names. One is Johann Peter Willebrand, a German writer about public security who encouraged registration of foreigners in towns. But he also encouraged the pledge to treat citizens and foreigners politely, which you can read as you wait for hours to pass through immigration in Boston. He’s become something of a hero to Burkert, as someone who’s tried to change the relationship between beauty and coercion, coercing people into beauty.

Burkert’s point: design and architecture talk is dangerous talk. Le Corbusier wanted to design not just buildings, but how people live. Totalitarian designers used architecture to control people. Today’s suggestions on public safety, walkability, and security need to be considered in this light. When you consider criteria of design, ask whether you’re designing for people, and in whose interest you’re designing. How much space for opportunities to live are you prepared to leave for others?

This leads us to Lina Bo Bardi, an Italian architect who worked in Brazil. She was asked to turn a factory in São Paulo into a recreation area. The city is a remarkable and challenging place: so congested that it has the highest number of helicopters per capita, because flying is the only way to beat traffic, and it has a serious problem with crime. She built a tower and bridges that connect to the factory, suggesting a dialogue between work and play. It’s a very striking building – the windows look more like holes made by grenades than designed openings.

How is this relevant? Bo Bardi was designing to create opportunities for social gatherings and for cross-generational communication. Burkert suggests that cross-generational communication is quite rare in social media. So is cross-cultural communication. These spaces encourage opportunities for variety, and for protected openness.

Perhaps the low walls that appear in her design are metaphors for scaled privacy. Or maybe we need to stop using these kinds of physical metaphors, at least from architecture, in these virtual spaces?


Data, the city and the public object

Adam Greenfield is the principal designer of Urbanscale, a design firm that focuses on design for networked cities and citizens. He’s interested in the challenge of building spaces that support civic life, public debates, and the use of public space.

The networked city isn’t a proximate future; it’s now. We’ve got a pervasively, comprehensively instrumented population through mobile phones. We have widespread adoption of locative and declarative media through tools like Foursquare and systems of sentiment analysis. And we’re starting to see “declarative objects”, items in public spaces like the London Bridge, which now tweets in its own voice using data scraped from a website. Objects start having informational shadows, like a building in Tokyo literally clad in a QR code – you can “click” on the building and read more about it.

We’re starting to see cities that have objects, buildings and spaces that are gathering, processing, displaying, transmitting, and taking action on information. We’re subject to new modes of surveillance which aren’t always visual. Tens of millions of people are already exposed to this, which suggests we may need a new theory and jurisprudence around public objects.

Offering a taxonomy of public objects, Adam starts with the example of the Välkky traffic sensor. This detects the movement of people and bikes in a crosswalk and triggers a bright LED light to warn motorists. This is very important in Finland, which is very dark 20 hours a day, 10 months of the year. He describes this as “prima facie unobjectionable”, because the data is not uploaded or archived, and because there’s a clear public good.

Another example is an ad in the subway system in Seoul. There’s a red carpet in front of a billboard. Walk on it, and the paparazzi in an animated billboard will swivel and photograph you. It’s mildly disruptive and disrespectful, and there’s no consensus public good. On the plus side, it’s purely commercial – there’s no red herring of benefit. And it probably doesn’t rise to the threshold of harm.

And then there’s the soda machine. Adam shows us the Accure touch-screen beverage machine in Tokyo, which uses a high-resolution display to show you what beverages are available. Each customer is offered different consumables – an embedded camera guesses at age and gender and delivers beverage options based on that model. It’s prescriptive and insidiously normative. And it compares information with other vending machines. If you’re a bit abnormal – a man who likes beverages common in the female model, for instance – these systems leave you out of luck. And while they’re commercially viable, there’s no public good associated with this information gathering. We might put this in the same category as interactive billboards with analytics packages, like Quividi’s VidiReports, which detects age, gender, and even gaze. There is no opt-out – you’re a data point even if you turn away from the ad.

How do we think about these systems when power resides in a network? Adam gives the example of an access-control bollard in Barcelona, a metal post that rises out of the ground to block access to a street unless you present an RFID tag that gives you permission to pass. The bollard relies on an embedded sensor grid, an RFID system, signage, and traffic law all interacting together. It’s a complex, networked system that we largely interact with through that one post. These systems are even harder to grasp when they exist solely in code.

There’s a class of public objects that we need to define and have a conversation about. Adam proposes that it includes any discrete object in the common spatial domain that is intended for general use, located on a public right of way, or de facto shared with the public. When we build these systems, Adam says, we should design so that the data is open and available. That means offering an API, and making data accessible in a way that’s nonrivalrous and nonexcludable.
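As a concrete (and entirely hypothetical) sketch of what “offering an API” could look like for a public object, here is a minimal read-only endpoint for something like that crosswalk sensor, written in Python with Flask; the endpoint names, fields, and coordinates are my assumptions, not part of any real deployment:

```python
from flask import Flask, jsonify

# Hypothetical sketch: an open, read-only API for a public object.
app = Flask(__name__)

# In a real deployment this would come from the sensor; here it's a stub.
CURRENT_STATE = {
    "object_id": "crosswalk-sensor-001",
    "location": {"lat": 60.1699, "lon": 24.9384},  # placeholder coordinates
    "pedestrian_detected": False,
    "beacon_active": False,
}

@app.route("/api/v1/state")
def state():
    # Nonrivalrous and nonexcludable: anyone may read, and nothing
    # in the payload identifies an individual.
    return jsonify(CURRENT_STATE)

@app.route("/api/v1/describe")
def describe():
    # Discoverability: the object declares what it senses and retains.
    return jsonify({
        "senses": ["pedestrian presence (not identity)"],
        "stores": "nothing; readings are ephemeral",
        "operator": "municipal traffic authority (placeholder)",
    })

if __name__ == "__main__":
    app.run(port=8080)
```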

An open city necessarily has a more open attack surface: it’s more exposed to griefing and hacking. We need a great deal of affirmative value to run this risk. And we need to develop protocols and procedures to establish precedence and deconfliction around these objects. We’re roughly a century into the motor car in cities and we still don’t handle cars well, never mind these public objects.

Adam advocates a move against the capture of public space by private interest and towards a fabric of freely discoverable, addressable, queryable and scriptable resources. We need to head towards a place where “right to the city” is underwritten by the technology of the space.


Jeffrey Huang and the hyperpublic campus

Jeffrey Huang of the Berkman Center and the EPFL Media x Design Laboratory has been involved with the design of a “hyperpublic” campus in the desert of Ras Al Khaimah, one of the seven emirates of the UAE. The sheikh of the state has agreed to fund the joint development of the campus with Huang’s institution in Switzerland, and his design students have been focused on building a university campus that’s deeply public, in both physical and architectural terms.

One of the major constraints on the design is lowering water and energy usage. The goal is to use data to make the buildings make environmental sense. They’ve mapped the building site and located natural low points where water accumulates, and the design uses these points as “micro-oases”. The building itself is organized as large, open spaces around these points, an echo of the EPFL learning center in Lausanne, Switzerland.

Within the building, a network of sensors can greet people by name and offer them personal services. You can interact with people through “data shadows”: projections that physically track people through the building, casting a shadow on the wall that shows someone’s name, identity and interests.

He acknowledges the dangers of this system, making reference to Mark Shepard’s Sentient City Survival Kit and an umbrella whose visual pattern scrubs your data from surveillance. But he notes that there’s less need to design the private if hyperpublicness is adequately designed. We should build systems where everyone and no one owns the data, which are fully transparent.


Betsy Masiello and the practitioner’s perspective

Betsy Masiello from Google works on public policy issues and offers us a practitioner’s perspective on the topic of the hyperpublic. She tells us she originally misread the title of our session – “The Risks and Beauty of the Hyper-public Life” – skipping over the risk part. She worried we might be celebrating a “Paris Hilton-like existence of life streaming,” making your identifiable behavior available to anyone who chooses to watch.

There’s a better way of thinking about data-driven lives. Systems like Google Flu Trends use lots of discrete points of information to make predictions about health issues – this becomes quite important when it helps us target outbreaks of diseases like dengue fever. Unlike the pure performance of a public life, we get a public good from big-data analysis.
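A minimal sketch of the idea behind that kind of system (my illustration with synthetic data, not Google’s method): fit a simple linear model from aggregate search-query frequency to observed flu incidence, then “nowcast” incidence from query volume where surveillance data lags.

```python
import numpy as np

# Synthetic illustration of aggregate predictive analytics.
# All data and coefficients are invented.
rng = np.random.default_rng(0)

weeks = 52
true_incidence = 5 + 3 * np.sin(np.linspace(0, 2 * np.pi, weeks))  # % of doctor visits
query_rate = 0.8 * true_incidence + rng.normal(0, 0.5, weeks)      # flu queries per 1k searches

# Fit incidence ~ a * query_rate + b on the first 40 weeks...
a, b = np.polyfit(query_rate[:40], true_incidence[:40], 1)

# ...then predict the remaining weeks from query volume alone.
predicted = a * query_rate[40:] + b
print(np.round(predicted, 1))        # nowcast from queries
print(np.round(true_incidence[40:], 1))  # what surveillance later reports
```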

She offers a frame for analysis: systems doing predictive analytics based on your own behavior, which use your data and make it clear how it’s used, versus systems that predict based on other people’s behavior, like Google search, Flu Trends, and perhaps the soda machine Adam described. Both kinds can be very valuable. But the risk is the collapse of contexts that happens in a hyperpublic life – the possibility that data can be reidentified and attached to your identity.

She recalls Jonathan Franzen’s 1998 essay “Imperial Bedroom”, about the Monica Lewinsky scandal. Franzen suggests that without shame, there’s no distinction between public and private. The more identifiable you are, the more likely you are to feel that shame.

The current challenge we face is constructing and managing multiple identities. Ideally, we’d have ways to manage an identity that includes a form of anonymity. It’s becoming trivial to reidentify people within sets of data. We may need policy interventions that put requirements on data holders, punishing those who release information that allows people to be reidentified.
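To make “trivial to reidentify” concrete, here is a toy sketch (synthetic data, my illustration) of the classic linkage attack Latanya Sweeney demonstrated: joining a “de-identified” medical dataset to a public voter roll on ZIP code, birth date, and sex.

```python
import pandas as pd

# Toy linkage attack (all data invented). A "de-identified" dataset that
# keeps quasi-identifiers can be joined to a public record carrying names.
medical = pd.DataFrame([
    {"zip": "02138", "birth_date": "1954-07-31", "sex": "F", "diagnosis": "flu"},
    {"zip": "02139", "birth_date": "1960-01-02", "sex": "M", "diagnosis": "asthma"},
])

voter_roll = pd.DataFrame([
    {"name": "Alice Example", "zip": "02138", "birth_date": "1954-07-31", "sex": "F"},
    {"name": "Bob Example",   "zip": "02139", "birth_date": "1960-01-02", "sex": "M"},
])

# The join reattaches identities to the "anonymous" records.
reidentified = medical.merge(voter_roll, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```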


There’s an interesting argument around privacy and transparency. Adam offers his frustration that Amazon keeps recommending Harry Potter to him despite having 15 years of his purchasing data, none of which should indicate a desire to read fantasy. Jeff Jarvis, moderating, sees this as a problem of too little data, not too much. He criticizes Adam for asking for too much privacy and tells us he doesn’t want a world in which we can’t customize, where we’re forced away from targeted data when it’s useful.


Latanya Sweeney and rethinking transparency

Latanya Sweeney urges us to rethink the challenges of privacy. She’s worked in the space for ten years and tells us that thinking about privacy in terms of the design of public spaces is a helpful conceptual shift. We tend to look at the digital world in terms of physical spaces. In digital spaces, though, we can often look at someone from different perspectives in parallel spaces, and learn things that might be considered “private” – hidden behind some sort of wall.

She prefers to talk about semi-public and semi-private spaces, and to consider the tension between privacy and utility. It’s not one or the other, but about finding the sweet spot between the two. She’s rethinking privacy particularly around big data: pharmacogenomics, computational social science, national health databases. The movement towards the analysis of huge data sets forces us to rethink within legacy environments. How do we de-identify data? What do informed consent and notice mean in these spaces? And we’re rethinking at architectural levels, too – moving towards a realm of open consent and privacy-protecting marketplaces.
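On the de-identification question, here is a minimal sketch of the k-anonymity idea Sweeney introduced in her earlier work (synthetic data, my illustration): a dataset is k-anonymous if every combination of quasi-identifier values matches at least k records.

```python
import pandas as pd

# Toy k-anonymity check (invented data); quasi-identifiers chosen for illustration.
records = pd.DataFrame([
    {"zip": "021**", "age_band": "50-59", "sex": "F", "diagnosis": "flu"},
    {"zip": "021**", "age_band": "50-59", "sex": "F", "diagnosis": "asthma"},
    {"zip": "021**", "age_band": "60-69", "sex": "M", "diagnosis": "flu"},
])

quasi_identifiers = ["zip", "age_band", "sex"]

# k is the size of the smallest group sharing the same quasi-identifier values.
# Here k = 1: the lone 60-69 male is unique, and thus re-identifiable.
k = records.groupby(quasi_identifiers).size().min()
print(f"dataset is {k}-anonymous")
```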

Open consent has been popularized by George Church at Harvard Medical School. Rather than asking for consent or making promises or guarantees, he gives you a contract where you sign away liability, because considering future risks is simply too hard. It sounds kooky, but a thousand people have signed up. Another model is the trade secret model: what if I treat your genomic data as a trade secret? As long as I keep it private, you’re exempt from liability – release it and all bets are off. We might also think of data-sharing marketplaces where we insulate participants from harm and compensate them when it occurs.

We need to think through these components:
- Data subjects – we need to think through the possibility of economic harm to these actors, in part because humans tend to discount risks around privacy.
- Technology developers – some of these developers are her students, and she urges them to think about the power they exert over privacy through technology decisions. Video recorders capture sound as well as video, and the sound is hard to mute; as a result, videotaping often runs up against wiretapping laws… and this could have been moderated with a $0.01 design decision.
- Policymakers
- Belief systems
- Benefit structures
- Legacy environments

Zeynep Tufekci asks Sweeney to talk through the question of belief systems and false tradeoffs. She suggests these debates rest on the false belief that you must maximize either privacy or utility – the key is the relationship between the two.

