… My heart’s in Accra Ethan Zuckerman’s online home, since 2003

November 7, 2011

Mapping Media Ecosystems at Center for Civic Media

Filed under: Berkman,Blogs and bloggers,CFCM,Media,Media Lab — Ethan @ 7:54 pm

This summer, Sasha, Lorrie and I started brainstorming the sorts of events we wanted to host at the Center for Civic Media this fall. The first I put on the calendar was a session on “mapping civic media”, a chance to catch up with some of my favorite people who are working to study, understand and visualize how ideas move through the complicated ecosystem of professional and participatory media.

To represent the research being done in the space, we invited Hal Roberts, my collaborator on Media Cloud (and on a wide range of other research), Erhardt Graeff from the Web Ecology project, and Gilad Lotan, VP of R&D for internet analytics firm BetaWorks. On Wednesday night, I asked them to share some of the recent work they’ve been doing, understanding the structure of the US and Russian blogosphere, analyzing the influence networks in Twitter during the early Arab Spring events and understanding the social and political dynamics of hashtags. They didn’t disappoint, and I suspect our video of the session (which we’ll post soon) will be one of the more popular pieces of media we put together this fall. In the meantime, here are my notes, constrained by the fact that I was moderating the panel and so couldn’t lean back and enjoy the presentations the way I otherwise might have.

Hal Roberts is a fellow at the Berkman Center for Internet and Society, where he’s produced great swaths of research on internet filtering, surveillance, threats to freedom of speech, and the basic architecture of the internet. (That he’s written some of these papers with me reflects more on his generosity than on my wisdom.) He’s the lead architect of Media Cloud, the system we’re building at the Berkman Center and at Center for Civic Media to “ask and answer quantitative questions about the mediasphere in more systematic ways.” As Hal explains, media researchers “have been writing one-off scripts and systems to mine data in haphazard ways.” Media Cloud is an attempt to streamline that process, creating a collection of 30,000 blogs and mainstream media sources in English and Russian. “Our goal is to get as much media as possible, so we can ask our own questions and also let others ask questions of our duct tape and bubblegum system.”

Hal’s map of clusters in popular US blogs. An interactive version of this map is available here.

Much of Hal’s work has focused on using the content of media – rather than the structure of its hyperlinks – to map and cluster the mediasphere. He shows us a map of US blogs that cluster into three main areas – news and political blogs, technology blogs and what he calls “the love cluster”. This last cluster is so named because it’s filled with people talking about what they love. Subclusters include knitters, quilters, fans of recipes and photography. The technology cluster breaks down into a Google camp, an iPhone camp and a camp discussing Android Apps. Hal’s visualization shows the words most used in the sources within a cluster, which helps us understand what these clusters are talking about. The Google cluster features words like “SEO, webmaster, facebook, chrome” and others, suggesting the cluster is substantively about Google and its technology projects.
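A minimal sketch of the content-based approach, with toy word lists and hypothetical blog names standing in for real sources: each source becomes a term-frequency vector, and sources whose vocabularies overlap sit close together, whether or not they hyperlink to each other.

```python
from collections import Counter
import math

def term_vector(text):
    """Bag-of-words term-frequency vector for a source's text."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(count * v.get(term, 0) for term, count in u.items())
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical snippets standing in for whole blogs.
sources = {
    "gadget_blog": "android apps chrome webmaster seo facebook",
    "phone_blog":  "iphone apps android chrome",
    "craft_blog":  "knitting quilting recipes photography love",
    "hobby_blog":  "recipes photography knitting",
}

vecs = {name: term_vector(text) for name, text in sources.items()}

# Content-based clustering links sources by shared vocabulary,
# regardless of link structure: the two "love cluster" blogs score
# high with each other and zero with the technology blogs.
sim = cosine(vecs["gadget_blog"], vecs["phone_blog"])
```

The real Media Cloud pipeline is far more elaborate, but the core intuition – cluster by what sources say, not whom they link to – is the same.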

While we might expect the politics and news cluster to divide evenly into left and right-wing camps, it doesn’t. Study the link structure of the left and the right, as Glance and Adamic and later Eszter Hargittai have, and it’s clear that like links to like. But Hal’s research shows that the left and right use very similar language and talk about many of the same topics. This is a novel finding: it’s not that the left and right are talking about entirely different topics – instead they’re arguing over a common agenda, an agenda that’s well represented in mainstream media as well, which suggests the existence of subjects neither the right nor the left are talking about online.

Building on this finding, Hal and colleagues at Berkman looked at the Russian media sphere, to see if there was a similar overlap in coverage focus between mainstream media and blogs. “Newspapers and the television are subject to strong state control in Russia – we wanted to see if our analysis confirmed that, and whether the blogosphere was providing an alternative public sphere.”

The technique he and Bruce Etling used is “the polar map”: put the source you believe is most important at the center, and map the other sources at distances that reflect their degree of similarity to that central source. The central dot is a summary of verbiage from Russian government ministry websites. Right next to it is the official government newspaper. TV stations cluster close to the center, while blogs cover a wide array of the space, including the edges of the map.
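One way to sketch the polar-map idea, with invented word summaries standing in for the real ministry text: a source’s radius is its dissimilarity (here, one minus cosine similarity) to the central government vector, so sources echoing official language hug the center and dissimilar sources sit at the edge.

```python
from collections import Counter
import math

def vec(text):
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(count * v.get(term, 0) for term, count in u.items())
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical word summaries; the real study summarized actual
# Russian government ministry websites.
center = vec("stability economy ministry development official programs")
sources = {
    "official_paper":  "ministry official programs economy stability",
    "tv_station":      "economy development official news stability",
    "opposition_blog": "protest corruption election rights freedom",
}

# Polar radius: 0 means identical language to the center, 1 means
# no shared vocabulary at all.
radius = {name: 1.0 - cosine(vec(text), center)
          for name, text in sources.items()}
```

On this toy data the opposition blog, sharing no vocabulary with the ministries, lands at the far edge while the official paper sits almost on top of the center – the pattern Hal describes in the Russian map.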

It’s possible that blogs are showing dissimilarities to the Kremlin agenda because they’re talking about knitting, not about politics. So a further analysis (the one mapped above) explicitly identified democratic opposition and ethno-nationalist blogs and looked at their placement on the map. There’s strong evidence of political conversations far from the government talking points in both the democratic opposition and in the far right nationalist blogosphere.

What’s particularly interesting about this finding is that we don’t see the same pattern in the US blogosphere. Make a polar map with the White House, or a similar proxy for a US government news agenda, at the center, and you’ll see a very different pattern. Some right wing American blogs flock quite closely to the White House talking points – mostly to critique them – while the left blogs and mainstream media generally don’t. However, when Hal and crew did an analysis of stories about Egypt, they saw a very different pattern than in looking at all stories published in these sources. They saw a tight cluster of US mainstream media and blogs – left and right – around the White House. The government, the media and bloggers left and right talked about Egypt using very similar language. In the Russian mediasphere, the pattern was utterly different – the democratic opposition was far from the Kremlin agenda, using the Egyptian protests to talk about potential revolution in Russia.

The ultimate goal of Media Cloud, Hal explains, is to both produce analysis like this, and to make it possible for other researchers to conduct this sort of analysis, without a first step of collecting months or years of data.

Erhardt Graeff is a good example of the sort of researcher Media Cloud would like to serve. He’s cofounder of the Web Ecology Project, which he describes as “a ragtag group of casual researchers that has now turned in a peer-reviewed publication“. That publication is the result of mapping part of the Twitter ecosystem during the Tunisian and Egyptian revolutions, and attempting to tackle some of the hard problems of mapping media ecosystems in the process.

The Web Ecology Project began life researching the Iranian elections and resulting protests, focusing on the #iranelection hashtag. With a simple manifesto around “reimagining internet studies”, the project tries to understand the “nature and behavior of actors” in media systems. That means considering not just the top users, or even just the registered users of a system like Twitter, but the audience for the media they create. “Each individual user on Twitter has their personal media ecosystem” of people they follow, influence, are followed by and influenced by.

This sort of research rapidly bumps into three hard problems, Erhardt explains:

– Did someone read a piece of information that was published? Or as he puts it, “Did the State Department actually read our report about #IranElection?” It’s very hard to tell. “We end up using proxies – you followed a link, but that doesn’t mean you read it.”

– Which piece of media influenced someone to access other media? “Which tweet convinced me to follow the new Maru video, Erhardt’s or MC Hammer’s?”

– How does the media ecosystem change day to day? Or, referencing a Web Ecology paper, “How many genitalia were on ChatRoulette today?” The answer can vary sharply day to day, raising tough problems around generating a usable sample.

The paper Erhardt published with Gilad and other Web Ecology Project members looks at the Twitter ecosystem around the protest movements in Tunisia and Egypt. By quantitatively searching for information flows, and qualitatively classifying different types of actors in that ecosystem, the research tries to untangle the puzzle of how (some) individuals used (one type of) social media in the context of a major protest.

To study the space, the team downloaded hundreds of thousands of tweets, representing roughly 40,000 users talking about Tunisia and 62,000 talking about Egypt. They used a “shingling” method of comparison to determine who was retweeting whom and sought out the longest retweet chains. They looked at the top 10% of these chains in terms of length to find the “really massive, complex flows” and grabbed a random 1/6th of that sample. That yielded 774 users talking about Tunisia, 888 talking about Egypt… and only 963 unique users, suggesting a large overlap between those two sets.
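The shingling comparison might look roughly like this, with invented tweets: a retweet shares most of its overlapping word n-grams (“shingles”) with the original, so a high Jaccard overlap flags it even when explicit RT metadata is missing.

```python
def shingles(text, k=3):
    """Overlapping word k-grams ("shingles") of a tweet's text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard overlap between two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented tweets for illustration only.
original  = "protesters fill the square demanding change tonight"
retweet   = "RT @someone protesters fill the square demanding change tonight"
unrelated = "great recipe for banana bread this weekend"

# The retweet keeps nearly all of the original's shingles, so its
# overlap score is high; the unrelated tweet shares none.
is_retweet = jaccard(shingles(original), shingles(retweet)) > 0.5
```

Chaining these pairwise matches together is what lets the team recover long retweet chains without relying on Twitter’s own retweet markers.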

Then Erhardt, Gilad and others started manually coding the participants in the chains. Categories included Mainstream Media (@AJEnglish, @nytimes), web news organizations (@HuffingtonPost), non-media organizations (@Wikileaks, @Vodaphone), bloggers, activists, digerati, political actors, celebrities, researchers, bots… and a too-broad unclassified category of “others”. This wasn’t an easy process – Erhardt describes a system in which researchers compared their codings to ensure a level of intercoder reliability, then had broader discussions on harder and harder edge cases. They used a leaderboard to track how many cases they’d each coded, and goaded those slow to participate into action.

The actors they classified are a very influential set of Twitter users. The average organization in their set has 4004 followers, the average individual 2340 (which is WAY more than the average user of the system). To examine influence with more subtlety than simply counting followers, Erhardt and his colleagues use retweets per tweet as an influence metric. What they conclude, in part, is that “mainstream media is a hit machine, as are digerati – what they have to say tends to be highly amplified.”
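The retweets-per-tweet metric itself is simple to express; this sketch uses invented counts, not the paper’s data:

```python
# Hypothetical tweet IDs and retweet counts, for illustration only.
retweet_counts = {"t1": 50, "t2": 10, "t3": 0, "t4": 300}
tweets_by_user = {
    "msm_outlet": ["t1", "t4"],   # the "hit machine": heavily amplified
    "blogger":    ["t2", "t3"],
}

def influence(user):
    """Retweets per tweet: a subtler influence measure than raw
    follower count, since it reflects how amplified a user's
    average tweet actually is."""
    tweets = tweets_by_user[user]
    return sum(retweet_counts[t] for t in tweets) / len(tweets)
```

On these toy numbers the mainstream outlet averages 175 retweets per tweet versus the blogger’s 5, the kind of gap behind the “hit machine” conclusion.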

The bulk of the paper traces information flows started by specific people. In the case of Egypt, lots of information flows start from journalists, bloggers and activists, with bots as a lesser, but important, influence. In Tunisia, there were fewer flows started by journalists, more by bots and bloggers, and way fewer from activists. This may reflect the fact that the Tunisian story caught many journalists and activists by surprise – they were late to the story, and less significant as information sources than the bloggers who cover that space over time. By the time Egypt became a story, journalists had realized its significance and were on the ground, providing original content on Twitter, as well as to their papers.

One of the most interesting aspects of the paper is an analysis of who retweets whom. It’s not surprising to hear that like retweets like – journalists retweet journalists, while bloggers retweet bloggers. Bloggers were much more likely to retweet journalists on the topic of Egypt than on Tunisia, possibly because MSM coverage of Egypt was so much more thorough than the superficial coverage of Tunisia.

While Gilad Lotan worked with Erhardt on the Tunisia and Egypt paper, his comments at Civic Media focused on the larger space of data analysis. “I work primarily on data – heaps and mounds of data,” he explains, for two different masters. Roughly half his work is for clients, media outlets who want to understand how to interact and engage with their audiences. The other half focuses on developing the math and algorithms to understand the social media space.

This work is increasingly important because “attention is the bottleneck in a world where the threshold to publishing is near zero.” If you want to be a successful brand or a viable social movement, understanding how people manage their attention is key: “It’s impossible to simply demand attention – you have to understand the dynamics of attention in the face of this bottleneck.”

Gilad references Alex Dragulescu’s work on digital portraits, pictures of people composed of the words they most tweet or share on social media. He’s interested not just in the individuals, but in the networks of people, showing us a visualization of tweets around Occupy Wall Street. Different networks take form in the space of minutes or hours as new news breaks – the network around a threatened shutdown of Zuccotti Park for a cleanup is utterly different than the network in July, when Adbusters was the leading actor in the space.

Lotan’s visualizations of Twitter conversations about Occupy in July and October 2011

Images like this, Lotan suggests, “are like images of earth from the moon. We knew what earth looked like, but we never saw it. We knew we lived in networks, but this is the first time we can envision it and see how it plays out.”

When we analyze huge data sets, we can start approaching answers to very difficult questions, like:
– What’s the audience of the New York Times versus Fox News?
– What type of content gains wider audiences through social media?
– What topics do certain outlets cover? What are their strengths, weaknesses and biases?
– How do audiences differ between different publications? How are they similar?
– How fast does news spread, and how does it break?

Much of media and communications research addresses these questions, though rarely directly – as Erhardt noted, we generally address these questions via proxies. But Lotan tells us, we can now ask and answer questions like, “How many Twitter users follow Justin Bieber and The Economist?” The answer, to a high degree of precision, is 46,000. It’s just shy of the number who follow The Economist and the New York Times, 54,000.
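The Bieber-and-Economist style of question reduces to a set intersection over follower lists, sketched here with hypothetical user IDs (the real answer requires firehose-scale data):

```python
# Hypothetical follower sets; the real computation runs over the
# full Twitter follower graph.
followers = {
    "JustinBieber": {"u1", "u2", "u3", "u5"},
    "TheEconomist": {"u2", "u3", "u4"},
    "nytimes":      {"u2", "u4", "u6"},
}

# Audience overlap between two accounts is simply the size of the
# intersection of their follower sets.
both = followers["JustinBieber"] & followers["TheEconomist"]
overlap_count = len(both)
```

The method is trivial; what made such questions newly answerable in 2011 was access to complete follower data, not algorithmic novelty.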

Lotan is able to research answers like this because his lab has access to the Twitter “firehose” (the stream of all public data posted to Twitter, moment to moment) and to the bit.ly firehose. This second information source allows Lotan to study what people are clicking on, not just what media they’re exposed to. He offers a LOLcat, where the feline in question is dressed in a chicken costume. “We can see the kitty in you, and the chicken you’re hiding behind.” What people share and what they click is very different, and Lotan is able to analyze both.

This data allowed Lotan to compare what audiences for four major news outlets were interested in, by measuring their clickstreams. Al Jazeera and The Economist, he tells us, are pretty much what you’d think. But Fox News watchers are fascinated by crime, murders, kidnappings and other dark news. This sort of insight may help networks understand and optimize for their audiences. Al Jazeera’s audience, he tells us, is very engaged, tweeting and sharing stories, while Fox’s audience reads a lot and shares very little.

Some of Lotan’s recent research is about algorithmic curation, specifically Twitter’s trending topics. Many observers of the Occupy movement have posited that Twitter is censoring tweets featuring the #occupywallstreet hashtag. Lotan acknowledges that the tag has been active, but suggests reasons why it’s never trended globally. Interest in the tag has grown steadily, and has a regular heartbeat, connected to who’s active on the east coast of the US. The tag has spiked at times, but remains invisible in part due to bad timing – a spike on October 1st was tiny in comparison to “#WhatYouShouldKnowAboutMe”, trending at the same time.

At this point, Lotan believes he’s partially reverse engineered the Trending Topics algorithm. The algorithm is very sensitive to the new, not to the slowly building. This raises the question: what does it mean to “get the math right”? Lotan observes, “Twitter doesn’t want to be a media outlet, but they made an algorithmic choice that makes them an editor.” He’s quick to point out that algorithmic curation is often very helpful – the Twitter algorithm is quite good at preventing spam attacks, which have a different signature than organic trends. So we see organic, fast-moving trends, even when they’re quite offensive. He points to #blamethemuslims, which started when a Muslim woman in the UK snarkily observed that Muslims would be blamed for the Norway terror attacks. That tweet died out quickly, but was revived by Americans who used the tag unironically, suggesting that we blame Muslims for lots of different things – that small bump, then massive spike is a fairly common organic pattern… and very different from the spam patterns he’s seen on Twitter.
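A toy illustration of why a novelty-sensitive algorithm can miss a slow build (this is a guess at the general shape, not Twitter’s actual Trending Topics math): score a tag by its recent volume relative to its own baseline. A steadily growing tag never spikes against itself, while a sudden burst does.

```python
def trend_score(counts, window=3):
    """Ratio of the latest window's volume to the tag's own prior
    baseline. A term "trends" when it spikes relative to its recent
    history, so a steadily growing tag like #occupywallstreet can
    stay below threshold indefinitely."""
    recent = sum(counts[-window:])
    baseline = sum(counts[:-window]) / max(1, len(counts) - window)
    return recent / max(1.0, baseline * window)

# Invented hourly tweet counts for two hashtags.
steady_build = [100, 110, 120, 130, 140, 150]   # growing, never spikes
sudden_spike = [5, 5, 5, 5, 200, 400]           # novelty-driven burst
```

Despite the steady tag having far more total volume, the burst scores dramatically higher on this measure, which matches Lotan’s observation that #occupywallstreet’s steady growth kept it off the global trends list.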

When we analyze networks, Lotan suggests, we encounter a paradox that James Gleick addresses in his recent book on information: just because I’m one hop away from you in a social network doesn’t mean I can send you information and expect you to pay attention. In the real world, people who can bridge between conversations are rare, important and powerful. He closes his talk with the map of a Twitter conversation about an event in Israel where settlers were killed. There’s a large conversation in the Israeli twittersphere, a small conversation in the Palestinian community, and two or three bridge figures attempting to connect the conversations. (One is my wife, @velveteenrabbi.) Studying events like this one may help us, ultimately, determine who’s able to build bridges between these conversations.

I can’t wait for the video for this event to be put online – we’ll get it up as soon as possible and I’ll link to it once we do.

October 18, 2011

Beth Coleman on “Tweeting the Revolution”

Filed under: Berkman — Ethan @ 2:05 pm

Beth Coleman presents some of her recent research on the protests in Tahrir square, and a broader theory of how social networks and activism in the physical world work together, today at the Berkman Center. With her is Mike Ananny, her coauthor and researcher in danah boyd’s lab at Microsoft Research. The presentation, “Tweeting the Revolution”, tries to understand how we read large data sets to understand located action. This is a timely topic because we’re seeing a rise in protest activity that’s been missing from the public sphere for a few decades. Coleman wants to know what we can understand about social media and people’s willingness to take an activist stance. One of the foci of her work is the idea of mediated copresence, which she sees as a major way of understanding the relationship between technology and public action.

Tahrir Square offers an opportunity to think through the relationship between three types of speech:
– Public speech, the broadcast of information to a broad audience
– Civic speech, speech within the networks of your located environment
– Poetic speech, speech about expressing needs and interests

What’s the effect of Twitter, SMS and other technologies in a space like Tahrir? They may be critical in understanding the sustainability of commitments to a movement beyond the initial phase of protest.

In his critiques of online activism in understanding the Arab Spring, Malcolm Gladwell has suggested that activism needs to include bodily presence, risk of harm or arrest, and developed organizational infrastructures. It’s worth asking those questions – does online participation matter? Do we need bodily presence for activism? Coleman and Ananny use the possibility of bodily risk – in this case, the physical presence in Egypt – as a precursor for inclusion in their interview group. She cites Elaine Scarry’s work on body and pain, suggesting that when a body is in pain, there’s a loss of self, a loss of agency, and a loss of language. Pain cannot be articulated, and there’s the failure of “subject as a system”. So physical location in Egypt opens risk of incarceration and torture, and creates a category of potentially affected actor.

There’s lots of analysis of network collective action from at least two points of view: considering social media as an augmentation to traditional organizing tools, and considering network media as a form of command and control. There’s an open space for analysis around strategic and tactical engagement around located network media. We might think of social media as a way of facilitating co-presence, the way of being part of a phenomenon either in physical space or in a complementary virtual space. If we’re continually surrounded by Twitter, Facebook and SMS, which remind us of people’s presence even if we’re not interacting with them, how does this help us understand a move from onlooker to participant in collective action?

To understand copresence, we need to understand quotidian media engagement. 17% of Egyptians were online before the revolution and 72% were on mobile phones. Coleman notes that Kate Crawford, studying non-literate women in India, sees SMS use from people you wouldn’t expect to be able to use SMS. It’s worth being open to the notion that SMS could be a powerful tool for sending the sense of presence to a very large swath of an Egyptian audience. Coleman suggests that we need to engage in careful consideration of the oral and the local to understand the cascade of strong and weak ties and their relationship to collective action.

She and Ananny propose a way of thinking through Egyptian positions towards the Tahrir protests. There were people who were present in Tahrir and those who weren’t. There were people engaged with the protests online and those who weren’t. We can create four categories of engagement by considering those categories in terms of binaries. This separates some figures from the discussion – individuals like Alaa Abdel Fatteh, who was deeply engaged online, but in South Africa for much of the protest. But it’s a useful structure in part because it forces you to consider the bottom quadrant, those who didn’t engage physically or online, and are therefore the hardest to study. Eszter Hargittai’s contribution to the work, Coleman notes, is to urge her to take that quadrant of nonparticipation seriously.

Interviews with participants quickly complicate and stretch the boundaries of these categories. An interview with a 20-something woman, upper middle class, who’s been using Ushahidi to map sexual harassment, shows Coleman that “on/off the square” may be too binary a distinction. In the wake of the media blackout on the 28th, she tells Coleman, she was motivated to go to the square because she didn’t want to be alone, she wanted to find other people, and she felt like the movement was moving from online to offline. But as she headed to the square, she felt a sense of risk and turned around. Her story calls into question whether you needed to be in Tahrir physically to be part of the revolution.

Coleman shows us a graph of Dima Khatib’s Twitter network rendered by Gilad Lotan. Based on the frameworks Coleman is suggesting, can we better understand who connects, who retweets and how information cascades? “How might the data trace of media engagement overlap with the human narrative?”

This matters, ultimately, because it influences how we might develop new tools. This past weekend, Coleman led a workshop with Juliana Rotich of Ushahidi, a platform for crisis mapping and management. “After the crisis, what are the tools for sustaining movements?”

September 12, 2011

A Vast Wasteland, Five Decades Later

Filed under: Berkman,CFCM,ideas — Ethan @ 8:55 pm

Fifty years ago, Newton Minow, the 35 year old FCC chairman, gave a speech that’s still studied today. It’s taught in rhetoric courses, tested on the LSAT reading comprehension test and is still invoked in discussions of how communications technology affects entertainment, news, and democracy. The speech challenged broadcasters to actually watch their programming, and urged them to consider whether they were proud of what they’d see. It read, in part:

“When television is good, nothing — not the theater, not the magazines or newspapers — nothing is better.
But when television is bad, nothing is worse. I invite each of you to sit down in front of your own television set when your station goes on the air and stay there, for a day, without a book, without a magazine, without a newspaper, without a profit and loss sheet or a rating book to distract you. Keep your eyes glued to that set until the station signs off. I can assure you that what you will observe is a vast wasteland.”

Today, Minow’s daughter, Martha Minow, dean of the Harvard Law School, welcomed her father to the stage at her institution as part of an event titled, “News and Entertainment in the Digital Age: A Vast Wasteland Revisited“. Minow (I’ll refer to Newton Minow throughout the rest of this post) starts his talk by noting that we’re a day past the ten year anniversary of 9/11, a time at which there was no YouTube, no Twitter, none of the social media we discuss today to understand the tragic events of the day. If that shift is difficult to comprehend, it’s much harder to understand the landscape of fifty years ago, when phone calls traveled by wire, when there were no computers, one phone company and two and a half television companies. There was no public television or radio. Audiences, Minow reminds us, were passive – they gathered around the single set in the house and watched in silence.

When Minow came to the FCC, it was a group wracked by scandal – previous commissioners had been fired for corruption. Minow’s relationship was a highly personal one with President Kennedy. He recalls a meeting with Kennedy and Commander Alan Shepard, recently returned from the first American voyage into space. Kennedy was enroute to a speech at the National Association of Broadcasters, and asked Minow what he thought Kennedy should say to the broadcasters. He told him, “Mr. President, tell them that this is the difference between a free and a closed society: when the Soviets send people into space, we don’t know whether they succeed or fail. In the US, we let people see and hear what’s going on.”

Kennedy gave a brief speech to the NAB which used Minow’s talking points and got a standing ovation. Minow’s infamous speech didn’t get quite as warm a reception. Minow reminds us that Sherwood Schwartz, producer of the television show Gilligan’s Island, honored him by naming the sinking ship on his show the S.S. Minnow.

Why give such an incendiary speech? Television was the dominant medium of the era. The televised Kennedy/Nixon debate had decided the election. But there was little discussion about public interest and public responsibility on the part of broadcasters. Minow’s contribution as an FCC chairman was to try to expand choice – licensing the UHF spectrum, early cable TV systems and satellite television. When Kennedy invited him to visit the space program, Minow observed that satellites were more important than sending a man into space, because they permitted sending ideas into space, and ideas last longer than people. Minow notes that there’s a strong possibility that the recent events of the Arab Spring were a product, in part, of satellite communication.

Both Minow and Kennedy had lived in cities where there was a strong public television station. They both assumed that public television would spread throughout the country, but there was no public TV in New York, LA or Washington DC. When Minow left the FCC, he went on to serve on the board of governors of the Public Broadcasting Service, and on the Carnegie Foundation, one of the major funders of public broadcasting.

As someone who’s been concerned with public broadcasting for his entire career, Minow tells us that he’s deeply disappointed by the relationship between money and politics. “Politicians need massive amounts of money to buy radio and television ads. They raise money from the public to gain access to something the public owns: the airwaves.” This is an absurdity – the US is one of the few countries in the world that doesn’t provide access to the airwaves to candidates. In the UK and Japan, it’s not possible to buy access to the airwaves. Much of the cost of American campaigning comes from the media.

Minow ends his remarks with praise for his host: “I wish the Berkman Center had existed 50 years ago,” because the issue of the responsibilities of broadcasters was neglected 50 years ago, and is still neglected today.

Anne Marie Lipinski, the new curator of the Nieman Foundation, is one of the three designated “respondents” to Minow’s remarks. She suggests that the most inspiring aspect of Minow’s remarks is the idea that we can do better – as individuals, as broadcasters. One of the challenges in helping us become better is defining the public interest. “I don’t think we have a shared ethos around the public interest in contemporary society.”

Journalist Jonathan Alter reminds us that Minow is also the father of the televised presidential debate. While we still see this important form of civic programming, most of what passes for civic discourse online is extremely poor. “The news business is the only business recognized by the Constitution and it’s largely dysfunctional.” Talk is cheap and reporting expensive, he argues – “the vast wasteland has a Tower of Babel on top of it.” Much of the news we get is “people like me babbling on MSNBC or Fox”, rather than the sort of expensive newsgathering required to report facts on the ground.

Yochai Benkler calls on a section of Minow’s speech where he challenges broadcasters to challenge their sponsors: “Tell your sponsors to be less concerned with cost per thousands and more concerned with understanding per millions.” This section points to the core tension between an American broadcast model that is anchored in markets, and the challenges of public responsibility. Public funding for media and nonprofit models tend to be foreign to American audiences. Yet there’s evidence that networks like the BBC produce some of the highest quality news content available.

Benkler provokes Alter by suggesting that there’s the possibility of producing key and investigative reporting via radically distributed methods. He suggests that the Neda Aga Soltan video, which Alter alluded to in his remarks, was an example of the power of citizen production. He (generously) references a talk I gave the week before about the complex interaction of Tunisians on the ground, activists in the diaspora and Al Jazeera – a state-funded media network – to amplify voices in Sidi Bouzid leading to the Tunisian revolution. “Because we all now carry sound, video and text generating and disseminating tools – phones – we’ve got an unprecedented opportunity to close the gap between what costs a great deal of money and what we all need as citizens.”

Lipinski asks whether anyone is prepared to pay for this sort of crowd-sourced media, asking if any of us pay people whose blogs and twitter feeds we read. Minow suggests that this may be the wrong place to ask for support. He notes that the Japanese closely studied media models around the world before starting NHK and based their model on the BBC, including charging a license fee for television sets. “Other countries started building public media before they built commercial. We tacked on public broadcasting after the fact, without a way to pay for it.” This leaves us with a difficult choice: “Do you want the market to decide and provide everything? And if the market is not going to provide everything, do you want to build an alternative system?”

Alter suggests we don’t hold our breath waiting for the rise of a new public media system in the US. What’s happening instead is the fragmentation of what media exists. He points to the evening entertainment market, where big shows like Leno’s and Letterman’s are ceding ground to the Colbert Report. “It’s a move towards greater choice.” But the downside of this move is that we may be seeing a divide between elites who have access to a vast selection of media, and masses who get little critical media. “The political conversation involves a maximum of 10 to 15 million people,” he asserts, “but 130 million vote in Presidential elections.”

Ellen Goodman offers a nutritional analogy. “People don’t want to eat their broccoli, but they still might vote.” She’s suspicious of the idea that public media will produce the broccoli and be able to get people to eat it, because “public broadcasting in the US is weak and designed to be weak.” Proposals that are unrealistic but still worth making for the production and marketing of broccoli might not be directed to our existing public media institutions, she argues, because these institutions may not be capable of innovation. “It’s reasonable to ask these actors to solve our problems, but they are not going to solve them.”

Virginia Heffernan, cultural critic for the New York Times, suggests we consider not just news. When we look at television entertainment, especially HBO and Bravo, we’re no longer facing a vast wasteland. Minow invites us to imagine the forces of art, daring and imagination unleashed on the television screen, and the artistic explosion we’ve seen the last few years suggests that “television both as an art form and a public health hazard makes these things possible.”

She offers a caution to Alter’s skepticism about digital media and direct sources – we quickly found dangerous media online, like Loose Change, a video that offered the conspiracy theory that 9/11 was an inside job. But we also were able to find video of Saddam Hussein’s execution, shot and distributed by an American serviceman. “Our million dollar Baghdad bureau didn’t get the execution story right” because they were working from eyewitness testimony from individuals in the room, and that testimony wasn’t correct. The actual account of Hussein’s final words came from the video, not the reporting.

What’s key in this world of internet video, she offers, is contextualization. As the New York Times invests in international reporting, they need to make a major investment in contextualizing these images and videos. Asked by Jonathan Zittrain, our moderator, how we might take on Minow’s challenge to “do better”, Heffernan asks us to “register as a Wikipedia editor today. Twice, if you’re a woman.”

Zittrain observes that the phenomenon of Doris Kearns Goodwin, sitting next to Heffernan, registering on Wikipedia could lead to some interesting edit battles over Lincoln’s biography. Asked whether she will register as a Wikipedian, Goodwin offers, “I didn’t know I could!” (Note to Jimmy Wales – we still have work to do.)

With three former FCC chairs in the room, Susan Crawford – introduced as a “shadow FCC commissioner” in the Obama administration – is offered the first FCC response to Minow’s provocations with a line about “beauty before age”. She responds to Reed Hundt with a quip about pearls before swine(!) and draws a parallel between Minow’s speech, delivered in the service of a “handsome young president, with a beautiful family”, and the present day, when such a speech would be unthinkable. For one thing, Minow would have been speaking to the wrong people. Distribution networks are now so much more powerful than content providers, and players like Comcast now control both programming and internet access. “There’s only four actors in America who have any power” around these issues of media content, “and they really believe that personal preferences equal good programming.”

Kevin Martin, FCC chair under George W. Bush, focused his observations on a topic dear to my heart – the state of international media. He observed that business network Bloomberg now devotes significantly more resources to overseas coverage than the New York Times. (For the record, so does the Wall Street Journal – business papers cover international news more thoroughly than “general interest” sources…) Despite those coverage resources, some Bloomberg channels have had difficulty gaining carriage on some cable systems, where they are perceived as specialist content.

Reed Hundt, who chaired the FCC under President Clinton, calls his moves to force broadcasters to show three hours a week of children’s programming his way of honoring Minow’s legacy. “Mandating children’s programming turns out to be a violation of the first amendment, to my amazement.” Like Minow, Hundt was “honored” by broadcasters’ response to his work – the WB network’s show Animaniacs introduced a clown named Reed Blunt… and offered the show as evidence of their compliance with creating children’s programming.

Minow points out that lawyers end up as chairmen of the FCC because “it’s the only government agency that’s regulating a medium of communication.” Lawyers who understand the first amendment understand how treacherous it is and how complicated regulation in the space can be.

Asked to comment on Minow’s legacy, Nicholas Negroponte offers the observation that photography is a medium where the artists have been the technical innovators, while in broadcasting the engineers worked out the tech and the artists supplied the creativity. What the Media Lab tries to do, he tells us, is do for computer media what photographers have done – advance the field by advancing both the tools and the creativity.

Zittrain invites Minow to comment on the rise of Twitter: “threat or menace?” Minow demurs, arguing “the more communication the better.” And he thanks us for considering these issues of public interest fifty years after he raised their importance.

Terry Fisher offers a summation that introduces several new, important ideas. New technologies, and some of the practices that surround them (though are not dictated by them) are eroding some existing, long-standing dichotomies: public/private, professional/amateur, speaker/audience, news/entertainment, university/society. There are huge benefits and costs to this corrosion. We see the collapse of oligarchies, the addressing of systematic biases, the democratization of processes. But we also have fragmentation, loss of a coherent single culture, the rise of a tower of pundit babel, and the superficiality of much programming. This move, he argues, is impossible to stop. Instead, we need to think through the new opportunities the shift presents: the ability to change who contributes to this process. And we need to figure out how to ameliorate the costs we suffer. That means creating distributed models for sifting, curating, organizing, like Wikipedia, Slashdot and academic projects like Jeffrey Schnapp’s Digital Humanities project. In this new world, the FCC may not be the prime mover – the real power is located in intermediaries like Google, and if we were to push for the public interest, that’s where we’d apply leverage.

June 10, 2011

Martin Nowak and the mathematics of cooperation

Filed under: Berkman,hyperpublic — Ethan @ 4:48 pm

Mathematical biologist Martin Nowak talks to us about the evolution of cooperation. Cooperation is a puzzle for biologists because it doesn’t make obvious evolutionary sense. In cooperation, the donor pays a cost and the recipient gets a benefit, as measured in terms of reproductive success. That reproduction can be either cultural or biological, and the challenge of explaining cooperation remains.

It may be simplest to consider this in mathematical terms. In game theory, the prisoner’s dilemma makes the problem clear to us. Given a set of outcomes where we’re individually better off defecting, it’s incredibly hard to understand how we get to a cooperative state, where we both benefit more. Biologists see the same problem, even removing rationality from the equation. If you let different populations compete, the defectors win out against the cooperators and eventually extinguish them. Again, it’s hard to understand why people cooperate.
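The trap is easy to see with concrete numbers. Here’s a minimal sketch in Python, using standard illustrative payoffs (the specific values are mine, not from the talk):

```python
# One-shot prisoner's dilemma with standard illustrative payoffs:
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, exploited cooperator -> 0.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    """Whatever the opponent does, defection pays more."""
    return max("CD", key=lambda my: PAYOFF[(my, opponent_move)][0])

assert best_response("C") == "D"   # exploit a cooperator
assert best_response("D") == "D"   # protect yourself against a defector
# ...yet mutual defection (1, 1) leaves both worse off than mutual cooperation (3, 3).
```

Defection strictly dominates, so rational players land at the outcome both would prefer to avoid – that’s the puzzle.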

There are five major mechanisms that biologists have proposed to explain the evolution of cooperation:
– kin selection
– direct reciprocity
– indirect reciprocity
– spatial selection
– group selection

Nowak walks us through the middle three in some detail.

In direct reciprocity, I help you and you help me. This is what we see in the repeated prisoner’s dilemma. It’s no longer best to defect. As originally discovered by Robert Axelrod in a computerized tournament, the three-line program “Tit for Tat” wins:

At first, cooperate.
If you cooperate, continue to cooperate.
If you defect, defect.

While it’s a powerful strategy, it’s very unforgiving. If there’s a mistake, there’s an endless cycle of retaliation. Nowak wondered what would happen if natural selection designed a strategy. He created an environment to allow this, permitting random errors to make the environment harder. If the other party plays randomly, the best strategy is to defect every time. When tit for tat is introduced, it doesn’t last for long, but it does lead to rapid evolution. You’ll see “generous tit for tat” – if you cooperate, I will. If you defect, I will still cooperate with a certain probability. Nowak suggests that this is a good strategy for remaining married, and a step towards the evolution of forgiveness.
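Here’s a minimal sketch of the noisy repeated game in Python – the payoffs, error rate and forgiveness probability are illustrative choices of mine, not Nowak’s actual tournament parameters:

```python
import random

def play(strategy_a, strategy_b, rounds=1000, error=0.05, seed=1):
    """Repeated prisoner's dilemma; each intended move is flipped with
    probability `error`, modeling the random mistakes Nowak describes."""
    rng = random.Random(seed)
    payoff = {("C","C"): (3,3), ("C","D"): (0,5), ("D","C"): (5,0), ("D","D"): (1,1)}
    last_a, last_b = "C", "C"   # both sides open by cooperating
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(last_b, rng)
        b = strategy_b(last_a, rng)
        if rng.random() < error: a = "D" if a == "C" else "C"  # slip of the hand
        if rng.random() < error: b = "D" if b == "C" else "C"
        pa, pb = payoff[(a, b)]
        score_a += pa; score_b += pb
        last_a, last_b = a, b
    return score_a, score_b

def tit_for_tat(opponent_last, rng):
    return opponent_last  # copy whatever the other side just did

def generous_tft(opponent_last, rng, forgive=1/3):
    # forgive a defection with some probability instead of retaliating
    if opponent_last == "D" and rng.random() < forgive:
        return "C"
    return opponent_last
```

In typical runs, two strict tit-for-tat players get dragged into the retaliation cycles described above after a single error, while generous tit-for-tat recovers from mistakes and stays closer to mutual cooperation.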

In a natural selection system, you’ll eventually reach a state where everyone cooperates, always. A biological trait needs to be under selective pressure to remain – we can lose our ability to defect and become extremely susceptible to invasion by an always-defect strategy. Cooperation is never stable, he tells us – it’s about how long you can hold onto it and how quickly you can rebuild it. Mathematically, direct reciprocity can come about if the benefits of cooperation, on average, outweigh the costs of playing a new round.

Indirect reciprocity is a bit more complex. The good samaritan wasn’t thinking about direct repayment. Instead, he was thinking “if I help you, someone will help me.” This only happens when we have reputation. If A helps B, the reputation of A increases. The web is very good at reputation systems, but we’ve got simple offline systems as well. We use gossip to develop reputation systems. “For direct reciprocity, you need a face. And for indirect reciprocity, you need a name and the ability to talk about others.” In indirectly reciprocal systems, cooperation is possible if the probability of knowing someone’s reputation exceeds the cost-to-benefit ratio of the altruistic act. And this only works if the reputation system – the gossip – is conducted honestly.

In spatial selection, cooperation happens among people who are close – geographically, or in terms of graph theory. Graph selection favors cooperation if you have a few close neighbors – it’s much harder with lots of loose collaborators. A graph where you’re loosely and equally connected to many people doesn’t tend towards cooperation.
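Nowak’s published “five rules” paper (Science, 2006) reduces these mechanisms to simple thresholds; here are the three he covered, as a sketch (the symbols follow that paper, not anything stated in the talk):

```python
def direct_reciprocity_ok(b, c, w):
    """Cooperation can evolve if the probability w of playing another
    round exceeds the cost-to-benefit ratio c/b of the altruistic act."""
    return w > c / b

def indirect_reciprocity_ok(b, c, q):
    """...if the probability q of knowing someone's reputation
    exceeds the same cost-to-benefit ratio."""
    return q > c / b

def spatial_selection_ok(b, c, k):
    """...if the benefit-to-cost ratio exceeds the average number of
    neighbors k -- a few close neighbors favor cooperation."""
    return b / c > k

# A helping act that costs the donor 1 and benefits the recipient 3:
assert direct_reciprocity_ok(b=3, c=1, w=0.5)    # 0.5 > 1/3, so repetition helps
assert not spatial_selection_ok(b=3, c=1, k=5)   # too many neighbors: 3 > 5 fails
assert spatial_selection_ok(b=3, c=1, k=2)       # few close neighbors: 3 > 2 holds
```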

Charlie Nesson and a new vision of the public domain

Filed under: Berkman,hyperpublic — Ethan @ 4:28 pm

Charlie Nesson, one of the founders of the Berkman Center, asks us to consider who we are, and what is our public space. The query that informed the early life of the Berkman Center was whether we, on the internet, were capable of governing ourselves. To address this question, we need to ask what our domain as a people is. He offers, “We are the people of the net, and our domain is the public domain.”

If you want an orderly world of real property, you should build a registry. It’s the same in the world of bits. Charlie is now working on a directory of public domain, starting with the Petrucci collection and the IMSLP – the international music score library project. Charlie doesn’t mean public domain in the strict legalistic sense. Instead, he asks us to think of the public domain as the bits you can reach through the net. We can then separate the space into the free and the not-free, as constrained by copyright and by market.

To ensure we can be the people of the public domain, we need to build our domain on a foundation that is solid in law. We’re going to build based on collections organized by registrars. The problem with that strategy is that registries can be the focus of litigation risk. So the goal is to work with a reputable law firm to protect the registrar, the registry and users of the registry. That helps us positively define the public domain and defend it.

How does this relate to privacy? It’s worth thinking about the key actors involved. Which actors appreciate individual privacy? Governments are interested in surveillance. Corporations are interested in data acquisition. Look to the librarians and we’ll find allies. They are connected to powerful institutions that share the values of privacy.

Judith asks Charlie to strengthen the connection to privacy. He responds, “I don’t like privacy. It tends to be too closely associated with fear, and it always seems like a rear-guard action against technology.” Instead, we should work on the architecture of the public space and ensuring we architect for private space.

Herbert Burkert – moving beyond the metaphor

Filed under: Berkman,hyperpublic — Ethan @ 3:47 pm

Herbert Burkert of the University of St. Gallen in Switzerland teaches internet law and heads a center at St. Gallen that parallels the work we do at Berkman. He suggests we consider the space between beauty and coercion. There are only a few occasions where an audience takes pity on a lawyer, and it’s when a lawyer ventures into the sphere of aesthetics. There’s such a thing as legal creativity, but it usually leaves you facing the ethics board and quickly turns from pity to self-pity. So he wants to move from a presentation on “criteria” to one about “comments”.

His comments are structured around two names. One is Johann Peter Willebrand, a German writer about public security who encouraged registration of foreigners in towns. But he also encouraged the pledge to treat citizens and foreigners politely, which you can read as you wait for hours to pass through immigration in Boston. He’s become something of a hero to Burkert, as someone who’s tried to change the relationship between beauty and coercion, coercing people into beauty.

Burkert’s point: design and architecture talk is dangerous talk. Le Corbusier wanted to design not just buildings, but how people live. Totalitarian designers used architecture to control people. And today’s contemporary suggestions on public safety, walkability, and security need to be considered in this light. When you consider criteria of design, ask whether you’re designing for people, and whose interest you’re designing for. How much space for opportunities to live are you prepared to leave for others?

This leads us to Lina Bo Bardi, an Italian architect who worked in Brazil. She was asked to turn a factory in São Paulo into a recreation area. The city is a remarkable and challenging place: so crowded that it has more helicopters per capita than any other city, because flying is the only way to beat traffic, and it has a serious problem with crime. She built a tower and bridges that connected to the factory, suggesting a dialog between work and play. It’s a very striking building – the windows look more like holes blasted by grenades than designed openings.

How is this relevant? Bo Bardi was designing to create opportunities for social gatherings, and for cross-generational communication. Burkert suggests that cross-generational communication is quite rare in social media. So is cross-cultural communication. And these spaces encourage opportunity for variety, and opportunity for protected openness.

Perhaps the low walls that appear in her design are metaphors for scaled privacy. Or maybe we need to stop using these kinds of physical metaphors, at least from architecture, in these virtual spaces?

Data, the city and the public object

Filed under: Berkman,hyperpublic — Ethan @ 2:59 pm

Adam Greenfield is the principal designer of Urbanscale, a design firm that focuses on design for networked cities and citizens. He’s interested in the challenge of building spaces that support civic life, public debates, and the use of public space.

The networked city isn’t a proximate future, it’s now. We’ve got a pervasively, comprehensively instrumented population through mobile phones. We have widespread adoption of locative and declarative media through tools like Foursquare and systems of sentiment analysis. And we’re starting to see “declarative objects”, items in public spaces like the London Bridge, which now tweets in its own voice using data scraped from a website. Objects start having informational shadows, like a building in Tokyo literally clad in a QR code – you can “click” on the building and read more about it.

We’re starting to see cities that have objects, buildings and spaces that are gathering, processing, displaying, transmitting, and taking action on information. We’re subject to new modes of surveillance which aren’t always visual. Tens of millions of people are already exposed to this, which suggests we may need a new theory and jurisprudence around public objects.

Offering a taxonomy of public objects, Adam starts with the example of the Välkky traffic sensor. This detects the movement of people and bikes in a crosswalk and triggers a bright LED light to warn motorists. This is very important in Finland, which is very dark 20 hours a day, 10 months of the year. He describes this as “prima facie unobjectionable”, because the data is not uploaded or archived, and because there’s a clear public good.

Another example is an ad in the subway system in Seoul. There’s a red carpet in front of a billboard. Walk on it, and the paparazzi in an animated billboard will swivel and photograph you. It’s mildly disruptive and disrespectful, and there’s no consensus public good. On the plus side, it’s purely commercial – there’s no red herring of benefit. And it probably doesn’t rise to the threshold of harm.

And then there’s the soda machine. Adam shows us the Acure touch screen beverage machine in Tokyo, which uses a high resolution display to show you what beverages are available. Each customer is offered different consumables – an embedded camera guesses at age and gender and delivers beverage options to you based on that model. It’s prescriptive and insidiously normative. And it compares information with other vending machines. If you’re a bit abnormal – a man who likes beverages common in the female model, for instance – these systems leave you out of luck. And while they’re commercially viable, there’s no public good associated with this information gathering. We might put this into the same category as interactive billboards with analytics packages, like the Quividi VidiReports, which detects age, gender, and even gaze. There is no opt out – you’re a data point even if you turn away from the ad.

How do we think about these systems when power resides in a network? Adam gives the example of an access control bollard in Barcelona, a metal post that rises out of the ground to block access to a street unless you present an RFID that gives you permission to pass. This system relies on an embedded sensor grid, RFID system, signage, and traffic law all interacting together. It’s a complex, networked system that we largely interact with through that bollard. It’s even easier to understand these systems when they exist solely through code.

There’s a class of public objects that we need to define and have a conversation about. Adam proposes that they include any discrete object in the common spatial domain that is intended for general use, located on a public right of way, or subject to de facto shared public access. When we build these systems, Adam says, we should design so that the data is open and available. That means offering an API, and making data accessible in a way that’s nonrivalrous and nonexcludable.

An open city necessarily has a more open attack surface. It’s more open to griefing and hacking. We need a great deal of affirmative value to run this risk. And we need to develop protocols and procedures to establish precedence and deconfliction around these objects. We’re roughly a century into the motor car in cities and we still don’t handle cars well, never mind these public objects.

Adam advocates a move against the capture of public space by private interest and towards a fabric of freely discoverable, addressable, queryable and scriptable resources. We need to head towards a place where “right to the city” is underwritten by the technology of the space.

Jeffrey Huang of the Berkman Center and EPFL Media x Design Laboratory has been involved with the design of a “hyperpublic” campus in the deserts of Ras Al Khaimah, one of the seven Emirates of the UAE. The Sheik of the state has agreed to fund the joint development of the campus with Huang’s institution in Switzerland, and his design students have been focused on building a university campus that’s deeply public, both in terms of physical and architectural space.

One of the major constraints for the design is lowering water and energy usage. The goal is to use data to make the buildings make environmental sense. They’ve mapped the building site and located natural low points where water accumulates. The design makes use of these points as “micro-oases”, placing large, open spaces around them, an echo of the EPFL learning center in Lausanne, Switzerland.

Within the building, a network of sensors can greet people by name and offer personal services to them. You can interact with people through data shadows, which physically track people through the building, a shadow cast on the wall that shows someone’s name, identity and interests.

He acknowledges the dangers of this system, making reference to Mark Shepard’s Sentient City Survival Kit and an umbrella whose visual pattern scrubs your data from surveillance. But he notes that there’s less need to design the private if hyperpublicness is adequately designed. We should build systems where everyone and no one owns the data, which are fully transparent.

Betsy Masiello from Google works on public policy issues and offers us a practitioner perspective on the topic of the hyperpublic. She tells us she originally misread the title of our session – “The risks and beauty of the Hyper-public life” – and skipped over the risk part. She worried we might be celebrating a “Paris Hilton-like existence of life streaming,” making your identifiable behavior available to anyone who chooses to watch.

There’s a better way of thinking about data-driven lives and existences. Systems like Google Flu Trends use lots of discrete points of information to make predictions about health issues – this gets quite important when it helps us target outbreaks of diseases like dengue fever. Unlike the pure performance of a public life, we get a public good that comes from big data analysis.

She offers a frame for analysis: systems that are predictive based on your own behavior, which use your data and make it clear how it’s used, versus systems that are predictive based on other people’s behaviors, like Google’s search, flu trends, and perhaps the soda machine Adam talks about. Both kinds can be very valuable. But the risk is the collapse of contexts that happens in a hyperpublic life – the idea that data can be reidentified and attached to your identity.

She recalls Jonathan Franzen’s essay, “The Imperial Bedroom”, from 1998 about the Monica Lewinsky scandal. Franzen suggests that without shame, there’s no distinction between public and private. The more identifiable you are, the more likely you are to feel that shame.

The current challenge we face is constructing and managing multiple identities. Ideally, we’d have ways to manage an identity that includes a form of anonymity. It’s becoming trivial to reidentify people within sets of data. We may need policy interventions that put requirements on data holders, punishing people who release information that allows others to be reidentified.

There’s an interesting argument that arises around privacy and transparency. Adam offers his frustration that Amazon continues recommending Harry Potter to him despite having 15 years of purchasing behavior data, none of which should indicate a desire to read fantasy. Jeff Jarvis, moderating, sees this as a problem of too little data, not too much. He criticizes Adam for asking for too much privacy and tells us he doesn’t want a world in which we can’t customize, and where we’re forced away from targeted data when it’s useful.

Latanya Sweeney and rethinking transparency

Filed under: Berkman,hyperpublic — Ethan @ 11:43 am

Latanya Sweeney urges us to rethink the challenges of privacy. She’s worked in the space for ten years and tells us that thinking about privacy in terms of the design of public spaces is a useful conceptual shift. We tend to look at the digital world in terms of physical spaces. In digital spaces, though, we can often look at someone from different perspectives in parallel spaces, and we can learn things about you that might be considered to be “private”, hidden behind some sort of a wall.

She prefers to talk about semi-public and semi-private spaces, and to consider the tension between privacy and utility. It’s not one or the other, but the sweet spot between the two. She’s rethinking privacy, particularly around the topic of big data: pharmacogenomics, computational social science, national health databases. This movement towards the analysis of huge data sets forces us to rethink within legacy environments. How do we de-identify data? What does informed consent and notice mean in these spaces? And we’re rethinking at architectural levels, too – moving towards a realm of open consent and privacy-protecting marketplaces.
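One answer to the de-identification question is Sweeney’s own k-anonymity concept. A minimal sketch, with toy data of my own invention (the records and choice of quasi-identifiers are illustrative):

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """A table is k-anonymous if every combination of quasi-identifier
    values (e.g. ZIP code, birth year, sex) is shared by at least k rows,
    so no individual stands out within their group."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

rows = [
    {"zip": "02138", "year": 1965, "sex": "F", "diagnosis": "flu"},
    {"zip": "02138", "year": 1965, "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "year": 1972, "sex": "M", "diagnosis": "flu"},
]
# The lone 02139/1972/M row is unique, so this table is only 1-anonymous:
assert k_anonymity(rows, ["zip", "year", "sex"]) == 1
```

Sweeney famously showed that ZIP code, birth date and sex alone identify most Americans – which is why “stripping the names” from a data set is not de-identification.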

Open consent has been popularized by George Church at the Harvard Medical School. Rather than asking consent or making promises or guarantees, he gives you a contract where you sign away liability, because considering future risks is simply too hard. It sounds kooky, but a thousand people have signed up. Another model is a trade secret model – what if I treat your genomic data as a trade secret? As long as I keep it private, you’re exempt from liability – release it and all bets are off. We might also think of data sharing marketplaces where we insulate participants from harm and compensate them when it occurs.

We need to think through these components:
– Data subjects – we need to think through the possibility of economic harm to these actors, in part because humans tend to discount risks around privacy
– Technology developers – some of these developers are her students, and she urges them to think about the power over privacy and technology decisions they exert. Video recorders record sound and video, and sound is hard to mute. As a result, videotaping often pushes us against wiretapping laws… and this could have been moderated with a $0.01 cost decision
– Belief systems
– Benefit structures
– Legacy environments

Zeynep Tufekci asks Sweeney to talk through the question of belief systems and false tradeoffs. Sweeney suggests that these debates rest on the false belief that you must maximize either privacy or utility – the key is the relationship between the two.

Walls and thresholds – physical metaphors at Hyper-public

Filed under: Berkman,hyperpublic — Ethan @ 10:50 am

Urs Gasser, the director of the Berkman Center, opens the Hyper-public conference with a discussion of a privacy ruling in his native Switzerland. Swiss data protection law has led to a ruling regarding Google’s Street View product. To operate in Switzerland, Google must blur the faces of anyone caught in a photograph as well as license plate numbers. But they have to go further and eliminate any identifying information, like the skin color of people standing in front of shelters, retirement homes, prisons, and schools. The ruling also prevents Google’s cameras from looking into gardens and back yards, or into any space it’s not possible for a pedestrian to see.

The ruling, Gasser argues, indicates the complexity of delineating between public and private. It points to the need for a nuanced definition of privacy, including privacy in a public space, like in streets or libraries.

The design choices we make have multiple effects. They’ve got enabling effects – there are many services built atop Google Street View. These services can have leveling effects – Google’s product lets people who are physically immobile explore cities. And design choices have constraining effects. As Larry Lessig has famously observed, code is law – the technical constraints can prevent you from taking certain actions.

We may need corrective mechanisms for design choices. There’s a role for social norms – we might consider the fact that a significant percentage of Swiss inhabitants are using Street View, which might provide implicit support for design choices. And we need to consider how norms are changing – are changes like the practice of offering a “public apology” for actions on Facebook an appropriate response to straying across legal or normative boundaries?

Law tends to expect perfection – Swiss law doesn’t consider it sufficient that 99% of faces are blurred by Google’s technology. The law requires 100%, whether or not it’s realistic. To handle the challenges of privacy and technology, we need a feedback system that incorporates tech, law and user behavior.

Judith Donath is the lead organizer of the conference, and she reminds us that there’s no shortage of examples of the tension between public and private brought forward by technology. We can consider Google Street View, or just Anthony Weiner’s involuntary public exposure on Twitter. Technology is having an effect on what’s public and private space, whether you opt into social media systems or simply walk down the street. Ephemeral behavior becomes a permanent record.

Societies evolve norms around privacy. We don’t join other people’s conversations at a restaurant, and if we listen in, we try to disguise our behavior. In traffic jams, people forget their cars are transparent – they get dressed and pick their noses, and we try to look away. Those norms allow for privacy in public spaces. In law, on the other hand, privacy is sometimes seen as a goal in itself, not just a means to an end.

We may be reaching a point of very high societal privacy in the US. We’ve got privacy in our dwellings to an unprecedented degree, and the possibility of extreme privacy that comes from moving away from where we grew up. Through a market economy, it is possible to live in ways where you no longer rely on tight networks of people to provide essential services like child care – you can live your life in personal isolation and still be clothed and fed.

Our behavior in public has to do with who is watching us. Are we being watched by marketers who want to turn us into consumption machines? By a repressive government? That doesn’t make for an especially cooperative society. On the other hand, at the extremes of privacy, it’s hard to have society at all.

Jonathan Zittrain leads the first discussion – he suggests that this privacy conference is different from other privacy conferences because we’ve got a mix of people in the room, people who often think about these issues, and those who rarely encounter them. Our goal is to arrive at a language we can all understand.

His first panelist is computer scientist Paul Dourish of UC Irvine. He suggests we consider privacy not as something we have, but as something we do. We tidy up before the cleaning lady comes, we might re-stack our magazines, putting Oprah on the bottom and the Atlantic on the top before people come to visit.

Privacy is also a function of group identity. There’s information that’s public within a group, and being in a family or a group requires a compromise in terms of privacy. We might think of this in terms of Michael Warner‘s concept of publics and counterpublics. There are multiple publics that emerge in terms of address and encounter with media objects – there’s media aimed at people like me and other people’s media, which leads to other publics.

And there are infrastructures, networks that provide new ways of connecting people. They’re reusable and interchangeable – if we cannot plug our computer into an outlet in a particular place, that’s not infrastructural. These infrastructures make new relationships with spaces possible.

He offers a provocative example: how sex offenders navigate in California while constrained by GPS-enabled tracking anklets. How do you think about moving through a space if you can’t come within 2000 feet of schools, parks or swimming pools? It turns out that you simply can’t navigate the world at this scale. Instead, you end up thinking in terms of safe towns you can be in, and safe areas you can wander around.

What does it mean to be connected to other people in online and offline spaces? It’s about accountability to each other. The movements of parolees aren’t just their responsibility, but those of parole officers who need to be accountable for the complex, detailed log of where parolees go. Those officers need to account both for their behaviors and the vagaries of the tracking system.

Zittrain wonders whether we might consider an iPhone app that solves “the travelling sex offender problem”, even as an art piece that displays how difficult it is to move through real spaces.

Laurent Stalder, an architecture professor at ETH Zurich, has recently been studying two topics: the emergence of the English House as it entered German culture in the 1890s, and the nature of the threshold. Privacy is associated with enclosed spaces, he tells us. The desire for intimacy and protection, enclosed on all sides, reached its apogee with the Victorian house.

Since then, we’ve seen a reconsideration of the wall as a limit between interior and exterior space. We can think of the “unprivate house”, like Philip Johnson’s glass house in New Canaan, CT, a house that is permanently accessible. On the one hand, we have open houses – a thresholdless space, a seamless environment – and on the other hand, spaces that are inherently about control: airports, laboratories.

The traditional door was a clear boundary between public space and complete privacy. The emergence of different threshold devices has fractured that space. These devices are anthropomorphic – they shape our activities by prescribing certain behaviors. And we see rituals associated with thresholds: cleansing, absolution. We need to think through the difference between a border and a threshold – a border can be closed, while a threshold is a neutral space, and a contested one.

John Palfrey, vice-dean of Harvard Law School and librarian of the Law Library, suggests that it’s simply not true that young people have given up on privacy. We care about it in particular contexts, and understanding those contexts is critical to understanding our practices. Unfortunately, we may not be very good at figuring out how to correctly navigate these new spaces.

Palfrey suggests that the design of the fourth amendment – which determines the rules that apply when the state wants information about you – doesn’t always work well in the new spaces we’re building. When we build a space like Facebook, we’ve not done the hard work about those permissions and tradeoffs.

We need to consider some of the basic design notions behind systems of internet and social media. The “check in” applications like Foursquare that seem to thrive are those that are interoperable. We want to check in once and have it posted in all systems simultaneously. But given the free flow of that data, we need to consider breakwalls and safe harbors, situations where the data can be slowed or stopped. What do those breakwalls look like in those highly interoperable systems?

Zittrain suggests that both Palfrey and Stalder are considering thresholds, limits and interfaces between space. Palfrey points out that designers generally want lower walls, but there are costs associated with those low walls – we try to keep walls low at Berkman, to maximize participation, but there are literal costs associated with it.

Zittrain reminds us that our colleague Charlie Nesson used to life stream, recording the conversations he had. This was a step towards moving into a world of 24/7 streaming, wearing microphones and cameras at all times. He wonders what happens when we merge this data stream with a market economy that allows us to surveil the world by purchasing parts of people’s lifestreams.

Jeff Jarvis wonders how architectural innovation on the web is changing our understanding of public and private spaces, offering the analogy of the hall in the 19th century home as introducing the possibility of privacy in a bedroom. Stalder suggests that private and public spaces aren’t changing much in the private home, but the ways we go inside and outside, potentially through the internet and through remote cameras, may be changing. Dourish reminds us that these aren’t just spatial notions (inside/outside, public/private), but social notions. Not everyone had privacy in English homes – it was very different upstairs than downstairs. He suggests that not only are we still trying to figure out boundaries in cyberspace – we’re figuring them out in real space as well.

Zittrain wonders whether the reformulation of ideas of public and private are changing more quickly now than in years past. We build buildings and they last for many years. Virtual spaces can change much more quickly. When we build a house for someone, we know who’s the customer. It’s far less clear who’s the customer for Facebook, the user who gets it for free, or the advertisers who want access to you. How do we think of privacy in these reconfigurable spaces?

Facebook today is not the same Facebook as yesterday, not just because of Facebook’s decisions, suggests Dourish. We reshape the space as well. Reconfiguration, he argues, is sociological and technological.

Nell Breyer asks Stalder to clarify the nature of a threshold in the context of cyberspace – what’s the purpose of the threshold in a virtual space? He explains that it’s about a double meaning, a unity of space between the public and private. Breyer pushes forward and wonders what we lose with the ability to “apparate” in digital space, appearing deep within a space, avoiding the engineered transition. Zittrain wonders whether we might see a visual representation of where other people are entering into a website from.

Dourish points out that webmasters actually have all this information. This might be a reminder that we need to be careful about overusing spatial metaphors – we maintain multiple windows, we’re in different places at the same time. We need to recognize that part of the power of digital spaces is the dehistoricizing nature of new spaces. We need to consider the creative opportunities for reconfiguring space.

David Weinberger offers the observation that physical architecture is always local. The web is global, and the norms of privacy, which had been intensely local, are now being forced to interact with this truly public space. Is there any hope we’ll come to global privacy norms that we can rely on?

Kenneth Carson suggests we think about private and public spaces in terms of the creation of community. He wonders how we change the nature of community in public and private spaces.

An answer is offered to David Weinberger’s question: privacy actually begins with the invention of the chimney – it makes it possible to have an enclosed space and heat. But this didn’t exist for the poorest people. In general, we’ve built spaces that eliminate privacy, like the factory, for those who are disadvantaged.

A woman who introduces herself as “a lowly intern” suggests we consider spaces where people use technologies, not just the virtual spaces: cybercafes versus the use of computers in a private home. That spatial aspect can shape how we encounter these spaces. How do we feel about using Facebook in the library? In a repressive nation where government officials might be looking over our shoulder?

Dourish tells us about work one of his students is doing on World of Warcraft in China. Many players come into public spaces to play together, and their discourse about the game, which they know is American, is very Chinese – they consider it a Chinese game because it places huge weight on Chinese values like teamwork. There won’t be global agreements in part because we can have encounters that are inherently local.

May 6, 2011

Media Cloud, relaunched

Filed under: Berkman,Media — Ethan @ 1:20 pm

Today, the Berkman Center is relaunching Media Cloud, a platform designed to let scholars, journalists and anyone interested in the world of media ask and answer quantitative questions about media attention. For more than a year, we’ve been collecting roughly 50,000 English-language stories a day from 17,000 media sources, including major mainstream media outlets, left and right-leaning American political blogs, as well as from 1000 popular general interest blogs. (For much more about what Media Cloud does and how it does it, please see this post on the system from our lead architect, Hal Roberts.)

We’ve used what we’ve discovered from this data to analyze the differences in coverage of international crises in professional and citizen media and to study the rapid shifts in media attention that have accompanied the flood of breaking news that’s characterized early 2011. In the next weeks, we’ll be publishing some new research that uses Media Cloud to help us understand the structure of professional and citizen media in Russia and in Egypt.

With our relaunch of the site, many of our most powerful tools are now available for your use. We’re hoping Media Cloud proves useful to anyone interested in asking questions about what bloggers and journalists are paying attention to, ignoring, celebrating or condemning.

We hope the tools we’re providing are a complement to amazing efforts like Project for Excellence in Journalism’s News Coverage and New Media indices – we consider their tools the gold standard for understanding what topics are discussed in American media. PEJ works their magic using talented teams of coders, who sample different corners of the media ecosystem to find out what’s being discussed. We use huge data sets, algorithms and automation to give a different picture, one focused on language instead of topic.

At its most basic, Media Cloud gives a picture of what journalists and bloggers are writing about by counting the words used in recent stories. Above is a cloud of language used in our set of political blogs during the week starting on Monday, May 2nd. We can see language about the US raid on Osama bin Laden’s compound, including obvious words like Abbottabad, bin Laden and raid, as well as words that suggest particular interests within those stories: helicopter, SEALs, intelligence, interrogation, Pakistan. Even with a major story dominating discussion, we see glimpses of other issues, like the US Congress Caregiver’s Act and speculation that Indiana governor Mitch Daniels will enter the Presidential race. You can click each word in the cloud and see what sentences in different blogs contained the term in question, how often it was used, and how that source compared to others.
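The counting step behind a word cloud like this can be sketched in a few lines of Python. This is a simplified illustration with a toy stoplist and made-up story text, not Media Cloud’s actual pipeline (the real system handles stemming and a much larger stoplist):

```python
from collections import Counter
import re

# Toy stoplist for illustration; a real system would use a much larger one.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "on", "that", "is", "during"}

def word_counts(stories):
    """Count non-stopword tokens across a list of story texts."""
    counts = Counter()
    for text in stories:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token not in STOPWORDS and len(token) > 2:
                counts[token] += 1
    return counts

stories = [
    "The raid on the compound in Abbottabad involved Navy SEALs.",
    "Helicopter trouble during the raid raised questions in Pakistan.",
]
counts = word_counts(stories)
print(counts.most_common(5))  # 'raid' tops the list, appearing in both stories
```

The sizes of words in the cloud are then scaled by these counts.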

Comparison is where our tool is most powerful. The cloud above shows the differences between words used in left and right wing blogs during the same time period. We start to see differences in what aspects of the Bin Laden story bloggers focused on. Bloggers on the left used the words “torture” and “waterboarding” while bloggers on the right used “interrogation” and “terrorist”. Other comparisons are less obvious – we see more discussion of debate about releasing raid photos on the right than on the left, and a discussion about expanding the Hyde Amendment (which affects congressional funding for abortion) on the left.
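One simple way to surface which words distinguish two sets of sources, in the spirit of the left/right comparison above, is to rank words by their relative frequency in one corpus versus the other. This is only an illustrative heuristic with invented counts – not Media Cloud’s actual comparison method:

```python
def distinctive_words(counts_a, counts_b, top=3):
    """Rank words by how much more frequent they are in corpus A than in corpus B."""
    total_a = sum(counts_a.values()) or 1
    total_b = sum(counts_b.values()) or 1
    scores = {}
    for word in set(counts_a) | set(counts_b):
        # Add-one smoothing so words absent from one corpus don't zero out the ratio
        freq_a = (counts_a.get(word, 0) + 1) / total_a
        freq_b = (counts_b.get(word, 0) + 1) / total_b
        scores[word] = freq_a / freq_b
    return sorted(scores, key=scores.get, reverse=True)[:top]

# Invented counts for illustration only
left = {"torture": 9, "waterboarding": 6, "raid": 20}
right = {"interrogation": 8, "terrorist": 7, "raid": 22}
print(distinctive_words(left, right))  # words like 'torture' rank highest for the left
```

Words that score near 1.0 (like “raid”) are shared vocabulary; words with high scores are distinctive to one side.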

We’re also able to make general statements about the similarity or difference in word usage in these comparisons. While the left and right may both be focused on the raid in Pakistan, the similarity score (near the bottom of the word cloud, towards the right) suggests a larger disparity in agendas than we saw looking at these two sets of media a year ago, when both sides were talking primarily about Arizona’s tightened immigration laws. I’ve been taking an in-depth look at similarity scores to understand how media attention can shift at moments of international crisis, and how the recent, internationally-focused media cycle may differ from the news we often get in the US.
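A common way to compute a similarity score like the one described above is cosine similarity between the two word-frequency vectors. Whether Media Cloud uses exactly this metric isn’t stated here, so treat this as a plausible sketch with invented counts:

```python
import math

def cosine_similarity(counts_a, counts_b):
    """Cosine of the angle between two word-count vectors (1.0 = identical usage)."""
    words = set(counts_a) | set(counts_b)
    dot = sum(counts_a.get(w, 0) * counts_b.get(w, 0) for w in words)
    norm_a = math.sqrt(sum(v * v for v in counts_a.values()))
    norm_b = math.sqrt(sum(v * v for v in counts_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Invented counts for illustration only
left = {"raid": 20, "torture": 9, "waterboarding": 6}
right = {"raid": 22, "interrogation": 8, "terrorist": 7}
print(round(cosine_similarity(left, right), 3))  # → 0.792
```

Here the two sides overlap only on “raid”, so the score sits well below 1.0; two media sets with nearly identical agendas would score close to 1.0.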

What our tools let you do with Media Cloud are really just the tip of the iceberg. The code behind our system is published under an open source license, so other researchers can build systems to monitor media in other countries and other languages. (We’ve got a system monitoring Russian media and blogs that you’ll hear more about soon.) We are publishing huge sets of data that include information on word frequencies in different stories for researchers who want to analyze American media without collecting their own data. And we’re hoping to collaborate with researchers around the world who’d like to use our tools and data to ask and answer pressing questions about what’s covered and how.

This new release is thanks to the hard work of Hal Roberts, architect of the project; David Larochelle, developer extraordinaire; and Zoe Fraade-Blanar, whose skill at interface design has made our work vastly more usable as well as more attractive. Thanks to them and everyone else involved with the Media Cloud project. Hope you’ll check our work out and let us know what you discover.
