In 2006, American adman Dan Ligon shared a video, “Ha Ha Ha America”, that he’d entered in the Sundance film festival. The video presents itself as an angry and dismissive rant about China’s superiority and America’s inferiority, badly subtitled in Chinglish. I wrote about the film when it came out, troubled by the racism associated with the Chinglish narration, and my fear it would be misread as reality, not satire, by American audiences. The Shanghaiist and some other China-based commentators were similarly troubled, though one Daily Kos reader found it a helpful wakeup call about China’s rise and America’s failure to compete economically.
A story about shooting Ha Ha Ha America, from Ligon’s site.
The film is shot in Wenzhou, and central to its narrative is the idea that Wenzhou, China’s 16th largest city, is likely to surpass New York City in population soon. This requires some blurring of the numbers – the Wenzhou jurisdiction, which includes two satellite cities and six counties, has a population of about 9 million, though only 3 million live in the city proper. New York City has an urban population of over 8 million and 20 million in the broader metropolitan area. Ligon’s comparison is apples to oranges (metropolitan area to urban population), but it’s a provocative idea that a city most Americans had never heard of could rival the population of America’s largest cities.
What interested me about Ligon’s film was the juxtaposition of a narrative about China’s rise with images of a cityscape that isn’t going to challenge New York City for tourists any time soon. If Ligon’s argument is that size matters, then perhaps the discovery that a massive city reads visually as a somewhat sleepy provincial capital tells us that a future of Chinese megacities is going to look very different from the European/American 20th century. Or perhaps there’s a subtler message: that size isn’t everything, and that iconic, aspirational cities occupy another conceptual space entirely.
Ha Ha Ha America, on YouTube
I was thinking about “Ha Ha Ha America” because I realize I don’t have a very clear picture of what Chinese cities look like. I’ve recently been to Guangzhou and Hong Kong, and in the more distant past, to Beijing, but it’s very hard for me to picture what I think Wenzhou would look like.
I’ve been thinking about Chinese cities because my colleague Catherine d’Ignazio is working on a project called Terra Incognita, an online game that tracks your reading about different cities and invites you to explore readings about unfamiliar parts of the world. The project is a reaction, in part, to my writings about homophily and serendipity. By helping you monitor your reading behavior, Terra Incognita can reveal your blind spots, and then help you find ways to explore content from those unknown parts of the world.
Catherine’s current implementation of Terra Incognita uses a browser plugin to track your reading (only on a whitelisted set of news sites) and opens a portal to one of the world’s 1000 largest cities when you open a new tab. Should you read a lot about Europe, you won’t get a page on Berlin, but might get Brazzaville, which could include a piece from my blog about Congolese sapeurs.
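As a toy illustration of the blind-spot idea, the suggestion logic might look something like the sketch below. This is my own reconstruction, not Catherine’s actual implementation; the region groupings, city lists and function names are all invented for the example.

```python
import random
from collections import Counter

# A tiny stand-in for Terra Incognita's list of the world's 1000 largest
# cities, grouped by region (invented for illustration).
CITIES_BY_REGION = {
    "Europe": ["Berlin", "Paris"],
    "Africa": ["Brazzaville", "Accra"],
    "Asia": ["Wenzhou", "Lishui"],
}

def suggest_city(read_counts, cities_by_region=CITIES_BY_REGION):
    """Pick a city from the region you've read about least.

    read_counts maps region name -> number of articles read about it.
    Regions you've never read about count as zero, so they surface first.
    """
    counts = Counter({region: 0 for region in cities_by_region})
    counts.update(read_counts)
    least_read = min(counts, key=counts.get)
    return random.choice(cities_by_region[least_read])
```

So a reader whose history is heavy on Europe and light on everything else would be steered toward Brazzaville or Accra rather than Berlin, which is the behavior the plugin’s new-tab portal is after.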
That we’re relying on this blog as a source of compelling content designed to help you explore unfamiliar places is an indicator of the main problem with the project: it’s hard to find compelling readings on many of the world’s cities. This problem is especially acute for China. Roughly 40% of the cities on the list Catherine is working from are in mainland China, and it’s not always easy to find English-language readings that introduce what’s exciting or special about a city to an international audience.
A Google search for Wenzhou will, tragically, turn up a lot of documents due to a horrific train crash outside the city in July 2011. This New Yorker article by Evan Osnos is an excellent overview of the crash and the factors that led to it, but doesn’t tell you much about Wenzhou itself. The Wikipedia page on Wenzhou offers the intriguing hint that the city is legendary for its entrepreneurialism and is the “birthplace of China’s private economy.” More bluntly, the article notes that a popular saying calls the Wenzhounese the “Jews of the Orient” (东方的犹太人).
Exploring this idea, I found Peter Hessler’s article for National Geographic, “China’s Instant Cities”. Hessler explores the growth of Lishui, a rapidly growing manufacturing city 80 kilometers from Wenzhou, through the story of Boss Gao, a Wenzhounese entrepreneur who builds a factory to manufacture bra underwires and rings (the wire rings that bra clasps hook into). It’s a brilliant story, featured in a collection of 2008’s best magazine writing, and it did exactly what I hope Terra Incognita can do: help readers develop an interest in places they knew nothing about. (I’m now using magportal.com, a magazine search engine, to look for other Wenzhou articles, like Stephen Glain’s article in Smithsonian magazine, “A Tale of Two Chinas”, which contrasts entrepreneurial Wenzhou with Shenyang, a former government stronghold now facing hard times.)
As I was writing “Rewire”, I had a helpful and long-running argument with David Weinberger, who worried that my hopes of engineering serendipity by tracking what we read, identifying blind spots and making suggestions would be less effective than a much simpler strategy – just read a really good magazine. The promise of Granta, The New Yorker or other elite magazines is simple: it doesn’t matter if you’re interested in the topic, because the writing is so good it will draw you in.
David’s right that quality matters. But I wonder if the magazine format is the key issue. Introducing someone to your community via news stories doesn’t work, as they lack context to understand the news. An encyclopedia article offers background, but no seduction, no reason to read and explore. Magazine articles need to draw you in and to expose you to the unfamiliar, and can’t assume as much context. Part of the success of Terra Incognita may rest on whether we can find these sorts of high quality, low context stories for a thousand cities.
How would you explain your hometown to a foreign visitor in half a dozen weblinks, or less? Wikipedia’s article on Pittsfield, MA includes an article in the Financial Times that generously describes our little city as “The Brooklyn of the Berkshires”, an article on retirement that points out that Pittsfield is the only US city where the majority of retirees are single, and an ESPN piece that details Pittsfield’s tenuous claim to be the birthplace of baseball. (A historian discovered an early reference to baseball in a 1791 Pittsfield bylaw prohibiting playing the game near the city’s new meetinghouse, which featured glass windows. Pittsfield also features one of only two professional baseball stadiums that face west, meaning the batter faces the setting sun. Ballgames in Pittsfield routinely feature “sun delays”, during which play stops because the batter is blinded, which also explains our team’s name, the Pittsfield Suns.)
Incomplete? Yes. Biased? Indeed. But if you’re interested in learning more either about Wenzhou or Pittsfield, perhaps Catherine is on to something.
Catherine and I would love your help on Terra Incognita – please sign up for the alpha site here, and if you have specific suggestions of stories to represent a city, please use this form. I’d also welcome general thoughts on how we should be looking for great stories linked to global cities.
Hugo Barra is a long-time veteran of the technology industry. Raised in Brazil, he came to MIT in 1996 and completed B.S. and M.Eng. degrees in computer science and electrical engineering before joining wireless software company Lobby7. From there, he joined Nuance Communications and later, Google, working on the Android team, where he rose to Vice President of Android Product Management, becoming one of the public faces of the company, introducing new phones and software to audiences at trade shows.
Most people, even those who follow tech closely, didn’t know who Barra was until he announced in August of last year that he was leaving Google for Xiaomi, a Chinese manufacturer of smartphones. The departure of a non-Chinese Google executive for a Chinese company was surprising enough to merit coverage throughout the tech press and in the Guardian, where Charles Arthur saw the move as a coup for Xiaomi and reason to ask questions about Google’s strategic leadership around Android.
Stories about Barra’s job change took on a tabloid quality when writers began speculating that his real reason for leaving Google was a romantic rivalry. Business Insider reported that Barra had been involved with a Google Glass product manager, Amanda Rosenberg, who was now dating Sergey Brin, and The Sydney Morning Herald reported that Barra’s departure from Google was a “collateral casualty” of the complicated love life of Google’s founder.
After all, a star executive at America’s most-admired company would never leave for a Chinese phone company because he saw opportunity there. Putting the Pacific between you and a vengeful software billionaire is one of the few logical explanations for an American to want to work in China.
Barra patiently explained to reporters that he’d come to Xiaomi to work with Bin Lin, the head of Xiaomi, who had been the head of Google’s mobile engineering unit in China. While he was at Google, Barra was impressed by the ways Lin’s team had extended and modified Android; Lin frequently brought Xiaomi products to the Android team to show off their functionality.
Barra re-entered the tech press limelight in December when he spoke at Le Web in Paris. His speech was, unsurprisingly, a celebration of the new corporation he had joined. But it was, more broadly, an education for European and US techies on the wonders of the Chinese technology industry. Business Insider’s crib of his talk makes Barra sound like a latter-day Marco Polo, returning to Venice with tales of 600 million internet users, 15% annual growth rates and billion dollar IPOs.
Hugo Barra at his favorite dumpling joint in Beijing
On the rare occasions American geeks think about the internet in China, they tend to think about the Great Firewall and the 50 Cent Party. This focus on censorship – which is an important fact of life on the Chinese internet – tends to blind Americans to the creativity and vitality of the Chinese internet. (This 2010 article by David Talbot for Technology Review, China’s Internet Paradox, explores this idea in depth.) As a result, we are surprised to learn that China’s most popular social networking site, QZone, has over 600 million users. That Jingdong, an Amazon-like online store, offers three-hour delivery in major Chinese cities. That tools like WeChat and MoMo offer functionality that’s surprisingly different from the social networking models offered by most American and European tools.
I used the story of Barra and his reports from China to open a recent talk on Rewire at Harvard’s Coop. Our surprise that there’s a thriving and interesting tech industry in China strikes me as a symptom of a larger phenomenon, the ways in which we are insulated from information from places that are culturally distant, even if we’re tightly tied to those nations in terms of migration and trade.
I give dozens of examples in Rewire of ways in which barriers of language, culture and interest keep us from learning about what’s happening in other parts of the world. But the lack of knowledge of Chinese internet tools is a wonderful example I wish I’d included. QZone, with over 600 million users, is represented in the English-language Wikipedia with a 3k stub, while Twitter, with a slightly smaller userbase, has a massive, 140kb article whose table of contents is longer than the QZone entry.
When I speak about Rewire, I try to explain why I think it’s important that increased internet connectivity doesn’t inevitably lead to increased interest in or understanding of other cultures. I talk about the challenge of solving massive international problems like global warming without international cooperation, or the missed opportunities to think creatively by maximizing cognitive diversity and approaching problems from different points of view.
But Hugo Barra’s story offers a much more straightforward motivation: there’s a ton of opportunity in China’s tech industry and Americans and Europeans will be shut out of that opportunity if they’re not aware of what’s going on. Americans may not be especially interested in building tools for Chinese users, but Chinese companies are looking aggressively at overseas markets. Xiaomi recruited Barra precisely because they are excited about expanding beyond manufacturing phones for Chinese markets.
There’s a massive information asymmetry between the US and China right now. Teams of volunteer translators work to render US and European political and tech media into Chinese – one community, Yeeyan, features more than 100,000 registered translators. Other teams work to subtitle US television programming in Chinese within 12 hours of broadcast. Information in the other direction is brokered by small, underfunded, hardworking projects like Tea Leaf Nation, which provide great translation and contextualization of Chinese stories for the small audiences interested in them.
Perhaps Barra’s celebration of Chinese internet culture will inspire others to follow his lead and work with Chinese technology companies. Perhaps others will learn what’s exciting about the tech industry in Brazil or Kenya. At the very least, Barra’s story might remind us that there’s a huge world out there we don’t hear enough about and that it takes work on our part to learn more.
It’s not obvious from looking at me, but while I’m American, I’m deeply partisan towards the nation of Ghana. I moved to Accra, Ghana in 1993 to study xylophone music, and I’ve traveled back to the country almost every year since 2000. I ran a nonprofit organization in Ghana from 1999-2004 and I now work closely with a Ghanaian journalism nonprofit. This dual allegiance is a good thing: I have two teams to root for in the upcoming World Cup (unfortunately, they’ll see each other in the first round), and I take disproportionate pride in Ghana’s economic and political success over the past two decades.
Ghana has a lot to be proud of, in political terms. After almost twenty years of rule by a man who took power through a coup, Ghana democratically elected a President from the opposition NPP party in 2000. After eight years of his rule, they elected a President from the NDC, which had ruled for the previous two decades. Political scientists call this a “double alternation”, and it’s considered the gold standard for stability in a democracy, evidence that an electoral system is free and fair enough that either of two major parties can win an election. Due to its clean elections and history of stability, demonstrated when the death of President Atta Mills in office led to a seamless transition to his vice-president John Mahama, Ghana has become the exemplar for democratic transition in West Africa. Ghanaian politicians and NGOs are now working to export models and best practices from Ghana to the region and the continent.
But there’s something uncomfortable about Ghana’s elections. Many of the politicians from the NPP party come from a single ethnic group, the Akan or Ashanti, and their close allies. The NDC has a broader ethnic base of support, but the Ewe are particularly powerful within the party. You can see these alliances in a map of electoral results – the NPP candidate won in the Ashanti and Eastern regions, the home of the Akan, while the NDC won elsewhere, but dominated in the Volta region, where the Ewe hail from. Some critics worry that Ghana’s free and fair elections may mask contests that are less about political issues and more about ethnic allegiance.
Economist Paul Collier describes this problem in his book “Wars, Guns and Votes”. He warns that we may be seeing a lot of elections in the developing world that are free, fair and bad. They are free and fair because we’ve gotten very good at monitoring elections for obvious signs of rigging and fraud, but they’re bad because they are decided for reasons other than political issues. In bad elections, Collier argues, people vote for a candidate because they expect some personal financial gain (a job, a handout) or because they see an electoral victory as a victory for their tribe or group. A good election is one in which people vote for a candidate because they expect he or she will make positive policy changes, benefiting a broader community and the nation as a whole.
Free, fair and bad elections happen because it’s hard to hold politicians accountable. We elect politicians because we share their aspirations and visions, but we also elect them because we hope they will ensure that tax dollars are distributed fairly and ensure that our communities benefit from those investments in schools, hospitals, roads and other essential infrastructures. But in many countries, it’s very hard to find out whether our politicians are doing a good or poor job.
Sometimes politicians don’t do a good job because they are corrupt, more interested in personal gain than in serving their communities. But in most cases, politicians work hard, and their shortcomings are the result of being constrained by finances, thwarted by bureaucracy or otherwise held in check. If we had better ways of tracking what governments do in their communities and documenting the progress of taxpayer-funded projects, we would have far more information we could use to hold our politicians accountable, to re-elect the best and oust the worst. This means a strong, free press is important, as are efforts at government transparency, and systems to ensure access to government information, like freedom of information laws.
In other words, if we want strong, responsive democracies, we can’t just fix electoral systems – we have to fix monitorial systems. And we can’t just establish a culture of clean elections, as Ghana has done – we need a culture of monitorial citizenship.
The idea of monitorial citizenship is one I’ve borrowed from journalism scholar Michael Schudson. Schudson argues that we often understand democracy in terms of “informed citizenship” – our job as citizens is to be informed about the issues and to vote, then let our elected representatives do their jobs. This model became popular in the United States during the progressive era of the early 20th century, and Schudson worries that the model may be out of date, not accurately representing how most people participate in democracies today. One of the models Schudson suggests to describe our current reality is monitorial democracy, where our responsibility as citizens is to monitor what powerful institutions do (governments, corporations, universities and other large organizations) and demand change when they misbehave. The press is a powerful actor in monitorial democracies, as demonstrated during the Watergate scandal and the end of the Nixon presidency in the US. And new media may broaden the potential for monitorial democracy, allowing vastly more citizens to watch, document and share their reports.
This year, my students and I have been experimenting with projects that connect monitorial democracy with the mobile phone. We’ve conducted small experiments locally, monitoring the on-time performance of subway trains and wait times in post offices, and examined what sorts of infrastructures in our local community are built and maintained by different government and private sector actors. And now we’re heading to Belo Horizonte and São Paulo, Brazil for the next round of our experiments.
We’ll work with community organizations in neighborhoods in both cities to identify promises local governments have made that citizens see as highly important. We’ll work with these volunteers to map a few, carefully chosen, infrastructures in their communities and to track the status of those infrastructures over time. And we’ll work with the community to figure out how we should reward governments that live up to their promises and challenge governments that fall short… all within the course of two three-day, student-led workshops. (!?!)
Our core insight – that citizens can use mobile phones to document infrastructure and monitor government performance – is not a new one. We are inspired by a number of exciting projects that have demonstrated the potential and pitfalls of citizen monitoring and documentation, notably:
- Map Kibera, which has demonstrated the importance of mapping squatter cities and informal settlements to show both the deficiencies and the vitality of infrastructure in those communities
- Ushahidi, which shows that mobile phones combined with mapping can help individuals work together to map crises and opportunities with little central planning
- Fix My Street and related projects, which have helped citizens see governments as service providers, responsible for maintaining infrastructures, and capable of providing customer service to citizens
- Safecast, which has encouraged Japanese citizens to monitor radiation levels in the wake of the Fukushima disaster, helping create data sets citizens can use to lobby the government for better cleanup plans and responses
- The Earth Institute’s collaboration with the Government of Nigeria, to use citizen enumerators, armed with mobile phones, to monitor schools, hospitals and other government-procured infrastructure to establish the country’s progress towards meeting the Millennium Development Goals
We hope to learn from these projects and push our work in a slightly different direction. Our system, Promise Tracker, starts from promises government officials (local, state and federal) have made to a community, and then helps communities track progress made on those promises by monitoring infrastructures like power grids, roads, schools and hospitals. The use case for Promise Tracker is simple: if the mayor of a city makes an electoral promise that roads in a neighborhood will be paved during her time in office, Promise Tracker helps the local community collect data on the condition of the roads and monitor progress made on the promise over time. If the mayor meets her goal, Promise Tracker offers proof generated by the community that’s benefitted. If the government is in danger of falling short, Promise Tracker offers an open, freely shared data set that citizens and officials can use to consult on solving the problem.
It’s this idea of tracking promises that has led us to Brazil. I spoke about the Promise Tracker idea at the Media Lab’s fall sponsors meeting and had two transformative conversations with Brazilians who heard me speak. One conversation was with Oded Grajew, a celebrated Brazilian social entrepreneur and innovator, one of the founders of the World Social Forum, and founder of Rede Nossa São Paulo, “Our São Paulo Network”, a network of community organizations dedicated to transforming and improving that remarkable city. One of Grajew’s many achievements is a successful campaign to get the city of São Paulo to change its constitution and require the mayor to publish campaign promises, allowing citizens to monitor the government’s progress. Grajew invited my students to São Paulo to meet with his organization and see whether the tools we’re building could help his organization keep a close eye on the government’s performance.
The second conversation was more surprising: it was with the government of the state of Minas Gerais, specifically with Andre Barrence, CEO at the Office for Strategic Priorities, who is in charge of innovation in government and the private sector. Minas Gerais is a sponsor of the Media Lab and has been looking for partnerships where Media Lab students and faculty can work with residents of Belo Horizonte and other Minas Gerais communities. It’s not easy for a government to volunteer to open itself to citizen monitoring, and it’s a great credit to the innovative leaders in the Minas Gerais government that they’ve worked hard to find community organizations we can partner with, monitoring the government’s progress, celebrating successes and working to fix potential failures.
In our workshops in Belo Horizonte and São Paulo, my students – Jude Mwenda, Alexis Hope, Chelsea Barabas, Heather Craig and Alex Gonçalves – will work with community leaders to understand what promises politicians have made to the community, to identify the promises the community is most concerned about, and to identify promises we can evaluate by monitoring infrastructure over time. We’re using codesign methods promoted by our friend and colleague Sasha Costanza-Chock, trying to ensure that what we monitor is what the community cares about, and that we build the tools with the community, who will be responsible for using them over the next few months or years. Our short-term goal is to collect data on a couple of infrastructures in a community, leverage some of Rahul Bhargava’s work on community data visualization to help our partners present data, and to open a conversation with local authorities about tracking an infrastructure over time.
Our long-term ambitions are broader. We hope to build a tool that communities can customize to their own needs and campaigns, but which centers on the idea that mobile phones can collect photographic data, cryptographically stamp it with location information and a timestamp, and release it to public repositories under a CC0 license. We hope we’ll see groups around the world use the tool to track everything from road and power grid condition to air and water quality, integrating low-cost sensors into the system and asking citizens to engage in environmental data collection as well as civic monitoring.
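A minimal sketch of that stamping-and-verification step might look like the following Python. To be clear, the record format, the HMAC-based signing scheme and all names here are my assumptions for illustration, not Promise Tracker’s actual design; a real deployment would presumably use per-device asymmetric keys and a trusted time source.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical shared device secret (illustrative only).
DEVICE_KEY = b"example-device-secret"

def stamp_photo(image_bytes, lat, lon, key=DEVICE_KEY):
    """Bundle a photo's hash with location and time, and sign the bundle."""
    record = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "lat": lat,
        "lon": lon,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "license": "CC0-1.0",  # released to public repositories under CC0
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify_stamp(record, key=DEVICE_KEY):
    """Recompute the signature over the unsigned fields and compare."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The point of the signature is that anyone consulting the public repository can detect a record whose photo, location or timestamp was altered after collection, which is what makes community-generated data usable as evidence in a conversation with officials.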
The key idea behind the project is a simple one: civic engagement is too important to be something we do only at elections.
I’ve been writing and speaking about the recognition that many people feel alienated from existing political processes and like there’s no good path for them to engage in decisionmaking about their communities. This alienation leads to disengagement, and can lead to more dramatic forms of dissent, including public protest. The work I’m trying to do on effective citizenship focuses on the idea that we need to engage in citizenship more than once every four years… and also more often than we take to the streets in protest. It’s my hope that helping people monitor powerful institutions and evaluate the successes and setbacks of their elected representatives will be a way people can engage in citizenship every day.
I’m writing this post while en route to Belo Horizonte, and I’ll share a report on what happened in our workshops and how this idea has changed as I fly home. I’ll also add more links once I have better connectivity. The really good stuff will likely come from the trip report my students put together – I’ll share that as soon as they share it with me.
The title comes from a post on Transom.org, a community site for American public radio, by Nate DiMeo. DiMeo is a brilliant freelance producer and the creator of “The Memory Palace”, a beautiful and bittersweet podcast about history and storytelling. At the end of an essay about the podcast and the financial struggles to support it, DiMeo offers this observation:
“Audio never goes viral.
There’s something much more intentional about choosing to listen to something than choosing to click on a video or article. If you posted the most incredible story—literally, the most incredible story that has ever been told since people have had the ability to tell stories, it will never, ever get as many hits as a video of a cat with a moustache.”
You can tell how maddening this is for DiMeo and for many other people creating innovative and important audio. We are in the midst of a moment of extreme creativity in the world of audio production. Podcasting has made it reasonably easy for a competent producer to share original audio with an audience of potentially millions of listeners, though generally dozens. Some of these podcasts are finding audiences on public radio through new distributors, like the Public Radio Exchange. But many aren’t. And while there are numerous stories of people who’ve become briefly, sometimes uncomfortably, famous through viral videos (a conference, ROFLCon, exists solely to examine the phenomenon of being internet famous), there are very few examples of internet memes that are audio-only.
Stan Alcorn examines this phenomenon structurally, considering the weaknesses in the audio ecosystem that make audio less likely to spread online than text or video. Audio is often something we encounter when we’re doing something else – driving, walking, working with our hands, cooking dinner. As a result, we’re less likely to remember to share that experience online. When we do share, we’re thwarted by the fact that audio is difficult to embed, tied up in different proprietary players. Unlike text, audio is hard to skim. And the “tastemakers” who are in the business of amplifying viral content don’t have a good source for potentially viral audio to audition and spread.
All these are good points. But I wonder if Alcorn and DiMeo are limiting the conversation by focusing on “going viral”. For DiMeo, the failure of audio to go viral is part of a larger phenomenon, that high-quality audio storytelling doesn’t receive the audience it deserves and that the small size of the audience means that it’s exceedingly hard to make a living. DiMeo, in his Transom piece, explains that he’s had book offers that didn’t pan out because he is insufficiently famous – “going viral”, which is a form of unexpected (and sometimes unwanted) fame is something that would be deeply helpful to his career.
But “going viral” is a phenomenon built on passing on content that requires little investment by the viewer. It takes a few seconds to realize that something interesting is going on in the “Harlem Shake”, and if that’s your thing, you might spend a few minutes more finding different versions of the video, perhaps going further to read critiques of the video or the appropriation of the song and dance. But the whole experience is no more than a glance.
Glance-based media is perfect for a world where we’re inundated with choices and forced to make up our mind very quickly. But, as Alcorn argues, it’s hard to glance at audio – by its very definition, audio takes time.
But that may be why audio is so important in a viral age.
The first assignment I ask my students in News and Participatory Media to complete is a media diary, tracking everything they read, watch and listen to over the course of a week. It’s a helpful assignment, shocking the career journalists into the realization that most college students never read print media. I ask students to track not only what mediums they encounter, but what kinds of stories, and to think about whether they were following up on existing interests or learning about new topics.
What’s been most surprising to me is that many participants list radio or podcasts as where they get the most international news, and the most unexpected and surprising news, over the course of the week. Often, this is because people are listening while they’re doing other tasks. While DiMeo is concerned that it’s harder to get audiences to choose to listen rather than choose to watch or read, I’m seeing evidence that audio is most powerful when choice is not involved.
If you’re working with your hands or driving, there’s a high switching cost involved with being selective about radio or podcast programming – I have to be really uninterested in an NPR story to start searching around the airwaves for an alternative when I’m driving. As a result, I listen to far more stories on subjects I have no explicit interest in on the radio, and often, I discover that I’m interested in a topic I previously knew nothing about.
Radio is a serendipity engine precisely because it downplays choice. Had I turned off Morning Edition when I got bored with a story about the US auto industry, I never would have heard the story about the Ukrainian protests that I hadn’t known I was interested in. Viral videos work because I choose to watch and choose to forward – radio works because I don’t choose, and because I’m rewarded for my lack of choice.
When I wrote about serendipity in Rewire, my friend David Weinberger wondered whether serendipity was simply a function of good writing: you end up reading lots of articles on topics you’re not explicitly interested in when you read The New Yorker or Granta. You’ve made the choice of the publication, but not of the content, and you’re along for the ride based on the quality of the writing. I think podcasts are like that – I frequently have no interest in the topics Roman Mars explores on 99% Invisible, but I value his storytelling, and I’m along for the ride.
I don’t think Nate DiMeo wants to be viral – I think he wants to be heard. There’s a need for media that creates serendipity, even if that need isn’t well understood and is far from well met by the market. Alcorn is right that we need to consider the environment for sharing audio, but I think we’d benefit from examining the ways people share long-form readings, a closer analogy to podcasts in the great battle for attention. Audio content would benefit greatly from Instapaper’s “read later” functionality, and from a Longreads that compiled great stories from live radio and podcasts for those who’ve got time to explore.
We need to find better ways of supporting long form media, media that encourages serendipity, media that asks that you give up some choice in exchange for unexpected discovery. We need ways for producers like DiMeo to find audiences who can support their work. But I would hate to see audio producers give up what they do well in search of virality. At its best, audio has a way of blindsiding you, of helping you discover that you are deeply invested in a story you thought you were only half listening to, of changing your life in a small, subtle way by introducing a stray and unexpected thought.
I read Alcorn’s piece in a burger joint in Portsmouth, New Hampshire last night, on Instapaper, on my phone. Walking back to the hotel, I decided to catch up on Nate DiMeo’s work on The Memory Palace and turned to an old episode, “Heard Once”. I’d heard it once before, but walking by the water with the wind whipping my scarf around, I was blindsided again. It’s a story about Jenny Lind, a musician I had never given more than a moment’s thought to, but the story is about so much more.
It’s eight minutes. It may never go viral, but it’s one of the best things I’ve ever heard. Please listen and see if it changes your life in a small way.
As I’ve mentioned in years past, Microsoft Research’s Social Computing Symposium is my favorite conference to attend, mostly because it’s a chance to catch up with dozens of people I love and don’t get to see every day. I wasn’t able to blog the whole conference, in part because I was moderating a session, but I wanted to post my notes on the event to share these conversations more widely. I’ve added some of my thoughts at the end as well. Many thanks to Microsoft Research for running this event and to all participants in the panel.
The session is titled “Data and its Discontents”, and it was curated by RIT’s Liz Lawley and MSR/NYU’s danah boyd. They decided not to focus on “big data” – the theme of virtually every conference these days – but on data through different lenses: art and creative practice, ethics, rights, and speculation.
The opening speaker is professor and artist Golan Levin (@golan), who’s based at CMU. He’s spent the last year working on an open hardware project, so he’s exploring other work, not his own. His exploration is motivated by a tweet from @danmcquillan: “in the longer term, the snowden revelations will counter the breathless enthusiasm for #bigdata in gov, academia, humanitarian NGOs by showing that massive, passive data collection inevitably feeds the predictive algorithms of cybernetic social control”
Levin offers the idea of “the quantified selfie” and suggests we consider it as a form of post-Snowden portraiture. In a new landscape defined by drones, data centers and secret rendition, can these portraits jolt us into new understanding, or give us some comfort by letting us laugh at the situation we are encountering? He shows us John Lennon’s FBI file, and a self-portrait Lennon drew and argues that they are the same thing, “two different GUIs for a single database query.”
Artist Nick Felton is blurring the line between data portrait and portrait by offering data-driven annual reports of his life, analyzing his personal data for the year: every street he walked down in NYC, every plant killed. In honor of the Snowden revelations, he is preparing a 2014 edition that examines the uneasy relationship between data and metadata.
A more confrontational artwork comes from Julian Oliver and Danja Vasilev, called The Man in Grey. Two figures in grey suits carry mirrored briefcases. The suitcases are “man in the middle suitcases”, sniffing packets from local wireless and displaying what they find on the suitcase monitors. The artwork makes visible a form of surveillance that’s possible (and, as Kate Crawford will later explain, commercializable.)
If the ethical issues associated with street-based surveillance don’t give you some pause, consider Kyle McDonald, a Brooklyn-based new media artist who pushes the legal questions around these issues even further. He became interested in the inadvertent expressions he made when he used the computer. Seeking more imagery, he installed monitoring software on all computers in the Apple stores he could reach in New York City, and captured a single frame each minute (only when someone was staring at the screen), uploading it to Tumblr. The images reveal some of the stress and anxiety many of us face when we stare into the screen of a computer – McDonald’s photos reveal expressions from empty to confused, unhappy and unsure.
Apple was pretty unhappy with McDonald’s project, and he was forced to de-install the software, and is not able to show the photos he captured – instead, he shows watercolor versions of the images. But Levin notes that such surveillance isn’t hard to accomplish, and that the project “pushed the legal boundaries of public photography”.
A piece that pushes those boundaries even further is Heather Dewey-Hagborg’s “Stranger Visions”. The artist collects detritus from public places that could contain traces of DNA – cigarette stubs, chewing gum, pubic hairs from the seats of public toilets – and scans the DNA to measure 50 markers associated with physical appearance. Based on these markers, she constructs 3D models of the people she’s “encountered” this way. The portraits are less literal than McDonald’s, but transgressive in their own way, built from information inadvertently left behind.
And that’s the point, Levin argues – “These inadvertent, careless biometric traces and our constructed identities are creating entries in a database whose scope is breathtaking.” None of the art Levin features in his talk was made post-Snowden – surveillance is a theme many artists engage with – but these works take on an especially sinister character when we consider the mass surveillance that’s become routine in America, as revealed by Edward Snowden.
Kate Crawford (@katecrawford) is a professor based at Microsoft Research and MIT’s Center for Civic Media. She’s a media theorist who’s written provocatively about changing notions of adulthood, about gender and mobile technologies, and about media and social change, and she’s now working on an examination of the promises, problems and ethics around “big data”. She notes that danah asked speakers on the panel to be provocative, so she offers a barnburner of a talk, titled “Big Data and the City: Ethics, Resistance and Desire”.
Her tour of big data starts in Andorra, a tiny nation in the Pyrenees that’s been facing hard times in the European economic crisis. The government decided to try a novel approach to economic recovery: gathering and selling the data of its citizens, including bus and taxi data, credit card usage data and anonymized telephony metadata. The package of data and the opportunity to study Andorrans is being marketed as a “real-world, living lab”, opening the possibility of a “smart nation” that’s even more ambitious than plans for smart cities.
These labs, Kate tells us, are being established around the world, and according to their marketing brochures, they look remarkably similar no matter where they are located. “There’s always a glowing city skyline, then shots of attractive urbanites making coffee and riding bikes.” But behind the scenes, there’s a different image: a dashboard, usually a map, that’s a metaphor for the central controller – a government agency? a retailer? – to examine the data. You leave a data trail, and someone else gathers and analyzes it. What we’re seeing, Kate offers, is the wholesale selling of data-based city management.
This form of pervasive data collection raises questions about the line between stalking and marketing. Turnstile, a corporation that has set up hundreds of sensors in Toronto, gathers the wifi signals of passing devices, mostly laptops and phones. If you have wifi enabled on your phone, you are traceable as a unique identifier, and if you sign onto Turnstile’s free wifi access points, the system will link your device to your real-world ID via social media, if possible. You don’t agree to this release of data – Turnstile simply collects it. The company uses it to provide behavioral data to customers – an Asian restaurant discovers that many of its customers like to go to the gym, so it creates a workout t-shirt to market to them. This leads Kate to offer a slide of a man wearing a t-shirt that reads “My life is tracked 24/7 by marketers and all I got was this lousy t-shirt.”
Often this pervasive tracking is justified in terms of predictive policing, improving traffic flow, and generally improving life in cities. But she wonders what kind of ethical framework comes with these designs. What happens if we can be tracked offline as easily as we are online? How do we choose to opt out of this pervasive tracking? She notes that the shift towards pervasive tracking is happening out of sight of the less-privileged – some of the people affected by these shifts may be wholly unaware they are taking place.
Behind these systems is the belief that more data leads us to more control. She notes that Adam Greenfield, author of “Against the Smart City”, argues that the idea of the smart city is a manifestation of nervousness about the unpredictability of urbanity itself. The big data city is, ultimately, afraid of risk and afraid of cities.
When people react to these shifts by arguing for rights to privacy, Kate warns that we need to move beyond an analysis that’s so individualistic. The effects are systemic and societal, not just personal, and we need to consider implications for the broader systems. Not only do these systems violate reasonable expectations of privacy and control of personal data – “this would never get past an IRB – human data is taken without consent, with no sense of how long it will be held and no information on how to control your data” – they have a deeper, more corrosive effect on societies.
She quotes James Bridle, creator of the site-specific artwork “Under the Shadow of the Drone”, who notes one difficulty of combatting surveillance: “Those who cannot perceive the network cannot act effectively within it and are powerless to change it”. Quoting De Certeau’s “Walking in the City”, she sees the “transparency” of big data as “an implacable light that produces this urban text without obscurities…”
Faced with this implacable light, we can design technologies to minimize our exposure. We can use pervasive, strong cryptography; we can design geolocation blockers. We can opt out or, as Evgeny Morozov suggests, participate in “information boycotts”. But while this is fine for certain elites, Kate postulates, it’s not possible for everyone, all the time. In the smart city, you are still being tracked and observed unless you are taking extraordinary measures.
What does resistance look like to these systems when opt-in and opt-out blur? Citing Bruce Schneier, Kate suggests that we need to analyze these systems not in terms of individual technologies, but in terms of their synergistic effects. It’s not Facebook ad targeting or facial recognition or drones we need to worry about – it’s the behaviors that emerge when those technologies can work together.
What do we lose when we lose a space without surveillance? Hannah Arendt warned of the danger to the human condition from the illumination of private space, noting “there are a great many things which cannot withstand the implacable, bright light of the constant presence of others on the public scene.”
Kate offers desire lines, the unpredictable shortcuts that emerge in public spaces, as a challenge to the smart city. We need a reflective urban unplanning, an understanding of the organic ways cities actually work, the anarchy of the everyday. This is a vision of cities that values improvisation over rigidity, communities over institutions. In the process, we need to imagine a different ethical model of the urban, a model that allows us to change our minds and opt for something different altogether. We need a model that allows us to reshape, to make shortcuts and desire lines. We need a city that lets us choose, or we will be forever followed by whoever is most powerful.
Mark Latonero of USC Annenberg offers a possible counterweight and challenge to Kate’s concerns about big data. Latonero works at the intersection of data, tech and human rights, focusing on human trafficking. Human trafficking is common and, in severe cases, involves gross violations of human rights, including indentured servitude or forced sex. It doesn’t have to involve transportation – he reminds us that human trafficking happens if someone is held against their will in Manhattan – and it involves men, women, girls and boys.
His work has focused on the trafficking of girls and boys under 18 in the sex trade, a space where intervention is especially important as victims often experience severe psychological and physical trauma. (The children involved are also below the age of consent, which makes the ethics clearer – there are no considerations of whether a victim voluntarily chose to become a sex worker.)
Both victims and exploiters are using digital media, Mark tells us, even if it’s only mobile phones used to stay in touch with family members. As a result, there are digital traces of trafficking behavior. Mark and colleagues are working to collect and analyze this data, using facial recognition as well as algorithmic pattern identification that could indicate situations of abuse. “It’s hard not to feel optimistic that this work could save a human life.”
But this work forces us to consider not only the promises of data and human rights, but the quagmires. This sort of work draws on a kind of surveillance – watching intended for a social good – and that raises concerns about trust and control. “Gathering data in aggregate helps us monitor for human rights abuses, but intervention involves identifying and locating someone – a victim, or a perpetrator,” he explains. “Inevitably, there is a point where someone’s identity is revealed.” The question the human rights community has to constantly ask is “Is this worth it?”
Human rights work always involves data: data about humans, both about individual humans and aggregate data and statistics about groups of humans. At best, it’s a careful process relying on judgement calls made by human rights professionals. It’s worth asking whether it’s a process big data companies could help with. As we ask about the involvement of big data companies, we should ask about the balance between civil liberties risks and human rights benefits.
Despite those questions, the human rights community is moving headfirst into these spaces. Google Ideas, Palantir and Salesforce are assisting international human trafficking hotlines, analyzing massive data sets for patterns of behavior and hot spots where trafficking may be common. But all the questions we wrestle with when we consider big data – what are the biases in the data set? Whose privacy are we compromising and what are the consequences? – need to be considered in this space as well.
“Big data can provide answers, but not always the right ones,” Mark offers. One of the major issues for the collaboration between data scientists and human rights professionals is the need to work through issues of false positives and false negatives. Until we have a clearer sense of how we navigate these practical and ethical issues, it’s hard to know how to value initiatives like “data philanthropy”, where the private sector offers to share data for development or for protection of human rights.
There’s a growing community of data researchers who are able to bear witness to human rights violations. He shares Kate’s desire for an ethical framework, a way of balancing the risks and benefits. Is the appropriate model adopted from corporate social responsibility, which is primarily self-regulatory? Is it a more traditionally regulated model, based on pressure from NGOs, consumers and others? He references the “Necessary and Proportionate” document drafted by activists to demand limits to surveillance. If we could move towards an aspirational set of international principles on the use of big data to help human rights, we’d find ourselves in a proactive space, not playing catch up.
The session’s final speaker is Ramez Naam, a former Microsoft engineer who’s become a science fiction author. His talk, “Big Data 7000”, offers two predictions: big data will be big, and it will cause big problems. The net effect is about the who, not the what, he offers: who has access to these technologies, and who sets the policies for their use.
Ramez shows a snippet of DNA base pairs, a string of ATCGs on a screen. “This is someone’s genome, probably Craig Venter’s, and as promised, once we sequenced the genome, we ended all health problems, cracked ageing and conquered disease.” It turns out that genes are absurdly complex – they turn each other on and off in complex and unpredictable ways. “We can barely grok the behavior of half a dozen genes as a network.” To really understand the linkages between genes and disease, we’d need to collect lots more genetic data. Fortunately, the cost of gene sequencing is dropping much faster than Moore’s law, and there’s now the long-promised $1000 gene sequencer. But to really understand genes and disease, we’d also need to collect behavioral and trait data about people whose genomes were sequenced – what was the person like, what diseases did they suffer, did they have high blood pressure, what was their IQ?
Personal monitoring tools like Fitbit generate lots of individual value, and potentially lots of societal value, by helping us understand which behavioral and diet interventions are most helpful. Will you get fitter on the paleo diet? Or will red meat kill you? Our data about behavior and health is so sparse that we don’t know which is true, despite a third of health spending going to weight loss and fitness programs and tools.
Is Nest a $3 billion distraction for Google? Or the first step toward a Google-powered smart electrical grid? Enormous financial and environmental benefits could come from a smart grid – if we could manipulate electrical usage, we might be able to take thousands of “peaker” plants, plants that run for only a few hours a day, offline.
Pick almost any field, and we can imagine situations where more data would be helpful. Education? Sure – if we had a more rigorous understanding of which teaching techniques work and which fail, and of what makes a good teacher and a poor one, we could potentially transform that critical field.
Ramez pivots to the problems. There will be accidental disclosures of data. He suggests we look at two stories about Target: one where the company inadvertently revealed a daughter’s pregnancy to a distraught father by sending her coupons for baby supplies, and the recent breach where Target lost 70 million credit card numbers (including mine). It could have been worse, and it probably will be, Ramez argues. It could have been data about where you go, your SMS messages, your email – these will inevitably be released.
Anonymization of data sets doesn’t really work, he reminds us. Chevron recently lost a massive lawsuit in Ecuador and has sued to identify the activists who sought charges against the company. It’s very scary to consider what might happen to those activists, Ramez tells us.
“The NSA is not the worst abuse of surveillance we’ve seen,” he points out. J. Edgar Hoover bugged Martin Luther King Jr.’s hotel rooms with the approval of JFK and RFK, who were worried that MLK was a communist sympathizer. In the process, Hoover discovered that MLK was having an affair, and sent him a threatening letter promising to reveal the secret unless he committed suicide. This was heinous abuse, on a scale that’s not been revealed in recent revelations. But while the current abuses may be significantly more minor, the scale is massive, with millions of individuals potentially at risk of blackmail.
Still, what’s critical to consider is not the what, but the who. There are checks and balances among we the people, corporations and government, and conflicts among all three. We vote within a democracy, Ramez argues, and we can vote with our feet and with our dollars. Sometimes corporations and governments are in collusion – sometimes they’re in conflict. Sometimes government does the right thing, as with the Church Committee, which investigated intelligence activities and helped curb abuses. We may need to consider the legacy of the Committee closely as we examine the current situation with the NSA.
There’s some hope. Ramez reminds us that “leaking is asymmetric.” As a result, conspiracies are hard, because it’s hard to keep secrets. “If you’re doing something heinous, it’s going to get out,” he says, and that’s a check.
His talk is called Big Data 7000, and he closes by imagining big data seven millennia ago, showing an image of a clay tablet covered with cuneiform. “When the Sumerians began writing – that was a dystopian period of big data.” Writing wasn’t empowering to the little people, Ramez tells us – the use of written language created top-heavy, oppressive civilizations. It’s the model Orwell had in mind when he wrote 1984. That image of the control of technology in one mighty hand, not distributed, is at the root of our technological fears.
But technology can be liberating – the rise of the printing press put technology into many hands, allowing for the spread of subversive ideas, including civil rights. The future of the net, he hopes, is in moving from big data as something in the hands of the very few to data in the hands of the very many.
Hi, Ethan here again.
What I really appreciated about this panel was a move beyond rhetoric about big data that is purely at the extremes: Big data is the solution to all of life’s mysteries! Big data is an inevitable path to totalitarianism! What’s complicated about big data is that there’s both hype and hope, reasons to fear and reasons to celebrate.
The tensions Mark Latonero identifies between wanting surveillance to protect against human rights abuses, and wanting to protect human rights from surveillance are ones that every responsible big data scientist needs to be exploring. I was surprised to find, both at this event and in a recent series of conversations at Open Society Foundation, that these are tensions the human rights community is addressing head on, in part due to enthusiasm for the idea that better documentation of human rights abuses could lead to better interventions and prosecutions.
The smartest phrase I’ve heard about big data and ethics comes from my friend Sunil Abraham of the Bangalore Center of Internet and Society, who was involved with those conversations at OSF. He offers this formulation: “The more powerful you are, the more surveillance you should be subject to. The less powerful you are, the more surveillance you should be protected from.” In other words, it’s reasonable to demand transparency from elected officials and financial institutions while working to protect ordinary consumers and, especially, the vulnerable poor. Kate Crawford echoed this concern, tweeting a story by Virginia Eubanks that makes the case that surveillance is currently separate and unequal, more focused on welfare recipients and the working poor than on more privileged Americans.
There’s no shortcut to the hard conversation we need to have about big data and ethics, but the insights of these four scholars and those they cite are a great first step toward a richer, more nuanced and smarter conversation.