Because I have a long commute, I listen to a lot of audio: public radio, podcasts and audiobooks. Because I work in academe, I have a massive pile of books and papers I need to read: books by friends, books for research projects, classics in the field that I should have read at this point in my life. Unfortunately, there is near zero overlap between the listening I do and the reading I need to do.
For example, right now I’m reading Hirschman’s “Exit, Voice and Loyalty”. I’m listening to Walter Isaacson’s biography of Ben Franklin, and while it’s very enjoyable, it’s not really what I need to be reading right now. What I need is a business, a collective or a method that makes and distributes high quality recordings of books that are too obscure to become audiobooks through normal channels, but popular enough that they have a non-zero audience.
I’ve been thinking about this because I spent part of this month recording the audiobook for Rewire. I am very fortunate that Audible purchased audio rights to the book from my publisher, and even more fortunate that Audible was willing to let me record the book, which has given me some insight into the process and the costs involved.
Rewire will end up being about 11 hours of audio, and it took me roughly 19 hours of studio time to record it. Readers get paid (very modestly, in my case, as I’m a novice reader.) The audio engineer who patiently followed along, prompting me to re-record sentences I screwed up, needs to get paid, as do the engineers at Audible who edit out my breaths and other auditory detritus. I’m going to guess that a setup involving a reader, an engineer and a post-processing engineer costs at minimum $300 per hour of finished audio – with a professional reader and more editing, this figure could be much, much higher. (If you work in this space and have a better cost estimate, please share it in the comments.)
If my estimate is right, I could – in theory – hire a team to record Hirschman’s slim volume for $2000 or so, for my exclusive personal use. But that’s not very cost effective: at that price, it’s a better deal for me to hire a driver for one of my commutes between Pittsfield and Cambridge and spend the time reading the book. But there are surely dozens of others out there interested in reading Hirschman since Malcolm Gladwell lavished praise on him in a recent New Yorker piece. If I can find 99 others, we could – in theory – hear Hirschman for $20 each.
There’s a rub, of course – I don’t have the rights to Hirschman’s work. That might or might not matter if I hired someone to read it to me, but it would certainly matter if I started selling a reading of Hirschman’s book to others. I wonder if this might be a surmountable problem for “long tail” books, which are unlikely to be made into audiobooks otherwise. If we added a royalty payment for copies sold of the Hirschman audiobook, paid to a publisher, is it possible we could build a model that’s both feasible and legal for organizing ad hoc recordings of books?
Here’s how I think it could work. I’d post my request for Hirschman’s book to our site, and ask others to join with me. We’d each commit at least $20 to ensure we got a copy of the recording, and we could commit more if we really, really wanted the book read. If we reached critical mass, say 110 readers, we’d use the money to pay a reader and engineer and provide a royalty to the publisher. If we fell short of the goal within a certain timeline, we’d invoke the punk rock/DIY option – those who had committed to the project would be asked whether they wanted to record a chapter of the book, and we’d compile the submitted chapters into a lower-quality but serviceable audiobook.
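The mechanics here are a threshold-pledge (assurance contract) model. A minimal sketch in Python, using the $20 floor, 110-backer threshold and rough $300-per-finished-hour estimate from above (the function name and structure are my own invention, not a real service):

```python
def campaign_status(pledges, threshold=110, minimum=20):
    """Decide the fate of a hypothetical ad hoc audiobook campaign.

    pledges   -- dollar amounts committed, one per backer
    threshold -- backers needed to fund a professional recording
    minimum   -- smallest pledge that secures a copy of the recording
    """
    valid = [p for p in pledges if p >= minimum]
    if len(valid) >= threshold:
        return "fund professional recording", sum(valid)
    # Fall back to the punk rock/DIY option: backers each read a chapter
    return "crowd-recorded chapters", sum(valid)

# 110 backers at the $20 minimum raises $2,200 -- by the rough
# $300/finished-hour estimate, enough for a ~7 hour audiobook.
status, raised = campaign_status([20] * 110)
```

The royalty to the publisher would come out of the pot before paying the reader and engineers; the interesting design question is whether the threshold should be set in backers or in dollars.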
I’m not actually in a position to launch this project – remember, I’m the guy who doesn’t have time to read a 130 page book and needs it read to him. But I’d be very interested to hear if someone’s already doing a business like this, or whether anyone would be interested in starting a business like this. I’m less interested in hearing that I can just use text to speech on my computer and that should be completely satisfactory – it’s not, I’ve tried – or that I should find a way to access books recorded for the blind (IP issues in that industry are very complicated and having sighted people access those works could screw things up for blind readers.) I’m particularly interested in hearing from people in the publishing industry about whether there are presses that would find this a satisfactory solution, or whether any rightminded publisher would stop a project like this in its tracks. Oh, and if you’ve a better name than Long Tail Audiobooks, post that as well…
More than a billion people a month visit YouTube to watch videos.
Sometimes, those billion people watch the same video. More often, they don’t.
YouTube shares information about what videos are popular in different cities and different countries, and for the US, offers a tool to see what videos are popular with different age groups and genders.
We were interested in seeing what videos were popular in different countries, and especially, what videos were popular in more than one country. For the past six months, we’ve gathered data from YouTube to understand What We Watch. The videos we feature are videos that appear on YouTube’s Trends dashboard. These are the videos trending in any of 61 countries – they are not necessarily the most popular of all time, or even most popular that month, but they are receiving a lot of attention in a short period of time. (Gilad Lotan’s explanation of trending topics on Twitter is useful for understanding that distinction.)
What We Watch is a browser for popular YouTube videos, built by Ed Platt, Rahul Bhargava and Ethan Zuckerman at MIT’s Center for Civic Media. (Rahul did data acquisition, Ed did visualization and Ethan waved his hands and requested features inappropriately late in the design process.)
Click on a country, and you’ll get a list of videos that have trended in that country, and a map that shows other countries that watch the same videos. Click a tab, and you can see videos popular just in that country, and not in other countries. Click on a second country, and you’ll see what top videos the countries have in common. Click a video itself, and you’ll get the video itself and a map of the countries where it was popular.
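Under the hood, the data the interface navigates reduces to a mapping from country to the set of video IDs that have trended there, and the overlaps it shows are set intersections. A minimal sketch (the country codes and video IDs below are made up for illustration, not drawn from the actual dataset):

```python
# Hypothetical snapshot: country -> set of trending video IDs
trending = {
    "US": {"v1", "v2", "v3"},
    "DE": {"v2", "v3", "v4"},
    "CA": {"v3", "v5"},
}

def in_common(a, b):
    """Videos that trended in both country a and country b."""
    return trending[a] & trending[b]

def only_in(a):
    """Videos that trended in country a and nowhere else."""
    others = set().union(*(v for c, v in trending.items() if c != a))
    return trending[a] - others

shared = in_common("US", "DE")   # videos the US and Germany share
local = only_in("US")            # videos that trended only in the US
```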
The results are often surprising. The US has more trending videos in common with Germany and the Netherlands than with near neighbors Canada and Mexico. One of the US’s top videos is a Punjabi music video that’s also got an audience in India and Germany. And a 90 second ad for Google Hangouts is surprisingly popular around the world… though it hasn’t trended in the US, its apparent target market.
While What We Watch is a fun way to navigate the wealth of content available on YouTube, there are serious research questions behind the project as well. In Rewire, I argue that a network that connects computers throughout the globe doesn’t guarantee that content – like videos – will spread across borders of language, culture and nation. Some of what we’re finding on What We Watch supports that contention, and some challenges it.
The music video for “Roar” by Katy Perry offers evidence that some videos find truly global audiences – the video has trended from Peru to the Philippines, and is one of the top videos in Turkey and Saudi Arabia. Other videos find regional, but not global, audiences – take P-Square’s “Personally”, which was in the top 10 in Nigeria for 17% of the dates we tracked, and is popular in Ghana, Uganda, Kenya, and Senegal… but nowhere outside of sub-Saharan Africa. And some videos never leave home: Brazil’s top trending video, a humorous ad for a phone company that requires no translation, doesn’t show up on the top charts for any other country.
I’ve been deeply influenced by Pippa Norris’s work on the spread of culture and values across national borders, specifically her book “Cosmopolitan Communications” with Ronald Inglehart. They argue that people tend to overestimate the Katy Perry effect, in which US culture sweeps the globe, leveling everything in its path. In some cases, people encounter another culture and reject it violently (the Taliban model); in others, they shape it and incorporate it into a new hybrid (the curry model), or simply decide it’s not for them (the firewall theory). We see evidence for three of the four in our data – it’s hard to see the Taliban model, because violent rejection would likely mean banning YouTube, which gives us no data to measure.
We also get some hints on which countries have videos in common. Language matters: countries in Latin America tend to have videos in common with other Spanish-speaking countries. But Brazil and Portugal don’t share much content (and Brazil’s viewing habits have little overlap with anyone’s, suggesting another theory: if you have a big enough domestic internet, you may develop your own insular internet culture, as Japan has.)
We got very interested in countries that share content with lots of other countries. To identify these countries, we used a metric called “betweenness centrality”. Imagine the countries as nodes on a graph, connected by links that represent videos in common. If you calculate the shortest paths between every pair of nodes, nodes that many of those paths pass through have high betweenness centrality – they are bridges through the network.
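That calculation can be sketched with Brandes’ algorithm for unweighted graphs. The toy co-trending graph below is invented to mirror the bridge pattern described next (an edge means two countries shared a trending video), not drawn from our data:

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm for betweenness centrality on an
    unweighted, undirected graph given as {node: [neighbours]}."""
    bc = dict.fromkeys(graph, 0.0)
    for s in graph:
        # Stage 1: BFS from s, counting shortest paths (sigma)
        stack = []
        pred = {v: [] for v in graph}
        sigma = dict.fromkeys(graph, 0.0)
        sigma[s] = 1.0
        dist = dict.fromkeys(graph, -1)
        dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:            # first time we reach w
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:  # w sits on a shortest path via v
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Stage 2: back-propagate path dependencies in reverse BFS order
        delta = dict.fromkeys(graph, 0.0)
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each unordered pair was counted from both endpoints
    return {v: c / 2.0 for v, c in bc.items()}

# Invented example: "AE" links countries that otherwise don't connect,
# so it scores highest -- it is the bridge in this toy network.
graph = {
    "AE": ["IN", "YE", "SA"],
    "IN": ["AE", "PK"],
    "YE": ["AE"],
    "SA": ["AE"],
    "PK": ["IN"],
}
scores = betweenness(graph)
bridge = max(scores, key=scores.get)
```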
The countries with highest betweenness centrality are United Arab Emirates and Singapore. Both have lots of weak ties to other countries, which means they may act as cultural bridges between unconnected countries – we can imagine a video popular in India making its way to Yemen through the United Arab Emirates. It’s interesting to note that Singapore and UAE both have massive populations of expatriates and “guest workers” (over 90% of the population in UAE and over 40% in Singapore). Culture travels with people, and it’s no surprise that Indians in the UAE would want to watch videos from home, or that Poles living in the UK mean there are Polish-language videos in the UK’s top ten.
What we don’t know yet is whether videos spread through these networks: does a video made in India reach Yemen through the UAE, for example? To test that, we’ll need to watch how a popular video spreads over time, and, ideally, we’d want to know where a video originates. That’s harder than you might think. We’ve looked at the possibility of hand-coding the videos as to their nation of origin, so we can see whether a UK video might appear on the charts first in Australia or Poland. But we’re flummoxed by the fact that many of the popular videos aren’t easily pinned down to one nation or another – take this ad, popular in both Russia and Ukraine. It’s a Nike ad about street soccer, which suggests we should attribute it to the US, where the company is based… but the ad’s in Russian, clearly aimed at urban audiences in Eastern Europe and not at a US market. Do we code it as US, Russian or global?
And then, of course, there’s this ad for Google Hangouts. It’s a sweet and sappy 90 second story about a girl who moves to the big city and stays in touch with her dad via Hangouts. The accents are American and it appears to be an ad designed for the US market, but it has trended around the world, including in many countries with high rates of emigration for work or education. Google may have wanted to encourage American twenty-somethings to connect with their parents, but the message seems to resonate for people around the world.
Please experiment with What We Watch and let us know what you think – you can post comments here about anything interesting you discover, or research questions you think we should ask. The code and data behind the system are available on GitHub should you wish to build your own, or to see what we did. One caution for researchers – we are not showing videos that have been taken down by Google, for copyright or other reasons. In some cases, this means we’re removing many videos from top lists. We hope, in the long run, to show the metadata of those videos, but for now, they’re just not in the set, which means the data is not entirely representative of what we’ve collected.
There are ten graduate students associated with the Center for Civic Media, half a dozen staff and a terrific set of MIT professors who mentor, coach, advise and lead research. But much of the work that’s most exciting at our lab comes from affiliates, who include visiting scholars from other universities, participants in the Media Lab Director’s fellows program and fellow travelers who work closely with our team.
Two of those Civic affiliates are Sean Bonner and Pieter Franken of Safecast. Safecast is a remarkable project born out of a desire to understand the health and safety implications of the release of radiation from the Fukushima Daiichi nuclear power plant in the wake of the March 11, 2011 earthquake and tsunami. Unsatisfied with limited and questionable information about radiation released by the Japanese government, Joi Ito, Pieter, Sean and others worked to design, build and deploy GPS-enabled geiger counters which could be used by concerned citizens throughout Japan to monitor alpha, beta and gamma radiation and understand what parts of Japan have been most affected by the Fukushima disaster.
The Safecast project has produced an elegant map that shows how complicated the Fukushima disaster will be for the Japanese government to recover from. While there are predictably elevated levels of radiation immediately around the Fukushima plant and in the 18 mile exclusion zones, there is a “plume” of increased radiation south and west of the reactors. The map is produced from millions of radiation readings collected by volunteers, who generally take readings while driving – Safecast’s bGeigie meter automatically takes readings every few seconds and stores them along with associated GPS coordinates for later upload to the server.
It’s hard to know what an appropriate response to the Safecast data is – Safecast is careful to note that there’s no consensus about what’s “safe” in terms of radiation exposure… and that there are questions to be asked both about bioaccumulation of beta radiation as well as exposure to gamma radiation. Their work provides an alternative set of information to official government statistics, a check on official measurements, which allows citizen scientists and activists to check on progress made on cleanup and remediation. This long and thoughtful blog post about the progress of government decontamination efforts, the cost-benefit of those efforts, and the government’s transparency or opacity around cleanup gives a sense for what Safecast is trying to do: provide ways for citizens to check and verify government efforts and understand the complexity of decisions about radiation exposure. This is especially important in Japan, as there’s been widespread frustration over the failures of TEPCO to make progress on cleaning up the reactor site, leading to anger and suspicion about the larger cleanup process.
For me, Safecast raises two interesting questions:
- If you’re not getting trustworthy or sufficient information from your government, can you use crowdsourcing, citizen science or other techniques to generate that data?
- How does collecting data relate to civic engagement? Is it a path towards increased participation as an engaged and effective citizen?
To have some time to reflect on these questions, I decided I wanted to try some of my own radiation monitoring. I borrowed Joi Ito’s bGeigie and set off for my local Spent Nuclear Fuel and Greater-Than-Class C Low Level Radioactive Waste dry cask storage facility.
Monroe Bridge, MA is 20 miles away from my house, as the crow flies, but it takes over an hour to drive there. Monroe and Rowe are two of the smallest towns in Massachusetts (populations of 121 and 393, respectively) and are both devoid of any state highways – two of 16 towns in Massachusetts with that distinctively rural feature. Monroe, historically, is famous for housing workers who built the Hoosac Tunnel, and for a (long-defunct) factory that manufactured glassine paper. Rowe historically housed soapstone and iron pyrite mines. And both now are case studies for the challenge of revitalizing rural New England mill towns.
Yankee Rowe, prior to decommissioning
But from 1960 to 1992, Rowe and Monroe were best known for hosting Yankee Rowe, the third commercial nuclear power plant built in the United States. A 185 megawatt pressurized water reactor, Yankee Rowe was a major employer and taxpayer in an economically depressed area… and also a major source of controversy. I was in school at Williams College, 13 miles from Yankee Rowe, when the NRC ordered the plant shut down in 1991, nine years before its scheduled license renewal, over fears that the reactor vessel might have grown brittle. The plant was a source of fascination for me as a student – the idea that a potentially dangerous nuclear power plant was so nearby led to a number of excursions, usually late at night, to stare at a glowing geodesic dome (the reactor containment building) from across the Sherman Reservoir.
Since 1995, Yankee Rowe has been going through the long process of decommissioning, with the goal of returning the site to wilderness or to other public uses – the plant’s website features an animated GIF of the disassembly process. But there’s a catch – the fuel rods. Under the Nuclear Waste Policy Act, spent fuel was supposed to start moving from civilian power plants like Yankee Rowe to underground government storage facilities in 1989. That hasn’t happened. Fierce opposition from Nevada lawmakers and citizens to storing the waste at Yucca Mountain, and from people who don’t want nuclear waste traveling through their communities en route to storage facilities, has meant that there’s no permanent place for the waste.
During the decades nuclear waste storage has been debated in Congress, more waste has backed up, and Yucca Mountain would no longer accommodate the 70,000 metric tons of waste that needs storage. The Department of Energy is now planning on an “interim” disposal site, ready by 2021, in the hopes of having a permanent disposal site online by 2048. The DOE needs the site, because companies like Yankee are suing the US government – successfully – to recover the costs of storing and defending the spent fuel in giant above-ground casks. (Yankee’s site has a great video of the process of moving these fuel rods from storage pools into concrete casks, a process that involves robotic cranes, robot welders and giant air bladders that help slide 110 ton concrete casks into position.)
So… at the end of a twisty rural road in a tiny Massachusetts town, there’s a set of 16 casks that contain the spent fuel of 30 years of nuclear plant operation, and those casks probably aren’t going anywhere for the foreseeable future. So I took Joi’s geiger counter to visit them.
I’d been to Yankee Rowe before, and remembered being amused by the idea of a bucolic nuclear waste facility. The folks involved with Yankee Rowe have worked very hard to make the site as unobtrusive as possible – it’s marked by a discreet wooden sign, and the only building on site looks like an overgrown colonial house. The concrete pad where the 16 casks reside isn’t visible from the road, though it’s only 200 meters from the road and 400 meters from “downtown” Monroe Bridge.
I was curious whether I’d be able to detect any radiation using the Safecast tool. Sean and Pieter pride themselves on the fact that the bGeigie is a professional grade tool and routinely detects minor radiation emissions, like a neighbor who had a medical test that involved radioisotopes. I drove to Yankee Rowe late yesterday afternoon, took the bGeigie off my truck (it had been collecting data since I turned it on in Greenfield, the closest big town) and tried to get as close as I could to the casks.
That turned out to be not very close. Before I had time to read the NRC/Private Property sign, I was met at the gate – the sort of gate you expect to see at a public garden, not a barbed-wire, stay-out-of-here gate – by two polite but firm gentlemen, armed with assault rifles and speaking by radio to the control center that had seen my truck on the surveillance cameras, who made clear that I was not welcome beyond the parking lot.
That said, I got within 300 meters of the casks. And, as you can see from the readings – the white and green circles on the map – I didn’t detect any radiation beyond what I’ve detected anywhere else in Massachusetts. That’s consistent with the official reports on Yankee Rowe – dozens of wells are monitored for possible groundwater contamination, and despite a recent scare about Cesium 137, there’s been no evidence of leakage from the casks.
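For readers wondering what those readings mean: a geiger counter reports raw counts per minute (CPM), which are commonly converted to an approximate dose rate using a tube-specific factor. A small sketch, assuming the roughly 334 CPM per µSv/h figure often quoted for the pancake tubes Safecast uses – treat that factor as an assumption, not a calibration:

```python
# Assumed conversion factor for an LND 7317-style pancake tube;
# the right value depends on the specific tube and source type.
CPM_PER_USV_H = 334.0

def cpm_to_usv_per_hour(cpm):
    """Convert raw counts per minute to an approximate dose rate in uSv/h."""
    return cpm / CPM_PER_USV_H

# A background-level reading of ~40 CPM works out to roughly 0.12 uSv/h,
# within the range of ordinary natural background radiation.
rate = cpm_to_usv_per_hour(40)
```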
It would have been a far more exciting visit had I somehow snuck past the armed guards and captured readings from the casks suggesting significant radiation emissions, I guess… though what it would demonstrate is that you probably shouldn’t sneak in and stand too close to those casks. Better might have been to use Safecast’s new hexacopter-mounted drone to fly a bGeigie over the casks, though I can only imagine what sort of response that might have prompted from the guards.
While I’m reassured that there are no measurably elevated levels of radiation at Yankee Rowe, it still seems like a weird state of affairs that Yankee’s waste is going to remain on a hillside by a reservoir for the foreseeable future, protected by armed guards. (The real estate listings for property owned by Yankee Atomic Energy Corporation are pretty wonderful – “Special Considerations: An independent spent fuel storage installation (ISFSI) associated with the previous operation of the Yankee Rowe Plant is located in the former plant area and remains under a U.S. Nuclear Regulatory Commission license. Future ownership of the 300 meter buffer surrounding the ISFSI will be negotiated as part of the property disposition.”)
And there are lots of sites like Yankee Rowe that already exist, and more on the way. The map above, from Jeff McMahon at Forbes, shows sites in the US where nuclear fuel is stored in pools or dry casks. And more plants are shutting down – Yankee Rowe’s sister plant, Vermont Yankee, announced its closure this week, amid speculation that nuclear plants aren’t affordable given the low cost of natural gas. Of course, the realization that cleaning up Yankee Rowe has cost 16 times what the plant cost to build, and will continue until the waste is in a permanent repository, might give natural gas advocates pause – will we have similar discussions of the problems of remediating fracking sites in a few years or a few decades?
Projects like Safecast – and the projects I’m exploring this coming year under the heading of citizen infrastructure monitoring – have a challenge. Most participants aren’t going to uncover Ed Snowden-calibre information by driving around with a geiger counter or mapping wells in their communities. Lots of data collected is going to reveal that governments and corporations are doing their jobs, as my data suggests. It’s easy to trace a path from collecting groundbreaking data to getting involved with deeper civic and political issues – but will collecting data showing that the local nuclear plant is apparently safe get me more involved with issues of nuclear waste disposal?
It just might. One of the great potentials of citizen science and citizen infrastructure monitoring is the possibility of reducing the exotic to the routine. I suspect my vague unease about the safety of nuclear waste on a hillside is similar to the distaste people feel for casks of spent fuel passing through their towns on the way to a storage site. I feel a lot more comfortable with Yankee Rowe having read up on the measures taken to encase the waste in casks, and with the ability to verify radiation levels near the site. (Actually, being confronted by heavily armed men also reassures me.) I’m more persuaded that regional storage facilities are a good idea than I was before my experiment and reading yesterday – my opinion previously would have been based more on a kneejerk fear of radioactivity than consideration of other options. (The compact argument: if we’ve got fuel in hundreds of sites around the US, each protected by surveillance cameras and security teams, it seems a lot more efficient to concentrate that problem into a small number of very well-secured sites.)
If the straightforward motivation for citizen science and citizen monitoring is the hope of making a great discovery, maybe we need to think about how to make these activities routine, an ongoing civic ritual that’s as much a public duty as voting. Monitoring a geiger counter that never jumps over 40 counts per minute isn’t the most exciting experiment you can conduct, but it might be one that turns a plan like Yucca Mountain into one we can discuss reasonably, not one that triggers an understandable, if unhelpful, emotional reaction of “not in my backyard.”
With Rewire out in the world, I’ve had some time this August to think about some of the big questions behind our work at Center for Civic Media, specifically the questions I started to bring up at this year’s Digital Media and Learning Conference: How do we teach civics to a generation that is “born digital”? Are we experiencing a “new civics”, a crisis in civics, or just an opportunistic rebranding of old problems in new digital bottles? My reading this summer hasn’t given me answers, but has sharpened some of the questions.
Earlier this summer, I was invited by the Mobilizing Ideas blog to react to Biella Coleman’s excellent book, “Coding Freedom”. In my response, I noted that Coleman’s ethnography of hacker culture makes clear her hacker friends aren’t the stereotypical geeks, surgically attached to their computers, sequestered in their parents’ basement – they go to conventions, write poetry, and engage in political protest, as well as writing code.
The sort of hackers Biella documents engage in politics, and when they do, they’ve got multiple tools they can use. They organize political campaigns and lobby congresspeople, as Yochai Benkler and colleagues so aptly documented in this recent paper on resistance to SOPA/PIPA. They can write code that makes new behaviors possible, like Miro, written by the Participatory Culture Foundation, which makes peer to peer filesharing and search easier and more user-friendly. They protest artistically, as with Seth Schoen’s DeCSS haiku (which features prominently in Biella’s writing.)
Hackers engage in instrumental activism, seeking change by challenging unjust laws. They engage in voice-based activism, articulating their frustration and dissent from systems they either cannot or are not willing to exit. But hackers aren’t merely competent activists in Biella’s account – they are able to engage in civics in a broader way than most citizens. In addition to traditional channels for civic engagement, they can engage by creating code, giving them a more varied repertoire of civic techniques than non-coders have. (We might make the same argument for artists, who may be more effective in spreading their voices than those of us with less artistic talent.)
I’ve been thinking about Biella’s hackers in the context of some ideas from Michael Schudson. Schudson is a brilliant thinker about the relationship between media and civic engagement, the question that currently shapes my work at the Center for Civic Media. In his book “The Good Citizen”, and this 1999 lecture, Schudson challenges the idea that a good American citizen is one who carefully informs herself about politicians, their positions and the issues of an election. Schudson argues that this is an unrealistic expectation for citizens, pointing to the absurdity of 200 page Voter’s Guides to Elections that, he argues, nobody reads. (I know for a fact that danah boyd not only reads them, but holds parties to get people to read them with her.) But he also argues that this model of the “informed citizen” is only one model of American citizenship the republic has experienced since its foundation.
In “The Good Citizen”, Schudson explores four models of citizenship the US has passed through in the last two centuries and change. When the nation was founded, citizenship was restricted to a small group of property-owning white men, and elections didn’t focus on issues, but elected men of high status and character, who went on to deliberate in Congress with similar social elites. In the age of party politics, Schudson argues, politics was a carnival, with votes based on personal loyalties and social alliances, not on consideration of the issues.
Not until the Progressive reformers attacked corruption in the party system (an attack which included support for prohibition of alcohol, as party bosses were often tavern owners and the ability to supply voters with drink was a key political technique) did the notion of the informed voter come into play. Progressives, through adoption of the secret ballot, the introduction of referenda and the rise of muckraking investigative journalism, shifted responsibility for politics from a small group of elites and party bosses, to the general public. Schudson observes that the general public hasn’t been especially excited by this shift – participation in elections fell sharply during the progressive era and has been below 50% of eligible voters since.
Now, Schudson argues, we are living in an era where change through elections is less important than change through the courts, an age that began with the Civil Rights movement of the 1950s and 60s. Informed citizens are important, but their power to make change comes from suing as much as it comes from voting, and activists and lawyers who understand how to challenge constitutionality through the court system are far more powerful than the average citizen.
While he’s critical of the informed citizen model as unrealistic, Schudson is not arguing for the superiority of the rights-based model, or for a return to party bosses. He’s pointing out that America has experienced different visions of what constitutes “the good citizen” and that these visions can change over time.
That’s helpful context for understanding Biella’s hackers. We may be experiencing a shift in citizenship where the idea of the informed citizen no longer applies well to the contemporary political climate. The entrenched gridlock of Congress, the power of incumbency and the geographic polarization of the US make it difficult to argue that making an informed decision about voting for one’s representative in Congress is the most effective way to have a voice in political dialogs.
Instead, we’re seeing activists, particularly young activists, taking on issues through viral video campaigns, consumer activism, civic crowdfunding, and other forms of civic engagement that operate outside traditional political channels. Lance Bennett suggests that we might see these new activists as self-actualizing citizens, focused on methods of civic participation that allow them to see impacts quickly and clearly, rather than following older prescriptions of participation through the informed citizen model.
Biella’s hackers are exemplars of self-actualizing citizens, using code as one of their paths towards self-actualization, alongside traditional political organizing and lobbying. Larry Lessig’s Code and Other Laws of Cyberspace, a book deeply popular with the hackers Biella studies, offers the possibility that these are only two of four paths towards civic engagement and change.
Lessig’s book is written as a warning about possible constraints to the open internet. While many contemporary scholars warned that the lawless internet would come under control of national and local governments, Lessig warned that it would also be regulated through code, which would make some behaviors difficult or impossible to accomplish online. Lessig outlines four ways complex systems tend to be regulated:
- By laws, created and enforced by governments, which prohibit certain behaviors
- By norms, which are created by or emerge from societies, which favor certain behaviors over others
- By markets, regulated and unregulated by laws, which make certain behaviors cheap and others expensive
- By code and other architectures, which make some behaviors difficult and others easy to accomplish
These four methods of regulation are also ways in which activists and other engaged citizens can participate in civics. Citizens frustrated and angered by NSA surveillance of domestic communications, for example, could lobby Congress to hold hearings on whether the NSA has overstepped its bounds, or whether FISA courts are providing sufficient oversight of government surveillance requests. Civic coders could build tools that make PGP encryption easier to use, protecting the privacy of emails. Citizens could punish companies that have complied with surveillance requests and reward those who are moving servers outside of the US to make them more surveillance resistant. And people could begin using Tor and PGP routinely, to influence norms of behavior around encryption and make the NSA’s techniques significantly less effective.
These methods are often applied to non-technical issues as well. Social entrepreneurship uses market mechanisms to seek change, paying farmers a fair wage for their coffee, for instance, by buying from collectives rather than from exploitative wholesalers. Social media campaigns focus on harnessing attention and changing norms, bringing underreported issues to wider audiences. Using code to make government more transparent or more effective is a popular, if possibly overhyped, approach to social change. These models may represent a complement to the informed citizen and rights-based citizenship models Schudson examines, representing new civic capabilities in addition to the capability of influencing laws and governments.
Mastering these four capabilities is a tall order for any civic participant, but some activists are trying. Julian Assange has technical skills, as well as a deep understanding of media, which has allowed him to cooperate and compete for attention in working to change norms around secrecy and whistleblowing. His long run from prosecution has sharpened his understanding of legal systems, and, until the financial “blockade” against Wikileaks, he seemed to be doing reasonably well raising money for his project. (My friend Sasa Vucinic, involved with anti-Milosevic radio station B92 and founder of the Media Development Loan Fund, argues that the key to running a successful anti-government newspaper is to get the funding model right and build a sustainable media outlet.) Edward Snowden has proved extremely technically savvy and legally astute, and has had an excellent relationship with the global press, essential to gaining a wide audience for his revelations.
Schudson’s portrait of citizenship through the ages focuses on the behavior of large groups of citizens. Assange and Snowden are too idiosyncratic to serve as exemplars of a new class of digitally engaged citizens, promoting a new vision of citizenship. But they demonstrate what a highly competent, multifaceted civic participant might look like and I suspect that we will see more citizens leveraging the full suite of tools that Lessig’s structures of regulation point to.
A challenge for those of us who see the shape of civics changing is how we prepare people to participate in civics where the skills required are so diverse. If it’s difficult to expect citizens to be informed voters, as Schudson argues, it’s very difficult to expect them to be coders, entrepreneurs, lawyers and media influencers. We might hope, as Dewey does, that diverse interests will lead to an interlocking public – I care about surveillance and work to change norms, while you write code, and our friend tackles another challenge through social entrepreneurship. Or it may push us back to a democracy enhanced by expertise, as Walter Lippmann suggests, with citizens throwing fiscal and moral support to organizations that lobby for laws, write code, build just markets and influence public debate, leveraging the expertise and skill of those who dedicate their talents to one or more of these facets of citizenship.
I shared a draft of this post with Erhardt Graeff, who pointed out an inherent tension between ideas of the competent and effective citizen and the “good” citizen. The “good” citizens, in Schudson’s exploration, are those who participated in the system of the times, whether or not we see those systems as laudable in retrospect. A particularly cynical version of this idea would posit that today’s “good citizen” is a predictably partisan consumer, deviating as little as possible from the demographic predictions and models built by pollsters and data analysts to ensure that our candidates are correctly marketed to us. Highly participatory and effective citizens would challenge this sort of model, and it’s certainly possible that a democracy composed purely of Assanges and Snowdens would have a hard time functioning.
Erhardt points out that Lessig has been an activist throughout his career, and that his vision of regulation in Code is one consonant with the effective citizen. But can democracy work if all citizens are effective at promoting and campaigning for their own issues? Have we seen evidence of a society with high, effective engagement and with the other characteristics we expect of a democracy? Should a group like Center for Civic Media be working on thinking through models of effective citizenship or considering the larger question of what a large group of effective, engaged citizens could mean for contemporary visions of democracy?
Charlie DeTar defended his doctoral dissertation this afternoon at the MIT Media Lab. Charlie is a student in Chris Schmandt’s Mobility and Speech group, but has also been an active member of my group, Center for Civic Media, where he’s done very important work including Between the Bars, a platform that allows inmates in some US prisons to blog via the postal service. Charlie is an incredibly thoughtful guy, who takes the time to read deeply and develop nuanced understanding of issues before he builds new technologies.
His work on his doctoral thesis reflects this thoughtfulness – in building “Intertwinkles“, a platform to assist in consensus decisionmaking, Charlie conducted a deep dive into the nature of democracy, decisionmaking, group behavior and technology to assist group decisionmaking. His talk today outlined that work as context for his intervention.
Willow Brugh attended the talk and her visualization of Charlie’s remarks is below. My notes follow her illustration.
Charlie’s remarks start with the question: “How much democracy do you have left?”
He shows a photo series of people holding papers with X marks on them – the marks represent the number of presidential elections the person expects to have left. The message – we don’t have very much democracy, if democracy means voting every four years. “Most of us wouldn’t volunteer to be governed by kings or dictators,” Charlie offers, but we face lots of non-democratic rule in real life: bosses, landlords, banks, other powerful institutions we have little influence over.
High profile, democratically-governed activist organizations tend to have short lifespans – even comparatively durable movements like Occupy have been relatively short-lived. But collectives and cooperatives use highly participatory methods, and many have been in existence for decades. Twinkles – the practice of waving your fingers to show approval, non-verbally, for a statement – is a practice that originated in the 1970s and thrives today within collectives and cooperatives. But the in-person nature of collective and cooperative governance can be slow, expensive and draining. Charlie’s core research question is whether we can design online tools for democratic consultation which result in more just and effective organizations.
To answer this question, Charlie has built a set of tools to support consensus decision-making processes, documenting the participatory design process used to develop the tools and evaluating them in use by real-world groups. He’s also done deep investigative work exploring the history of non-hierarchicalism, consensus, and decisionmaking with computers.
Non-hierarchicalism looks like a simple concept at first glance – it represents forms of governance that are decentralized, flat, leaderless, or horizontal. But questions immediately arise: are facilitators imposing a covert hierarchy?
Charlie suggests we consider decentralization, using a definition from Yochai Benkler: in decentralized systems, the actions of many agents cohere and are effective despite the fact that they do not rely on reducing the number of people whose will counts in decisionmaking. While the number of participants does not decrease, most decentralized systems require some centralization, as Charlie shows by examining multiple models. The blogging platform WordPress is decentralized because you can download, customize and run the code, effectively becoming a chapter or franchise of WordPress. With Wikipedia, different sets of people work on different problems, editing different articles, in what can be thought of as a subsidiary model. In BitTorrent, rather than decentralizing resources, the founders declared a protocol that determines how participants interact, enabling decentralization through federation.
Each decentralization has a corresponding centralization:
- BitTorrent decentralizes servers via a centralized protocol
- WordPress decentralizes hosting via a centralized codebase
- Wikipedia decentralizes editors through a centralized database and policies
- Consensus decentralizes authority through centralizing procedures
Consensus decisionmaking is a field of governance, Charlie tells us, that works to avoid three tyrannies:
- The tyranny of the majority, when the mob beats you up
- The tyranny of the minority, where small groups prevent functioning or dominate decisionmaking
- The tyranny of structurelessness, where elimination of overt structure leads to covert structure via dominant personalities, racism, sexism and other forms of dominance.
Consensus decisionmaking is the process of consulting stakeholders in a way that seeks to avoid these tyrannies. Charlie outlines seven forms of consensus: corporate, scientific, standards, consociational (power-sharing), mob, assembly and affinity. He focuses specifically on affinity consensus, practiced by groups of people who’ve chosen to work together on problems of common interest. He offers a matrix for how each form of consensus handles open membership, egalitarianism, formal process, and the binding nature of decisions. For instance, a corporate department that practices consensus decisionmaking still has a boss, and may not always make binding decisions. Not all groups are open – if I want to participate in the decisionmaking of Charlie’s housing cooperative, I’m going to be refused admission.
In the process of building Intertwinkles, Charlie has developed a long list of protocols that people use to enable consensus decisionmaking, including various facilitation tools, meeting phases, hand signals, roles and formats. Intertwinkles implements several of these protocols in an online environment.
To understand the history of digital tools to assist with decisionmaking, Charlie takes us back to J.C.R. Licklider, who wrote about decisionmaking with computers as early as 1962. Douglas Engelbart, whose “mother of all demos” introduced many of the ideas that dominated the next 50 years of computing, began developing methods of computer-aided decisionmaking in the late 1960s. The field was formalized as “group decision support systems”, generating a huge amount of scholarship around these systems, generally dedicated computing systems installed in “decision-support rooms” at corporations and universities. While these systems were very engineering-heavy, they often used techniques very similar to those used in consensus-oriented groups. However, it is difficult to extrapolate from the scholarship, because the vast majority of studies used artificial, composed groups rather than groups with existing histories and patterns; most were face to face, and most were one-shot experiments. These methodological limitations make it hard to judge the utility of these tools for affinity groups, which have important existing relationships, group histories and policies.
Charlie notes that these early group decisionmaking support tools tended to provide all services – including email – to their users, because they were huge, expensive systems that often represented an organization’s first exposure to digital communication. Now systems are smaller and decentralized, including tools like Doodle (used for meeting scheduling) and Loomio, a new system designed to support discussion of proposals in forums and voting on those proposals.
While these systems are promising, Charlie hopes we can do more. He notes that Joseph McGrath put forward a helpful typology of group tasks in his 1984 book, Groups, Interaction and Performance. Ideally, we’d want a system that helps groups engage in each of these tasks – generating ideas, generating plans, executing tasks, etc.
Intertwinkles began as a participatory design project with Boston cooperative housing groups. Charlie recruited six houses from 29 collective and cooperative housing groups and hired three research assistants who were “native participants”, residents in the houses. 45 people participated, overall.
The groups he worked with were involved throughout a field trial process, from pre-interviews to help understand how groups made decisions, through an extensive training session on the tools, and through 8-10 weeks of usage, as Charlie and his team iterated to improve the tools with feedback from users. The process involved both the creation of new tools and a pair of games designed to inspire conversation and reflection on group dynamics: Flame War (which models decisionmaking over email) and Moontalk (a realtime game that models limited communication channels). More information on both games is available on the Intertwinkles site.
Charlie offers brief overviews of three tools. Dotstorm is based around sticky note brainstorming, and supports visual thinkers through stickies with drawings and with photos taken through laptops or other devices. The system supports real-time collaboration and sharing of ideas and runs on any contemporary web browser. Resolve supports a rolling proposal process, which allows one member of a group to propose an idea and others to expand, refine or block it, eventually voting on accepting it. The system maintains a rich history of a proposal and uses a notification system to keep participants involved in the process, but lets participants use email as their channel for free-form discussion. Points of Unity is a tool designed to help come up with a short list of values or statements that a group agrees with, which many groups find useful as a mutually agreed-upon common ground.
Many of the features of Intertwinkles are platform features shared across tools. There’s a group-centric sharing model that gives people access to documents and resources once they join the group. Membership is both reciprocal (like friendship on Facebook) and overlapping (you are connected to everyone in the group), a combination Charlie hasn’t seen in Facebook, Twitter or other systems. Everything is shared publicly for discrete periods of time, which lowers the barrier to entry to the system, but then reverts documents to private to avoid spam and the like. Users can take actions on behalf of other members of the group, recognizing that not everyone is active online constantly. There is rich, semantic event reporting, which allows for a “quantified group” analysis, understanding and describing a group’s behavior in quantifiable terms about participation. Intertwinkles is built on a plug-in architecture: core services handle search, authentication, twinkles, events, notices and groups, and other features plug into those core services, which makes it possible to develop radically new tools without rebuilding the other essential components.
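To make the plug-in idea concrete, here’s a minimal sketch of that kind of architecture: a small core exposes shared services, and each tool registers against them rather than re-implementing search, events or notifications itself. All of the names here are illustrative, not Intertwinkles’ actual API.

```python
class Core:
    """Hypothetical platform core holding shared services and plug-in tools."""
    def __init__(self):
        self.services = {}   # shared services: search, auth, events, notices...
        self.tools = {}      # plug-ins registered against the core

    def provide(self, name, service):
        self.services[name] = service

    def register_tool(self, name, tool_factory):
        # Each tool factory receives the core, gaining access to shared services.
        self.tools[name] = tool_factory(self)


class EventLog:
    """Stand-in for the semantic event reporting service."""
    def __init__(self):
        self.entries = []

    def record(self, tool, action):
        self.entries.append((tool, action))


def make_dotstorm(core):
    # A tool like Dotstorm only needs the services it uses, here just events.
    events = core.services["events"]

    class Dotstorm:
        def add_sticky(self, text):
            events.record("dotstorm", f"added sticky: {text}")

    return Dotstorm()


core = Core()
core.provide("events", EventLog())
core.register_tool("dotstorm", make_dotstorm)
core.tools["dotstorm"].add_sticky("meet weekly")
```

The point of the pattern is that a new tool touches only the core’s service registry, so “radically new tools” can be added without duplicating the shared infrastructure.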
For the system to work, Charlie believes that participants need extensive training. What’s key is getting to the point where everyone is confident that everyone else is comfortable with the tools. To remind collectives of the tool, Charlie distributed a colorful pillow, a Twinkle Plush Star, as “an ambient reminder of the system and its uses.”
Five of the six groups used the tool, completing 66 processes and making 2155 unique edits and visits. One group didn’t use Intertwinkles beyond training, and one reported neutral to negative experiences, while the other four groups had generally positive reactions. Charlie measured the participation of each cooperative member with the system because he worried there might be uneven participation. His analysis suggests quite even participation, similar to what you might get face to face.
In examining how collectives used the system, Charlie reminds us of the idea of “technology in action”, proposed by proponents of structuration theory. This theory suggests that designers build tools for certain tasks, but the tools get used for whatever tasks a group wants to carry out, which leads to unexpected outcomes, sometimes contrary to designers’ intentions. Charlie makes his intentions clear: he wanted to make non-participation apparent, to increase awareness of conflict, to make group processes explicit, and to handle facilitation “out of band”.
He sees a correlation between satisfaction with the tool and group structure. Groups with more confrontive and more formal approaches to decisionmaking had better results with the tools. The group that was least satisfied tends to avoid conflict and privileges action over talk, while the group that found the tools most useful makes participation in house meetings mandatory, has explicit channels for communication about conflict, and maintains extensive house norms. This highly structured group was able to take advantage of the system in ways less structured groups did not.
Charlie sees room to improve the tools: more work on in-band facilitation, in-band training, instrumenting the platform for online learning, and building an ecosystem of developers. He plans to continue working on the tool and already sees possible alliances to build the platform in conjunction with others building tools for group decisionmaking. But he also sees value in the theoretical approach, suggesting that design research is powerful as a form of sociology and a potential quantitative and qualitative method for studying group behavior.
Hal Abelson’s report on MIT’s actions around Aaron Swartz’s prosecution was released last week. I was on vacation and offline – I returned home Sunday and read the report and some of the responses to it.
I certainly see why Taren Stinebrickner-Kauffman called it a whitewash. For those hoping that Abelson and his colleagues would identify faults in MIT’s behavior and take responsibility for inaction, the report is deeply disappointing. One of the strongest statements the report makes in conclusion is, ultimately, quite weak:
“…let us all recognize that, by responding as we did, MIT missed an opportunity to demonstrate the leadership we pride ourselves on.”
That’s a bit of an understatement. The report includes an entire section (Part IV) on opportunities MIT missed, places where MIT could have intervened and might have helped prevent a tragedy. While the report correctly notes that we can’t know how things would have turned out had MIT responded to Robert Swartz’s repeated requests for the Institute to make a statement similar to the one JSTOR made, it’s clear that MIT didn’t just miss an opportunity – it consciously and repeatedly decided not to take any actions that would have helped Aaron Swartz make a successful defense while cooperating fully with requests from prosecutors.
As such, I don’t think the report is a “whitewash”. I don’t think Abelson is trying to conceal details that cast MIT in a bad light – it’s hard to read the report without being deeply disappointed with how MIT makes decisions. By my reading, the report documents a troubling culture of leadership at the university, one where adherence to the (ultimately flawed) idea of “neutrality” overrides making a nuanced decision about how to respond to aggressive prosecution under a poorly written law.
There’s a lot I’m angry about in the report. It ends with questions for the MIT community to consider, rather than recommendations. This isn’t the fault of Abelson and colleagues, but of the ambit given to Abelson by MIT’s President, Rafael Reif. While the report makes clear that MIT cooperated more thoroughly with prosecutors than with Aaron’s defense (and carefully explains why MIT’s “neutral” stance ends up favoring the side that had more power in the equation), it doesn’t lay blame on MIT’s general counsel or any other individuals for MIT’s failure of leadership.
For me, the biggest disappointment is a refrain throughout the report that blames the MIT community for failing to draw more attention to Swartz’s prosecution. In Part V, the authors note, “Before Aaron Swartz’s suicide, the community paid scant attention to the matter, other than during the period immediately following his arrest. Few students, faculty, or alumni expressed concerns to the administration.”
It’s certainly true that there was more anger and attention in the wake of Aaron’s suicide than there was during the indictment and period leading towards trial. But it’s not true that the community was unaware of Aaron’s plight. As the report documents, Joi Ito, director of MIT’s Media Lab, asked MIT’s leadership to see if Aaron’s case could be settled as a “family matter” within the MIT community. Two other faculty members spoke to the administration and Robert Swartz, who works for the Media Lab, approached MIT multiple times, seeking a statement that MIT did not believe Swartz should be prosecuted for his actions.
There are reasons why those of us who were aware of Aaron’s case didn’t lobby MIT more loudly. As the report notes, just following the statement about “scant attention”: “Those most familiar with Aaron Swartz and the issues that greatly concerned him were divided in their views of the propriety of his action downloading JSTOR files, and fearful of harming his situation by taking public or private stands.” This fear was compounded by the fact that it was very difficult for Aaron and those closest to him to talk about the case without creating communications that could be subpoenaed by the prosecutor, which led him to discuss the case with very few people. Also, as the report reveals, an early attempt to draw attention to the case online led to an angry reaction from prosecutor Steve Heymann. Given that Aaron and his team were seeking a plea deal with a prosecutor who was already escalating charges against Aaron, it’s understandable that people were worried about harming Aaron’s situation by making noise.
Blaming the MIT community’s lack of response for MIT’s studied inaction is, for me, an embarrassing evasion of responsibility, an admission that MIT was less interested in doing the right thing than in avoiding the sort of negative publicity it faced when it failed to support Star Simpson when she faced prosecution for wearing an LED-enhanced hoodie to Logan Airport.
It’s helpful to understand why MIT’s leadership did what it did. It’s understandable that, before they knew who was accessing JSTOR, they sought help from the Cambridge PD, which ended up bringing the Secret Service into the case. But for well over a year, MIT knew that its network had been accessed by a committed activist who was most likely making a political statement, not attempting to sell JSTOR’s content to the highest bidder. They were extensively lobbied by a long-time employee who made a simple request for MIT to make a statement similar to the one JSTOR made. They heard from MIT professors and from scholars outside the community, yet they clung to a stance of neutrality that, as Abelson’s report notes, systematically favored the prosecution over the defense.
The New York Times reports that MIT was “cleared” of wrongdoing in Aaron Swartz’s prosecution and death. I think the report presents MIT with two equally serious charges: a failure to act ethically, and a failure to show compassion. According to Abelson’s report, MIT’s president, chancellor and Office of the General Counsel did the minimum – and sometimes less than the minimum, when they failed to respond to defense subpoenas – in allowing Aaron Swartz and his team to mount a defense. In the process, they ignored the pleas of a long-time colleague who was desperately working to defend his son.
MIT has a different president than it did for most of the Swartz case, and the ball is now in President Reif’s court to change a culture that was unwilling to take moral leadership in the case of Aaron’s prosecution. For those of us who are outraged by the inaction of MIT’s leadership in this case, we face Albert Hirschman’s famous choice: exit or voice. My friend Quinn Norton, Aaron’s partner when he was arrested, recently tweeted: “I will never work with MIT, I will never attend events at MIT, I will never support MIT’s work, and I hope dearly that my MIT friends leave.”
I would hope that there’s another option: making clear that members of MIT’s community believe that MIT has responsibilities beyond “neutral” compliance, and working to change the culture that so badly failed Aaron. Evidently, it’s up to the MIT community – and the broader internet community – to make sure this report isn’t the final word on MIT’s role in Aaron’s prosecution and to ensure that Abelson’s questions in the report do not remain unanswered. I hope that President Reif’s promise to engage with Abelson’s questions leads to real change in an institution that has much to answer for, and I plan to push as hard as I can from the inside to ensure that MIT’s response to Aaron’s death does not end with this report.