Geekery – My heart’s in Accra
Ethan Zuckerman’s online home, since 2003
http://www.ethanzuckerman.com/blog

Jonathan Zittrain at Freedom to Innovate
http://www.ethanzuckerman.com/blog/2015/10/13/jonathan-zittrain-and-star-simpson-at-freedom-to-innovate/ – Wed, 14 Oct 2015

This past weekend, with support from the Ford Foundation, EFF and the MIT Media Lab, the Center for Civic Media held a two-day conference on the Freedom to Innovate. The first day featured experts on cyberlaw, activists and students who’d experienced legal challenges to their freedom to innovate. Sunday’s sessions included a brainstorm led by Cory Doctorow on imagining a world without DRM, and an EFF-led workshop on student activism around technology issues.

I was MC for the meeting on Saturday, and have only partial notes. I hope to post some impressions from these other sessions once I have more time to digest, but I’ll begin by posting my notes from opening talks by Jonathan Zittrain and Star Simpson.


I asked Jonathan Zittrain to give an opening keynote on the Freedom to Innovate because he’s one of the world’s leading thinkers about technical, legal and normative barriers to innovation. His book, “The Future of the Internet – And How to Stop It”, introduces the idea of generativity, the capacity of a system to enable users to invent and create new technologies.

JZ’s talk was titled “Freedom to Innovate, Beyond the Trenches”, and began with the technologies of, and before, his childhood: computers built from kits, PCs that you could take apart and reassemble, and operating systems that – whether or not they were free software – were rewritable and modifiable. (Waxing lyrical about MS-DOS, JZ notes that the blinking cursor was “an invitation to create: you could rewrite MS-DOS in MS-DOS.”) The PC and MS-DOS were “generative”, in JZ’s language – they don’t have a fixed set of uses, but are expandable and extendable to solve new problems. (To illustrate the expandability of PC hardware, JZ shows off the PC EZ bake oven… which might also function as a helpful heatsink.)


Jonathan Zittrain, and a PC EZ Bake oven

There are three freedoms that characterize this moment in tech history, Zittrain tells us. People are free to create new technologies. They’re free to adapt existing technologies to new purposes, to “tinker around the edges”. And they’re free to join and contribute to communities of like-minded actors. He explains that the next step after building your Heathkit H8 PC was to join a group of hobbyists who’d figured out how to program the machines – learning from others through apprenticeship was core to this moment in tech history.

When Stephen King published “Riding the Bullet” in 2000 – “a story so bad he couldn’t bring himself to publish it in print” – JZ argues that he ushered in a new era of technological creativity. The story was the first widely available commercial e-book, using digital rights management technology, and despite its low price ($2.50, and distributed free by Amazon and Barnes and Noble), folks at MIT hacked the copy protection to see if they could. “I see those MIT hackers as the leading drop on the crest of the wave of content, from people tinkering in the ham radio world to tinkering in the world of commerce.”

As more media went digital, this tinkering went mainstream. Audiograbber was a piece of PC software that let users “rip” audio from CDs using a CD-ROM player, and make copies. For the audio industry, this was a step too far, a way in which tinkering escaped the hacker community and entered the mainstream.

The music industry’s responses to copying CDs added a new freedom to the freedoms to create, to tinker and to connect with a community: the freedom to liberate. If content was tied up in a bad DRM system, you should be free to find a way to liberate it from those constraints.

Prior to CD ripping, the music industry looked for ways to deal with the “digital threat”. The Audio Home Recording Act – created to govern DAT tapes – sought to ensure that even if copies of digital materials could be made, copies could not be made of copies. And when copies were made, a levy on blank media would be collected and put into a fund to help artists who might be harmed by this new technology. As JZ explained the intricacies of the AHRA, he noted, “If you’re already getting sleepy, that’s the point.” These agreements weren’t trying to protect user rights, or involve users in any way – they were negotiated between big parties with opposing interests – content creators and technology manufacturers – and were about dividing the spoils. When existing actors encountered the PC, they looked for ways to “make the PC safe for the CD”, to turn the PC into something as simple as an appliance, like a CD player. Audiograbber turned this equation on its head and demonstrated that users would look for ways to liberate their content and use it in other contexts.

As the audio industry sought to cope with audio ripping and the rise of devices like the Rio MP3 player, they began to engage in behavior that resembled hacking. People who purchased certain Sony CDs – The Invisible Invasion, Suspicious Activity?, Healthy in Paranoid Times – found that these CDs had autorun files that installed rootkits on their PCs. Sony evidently wanted to monitor all actions these users were taking, tracking what content they were playing and trying to determine the origins of all the files on their systems.

People were widely outraged by Sony’s actions, suggesting that ripping of CDs by an individual felt like less of a transgression than systemic hacking by a corporation. Sony’s transgressions suggest another right we might support under the freedom to innovate: the freedom to audit, to understand what the systems we use are doing to our computers and with our information. “We need the ability to look at it and to say that something isn’t right.”

Five aspects of the Freedom to Innovate

  • Freedom to create new technologies
  • Freedom to tinker with existing technologies
  • Freedom to connect with communities of interest
  • Freedom to liberate content for additional uses
  • Freedom to audit existing systems

These rights – to create, to modify, to join communities, to liberate and to audit technologies – are all deeply complicated by DMCA 1201, a section of the Digital Millennium Copyright Act which shifts responsibility around the freedom to tinker with existing systems. Previously, if you altered a technology, your legal liability came from infringing a copyright by distributing cracked material. But under section 1201, simply circumventing copy-protection mechanisms is enough to face prosecution or liability. This shift puts legitimate security researchers at risk: Ed Felten – now Deputy U.S. Chief Technology Officer – took the Secure Digital Music Initiative up on its challenge to remove watermarks from its sound recordings, and ended up being threatened with prosecution under section 1201.

The only ways around 1201, Zittrain tells us, are exemptions, like an explicit exemption that allows librarians to defeat copy protection so they can make the decision as to whether they want to acquire a copy of a work. “This has probably never been invoked,” Zittrain speculates. “It’s basically there to let librarians feel a little better about the law.”

“Why should this zone be one of cat and mouse?” asks Zittrain. The industry releases something and hopes the community won’t hack it. The community creates something new and wonders whether they’re going to be prosecuted over it. “There ought to be a way to have fair use without hacking to get it,” Zittrain argues. “And the best you’re ever going to get with litigating under 1201 is that you’ll get permission to hack into something like Facebook for a specific set of good reasons… now good luck hacking in!”

“Why shouldn’t the cat and mouse make peace? Why shouldn’t Facebook be required to make accessible data for certain types of research so we can understand what’s going on in the world?”

The recent discovery that Volkswagen had taught their cars to lie about emissions raises questions about the dangers of this cat and mouse game. But there’s a tension as well – we want to get into the circuit boards, review the code and figure out what the VW is and isn’t doing. But at the same time, we live in a society that is extremely paranoid about security (as we learned with Ahmed Mohamed’s clock) – will we want to drive our cars after hacking into them to review their emissions?

(Zittrain suggests that there may be some technologies where DRM is desirable to prohibit tinkering, like with CT scanners. Cory Doctorow, in the audience, argues that for that argument to hold, DRM would need to work, which it never does, and needs to be auditable because there’s no security through obscurity.)

As we head towards the Internet of Things, we’re going to fight over models for how objects talk to the internet. Will the internet of the Internet of Things be the real internet, where anything can talk to anything, and it’s up to the thing to figure out if it wants to listen? Or should it be a closed, corporate net where objects only talk to their vendors? We’ll end up resolving this against a backdrop of legal liability, a world in which things sometimes go feral. Who’s responsible when your Philips tunable bulb is reprogrammed to burn down your house? Amazon recently announced their platform for the internet of things, a framework that fills a genuine need, the ability to constrain what can talk to what. But Amazon is going to charge for this privilege, raising questions about whether we want to hand this responsibility to commercial entities.

When we think about the generative, blinking cursor, Zittrain tells us, MIT and other academic institutions created this environment and this paradigm. And universities have a huge role to play in defending and promoting the freedom to tinker and the freedom to innovate. “I fear that this mission has been forgotten, and that people like Peter Thiel, who are encouraging people to innovate outside the university, are helping this be forgotten.” We don’t want these institutions to be oracular, to predict the future of the devices we can use and how we interact with them. But we do want them to be “productively non-neutral”. We need universities to be opinionated about the freedom to innovate and the freedom to create the future.

Helping Launch the NetGain Challenge
http://www.ethanzuckerman.com/blog/2015/02/11/helping-launch-the-netgain-challenge/ – Wed, 11 Feb 2015

This morning, I’m at the Ford Foundation in New York City as part of the launch event for NetGain. NetGain is a new effort launched by the Mozilla, Ford, Open Society, MacArthur and Knight Foundations to bring the philanthropic community together to tackle the greatest obstacles to digital rights, online equality and the use of the internet to promote social justice.

The event is livestreamed here – in a moment, you can hear Tim Berners-Lee and Susan Crawford in conversation about the future of the web.

For the past six months, I’ve been working with Jenny Toomey and Darren Walker at Ford, John Palfrey at Phillips Andover, and friends at these different foundations to launch the NetGain challenges. We’re asking people around the world to propose difficult problems about the open internet that they think governments and companies have not been able to solve. We’re collecting these challenges at NetGainChallenge.org, and asking participating foundations to take the lead on one or more challenges, coordinating a new set of investments in tackling that problem.

I had the privilege of introducing a session at this morning’s event about these challenges. It was an Ignite talk, which means I probably didn’t manage to say all the words I have listed below. But this is what I was trying to say:


45 years ago, the first message was sent over the internet, between a computer at UCLA and one at Stanford University.

25 years ago, Tim Berners-Lee turned the internet from a tool for academics into something most of us use every day, by making it easy to publish and read online – he created the World Wide Web.

What’s followed from Sir Tim’s invention is a transformation of the ways we work, play, shop, argue, protest, give, learn and love.

Given the amazing transformations we’ve seen, it’s easy to forget that the internet is a long, ongoing experiment. The internet as we know it is the result of trying new things, seeing how they break, and working to fix them.

The first message sent on the internet was “login”, as Charley Kline and Len Kleinrock at UCLA were trying to log into a machine at Stanford. They only managed to transmit the letters “lo”, then the system crashed. An hour later, they had it up again and managed to transmit the whole message.

On the internet, we have a long tradition of trying things out, screwing up, fixing what’s broken and moving forward.

Twenty five years into the life of the World Wide Web, there are amazing successes to celebrate: a free encyclopedia in hundreds of world languages, powerful tools for sharing breaking news and connecting with old friends, platforms that help us organize, agitate and push for social justice.

But alongside our accomplishments, there’s still lots that’s broken.

In building an internet where most content and services are free, we’ve also adopted a business model that puts us under perpetual surveillance by advertisers. Worse, our communications are aggregated, analyzed and surveilled by governments around the world.

The amazing tools we’ve built for learning and for sharing ideas are far easier and cheaper to access in the developed world than in the developing world – we’re still far from the dream of a worldwide web.

We’ve built new public spaces online to discuss the issues of the day, but those discussions are too rarely civil and productive. Speaking online often generates torrents of abuse, especially when women speak online.

Despite Sir Tim’s vision of a decentralized web, there’s a huge concentration of control with a few companies that control the key platforms for online speech. And as we use the web to share, opine and learn, we’re quickly losing our legacy, erasing this vast new library as fast as we write it.

These problems may well be unsolvable. But it’s possible that we’ve been waiting for the wrong people to solve them.

In 1889, Andrew Carnegie gave money to build a public library in Braddock, Pennsylvania, the first of 1,689 libraries he funded in the US. These were not just spaces that allowed people to feed their minds, but in many towns, the only spaces open to men, women, children and people of all races.

Newspapers and the publishing houses made knowledge available to those who could afford it, but Carnegie made it available to everyone.

As television became a fixture in the nation’s homes in the 1950s, the Ford Foundation worked with other philanthropists to build a public television system in the US, ensuring that this powerful new medium was used to educate and enlighten as well as to entertain.

The foundations here aren’t going to be able to put internet into every home the way Carnegie brought libraries to every town. But there are problems philanthropy can tackle in unique ways that provide solutions that go beyond what corporations or governments can do on their own.

That’s what led us to the idea of the grand challenge. We’re drawing inspiration here from Google’s moonshots and from the XPrize Foundation. More importantly, we’re taking guidance from the people we work with every day, on the front lines of social innovation, to identify the challenges we need to overcome for the internet to be a true tool for justice and social inclusion.

The speakers you’re about to hear aren’t here with solutions: they’re going to share with us the thorny problems they’re working to solve. We’re asking each foundation that’s a member of NetGain to take the lead on one of these challenges, convening the smartest people in the field, our partners, our grantees, our beneficiaries to understand what we can do together to tackle these deep and persistent problems.

These aren’t the only challenges we need to tackle. We need to hear from you about what problems we can take on and what brilliant guides – like the nine speakers we’re about to hear from – can help us navigate our way through these challenges.

We’re taking this high-risk strategy of aiming at the toughest problems because even if we fall short of our goals, we think we’ll make enormous progress by working together. Every six months, we plan to bring our community together, convene around a grand challenge and start a process of collaboration and experimentation. We may only get to “lo” before we crash, restart and rebuild. But every time we do, we’ll be moving towards a web that’s more open, more just, more able to transform our world for the better.


Please join us at NetGainChallenge.org and help us identify the challenges we should be taking on.

Schneier and Zittrain on digital security and the power of metaphors
http://www.ethanzuckerman.com/blog/2013/04/04/schneier-and-zittrain-on-digital-security-and-the-power-of-metaphors/ – Thu, 04 Apr 2013

Bruce Schneier is one of the world’s leading cryptographers and theorists of security. Jonathan Zittrain is a celebrated law professor, theorist of digital technology and wonderfully performative lecturer. The two share a stage at Harvard Law School’s Langdell Hall. JZ introduces Bruce as the inventor of the phrase “security theatre”, author of a leading textbook on cryptography and subject of a wonderful internet meme.

The last time the two met on stage, they were arguing different sides of an issue – threats of cyberwar are grossly exaggerated – in an Oxford-style debate. Schneier was baffled that, after the debate, his side lost. He found it hard to believe that more people thought that cyberwar was a real threat than an exaggeration, and realized that there is a definitional problem that makes discussing cyberwar challenging.

Schneier continues, “It used to be, in the real world, you judged the weaponry. If you saw a tank driving at you, you knew it was a real war because only a government could buy a tank.” In cyberwar, everyone uses the same tools and tactics – DDoS, exploits. It’s hard to tell if attackers are governments, criminals or individuals. You could call almost anyone to defend you – the police, the government, the lawyers. You never know who you’re fighting against, which makes it extremely hard to know what to defend. “And that’s why I lost”, Schneier explains – if you use a very narrow definition of cyberwar, as Schneier did, cyberwar threats are almost always exaggerated.

Zittrain explains that we’re not debating tonight, but notes that Schneier appears already to be conceding some ground in using the word “weapon” to explore digital security issues. Schneier’s new book is not yet named, but Zittrain suggests it might be called “Be afraid, be very afraid,” as it focuses on asymmetric threats, where reasonably technically savvy people may not be able to defend themselves.

Schneier explains that we, as humans, accept a certain amount of bad action in society. We accept some bad behavior, like crime, in exchange for some flexibility in terms of law enforcement. If we worked for a zero murder rate, we’d have too many false arrests, too much intrusive security – we accept some harm in exchange for some freedom. But Bruce explains that in the digital world, it’s possible for bad actors to do asymmetric amounts of harm – one person can cause a whole lot of damage. As the amount of damage a bad actor can create increases, our tolerance for bad actors decreases. This, Bruce explains, is the weapon of mass destruction debate – if a terrorist can access a truly deadly bioweapon, perhaps we change our laws to radically ratchet up enforcement.

JZ offers a summary: we can face doom from terrorism or doom from a police state. Bruce riffs on this: if we reach a point where a single bad actor can destroy society – and Bruce believes this may be possible – what are the chances society can get past that moment? “We tend to run a pretty wide-tail bell curve around our species.”

Schneier considers the idea that attackers often have a first-mover advantage. While the police do a study of the potential of the motorcar, the bank robbers are using them as getaway vehicles. There may be a temporal gap when the bad actors can outpace the cops, and we might imagine that gap being profoundly destructive at some point in the near future.

JZ wonders whether we’re attributing too much power to bad actors, implicitly believing they are as powerful as governments. But governments have the ability to bring massive multiplier effects into play. Bruce concedes that this is true in policing – radios have been the most powerful tool for policing, bringing more police into situations where the bad guys have the upper hand.

Bruce explains that he’s usually an optimist, so it’s odd to have this deeply pessimistic essay out in the world. JZ notes that there are other topics to consider: digital feudalism, the topic of Bruce’s last book, in which corporate actors have profound power over our digital lives, a subject JZ is also deeply interested in.

Expanding on the idea of digital feudalism, Bruce explains that if you pledge your allegiance to an internet giant like Apple, your life is easy, and they pledge to protect you. Many of us pledge allegiance to Facebook, Amazon, Google. These platforms control our data and our devices – Amazon controls what can be in your Kindle, and if they don’t like your copy of 1984, they can remove it. When these feudal lords fight, we all suffer – Google Maps disappears from the iPad. Feudalism ended as nation-states rose and the former peasants began to demand rights.

JZ suggests some of the objections libertarians usually offer to this set of concerns. Isn’t there a Chicken Little quality to this? Not being able to get Google Maps on your iPad seems like a “glass half empty” view given how much technological progress we’ve recently experienced. Bruce offers his fear that sites like Google will likely be able to identify gun owners soon, based on search term history. Are we entering an age where the government doesn’t need to watch you because corporations are already watching so closely? What happens if the IRS can decide who to audit based on checking what they think you should make in a year against what credit agencies know you’ve made? We need to think this through before this becomes a reality.

JZ leads the audience through a set of hand-raising exercises: who’s on Facebook, who’s queasy about Facebook’s data policies, and who would pay $5 a month for a Facebook that doesn’t store your behavioral data? Bruce explains that the question is the wrong one; it should be “Who would pay $5 a month for a secure Facebook where all your friends are over on the insecure one – if you’re not on Facebook, you don’t hear about parties, you don’t see your friends, you don’t get laid.”

Why would Schneier believe governments would regulate this space in a helpful way, JZ asks? Schneier quotes Martin Luther King, Jr. – the arc of history is long but bends towards justice. It will take a long time for governments to figure out how to act justly in this space, perhaps a generation or two, but Schneier argues that we need some form of regulation to protect against these feudal barons. As JZ translates: you believe there needs to be a regulatory function that corrects market failures, like the failure to create a non-intrusive social network… but you don’t think our current screwed-up government can write these laws. So what do we do now?

Schneier has no easy answer, noting that it’s hard to trust a government that breaks its own laws, surveilling its own population without warrant or even clear reason. But he quotes a recent Glenn Greenwald piece on marriage equality, which notes that the struggle for marriage equality seemed impossible until about three months ago, and now seems almost inevitable. In other words, don’t lose hope.

JZ notes that Greenwald is one of the people who’s been identified as an ally/conspirator to Wikileaks, and one of the targets of a possible “dirty tricks” campaign by HBGary, a “be afraid, be very afraid” security firm that got p0wned by Anonymous. Schneier is on record as being excited about leaking – JZ wonders how he feels about Anonymous.

Schneier notes how remarkable it is that a group of individuals started making threats against NATO. JZ finds it hard to believe that Schneier would take those threats seriously, noting that Anon has had civil wars where one group will apologize that their servers have been compromised and should be ignored as they’re being hacked by another faction – how can we take threats from a group like that seriously? Schneier notes that a non-state, decentralized actor is something we need to take very seriously.

The conversation shifts to civil disobedience in the internet age. JZ wonders whether Schneier believes that DDoS can be a form of protest, like a sit-in or a picket line. Schneier explains that you used to be able to tell by the weaponry – if you were sitting in, it was a protest. But there’s DDoS extortion, there’s DDoS for damage, for protest, and because school’s out and we’re bored. Anonymous, he argues, was engaged in civil disobedience, and intentions matter.

JZ notes that Anonymous, in their very name, wants civil disobedience without the threat of jail. But, to be fair, he notes that you don’t get sentenced to 40 years in jail for sitting at a lunch counter. Schneier notes that we tend to misclassify cyber protest cases so badly that he’d want to protest anonymously too. But he suggests that intentions are at the heart of understanding these actions. It makes little sense, he argues, that we prosecute murder and attempted murder with different penalties – if the intention was to kill, does it matter that you are a poor shot?

A questioner in the audience asks about user education: is the answer to security problems for users to learn a security skillset in full? Zittrain notes that some are starting to suggest internet driver’s licenses before letting users online. Schneier argues that user education is a cop-out. Security is interconnected – in a very real way, “my security is a function of my mother remembering to turn the firewall back on”. These security holes open because we design crap security. We can’t pop up incomprehensible warnings that people will click through. We need systems that are robust enough to deal with uneducated users.

Another questioner asks what metaphors we should use to understand internet security – War? Public health? Schneier argues against the war metaphor, because in wars we sacrifice anything in order to win. Police might be a better metaphor, as we put checks on their power and seek a balance between freedom and control of crime. Biological metaphors might be even stronger – we are starting to see thinking about computer viruses influencing what we know about biological viruses. Zittrain suggests that an appropriate metaphor is mutual aid: we need to look for ways we can help each other out under attack, which might mean building mobile phones that are two-way radios which can route traffic independent of phone towers. Schneier notes that internet as infrastructure is another helpful metaphor – a vital service like power or water we try to keep accessible and always flowing.

A questioner wonders whether Schneier’s dissatisfaction with the “cyberwar” metaphor comes from the idea that groups like anonymous are roughly organized groups, not states. Schneier notes that individuals are capable of great damage – the assassination of a Texas prosecutor, possibly by the Aryan Brotherhood – but we treat these acts as crime. Wars, on the other hand, are nation versus nation. We responded to 9/11 by invading a country – it’s not what the FBI would have done if they were responding to it. Metaphors matter.


I had the pleasure of sitting with Willow Brugh, who did a lovely Prezi visualization of the talk – take a look!

DARPA director Regina Dugan at MIT: “Just Make It”
http://www.ethanzuckerman.com/blog/2011/11/29/darpa-director-regina-dugan-at-mit-just-make-it/ – Tue, 29 Nov 2011

This afternoon, MIT’s Political Science distinguished speakers series hosts Regina Dugan and Kaigham Gabriel, director and deputy director of DARPA, the US Defense Advanced Research Projects Agency, who are here to speak about advanced manufacturing in America. The title for their talk is “Just Make It”, a response Dugan offers to people who ask her to predict the future. “Visionaries aren’t oracles – they are builders.”

She shows a five-minute video of nerd porn, a montage of dismissive predictions about technologies (like Lord Kelvin’s statement about the impossibility of heavier-than-air flight, followed by footage of the Wright Brothers, and then footage from Top Gun). The video continues with observations that the time it takes new technologies to reach 50 million users is rapidly shrinking, pointing to Facebook’s sprint to 100 million users, and offers images of protesters holding banners celebrating the internet. “Still think social media is a fad?” the video asks. It ends with a challenge for the engineers in the room – “just make it”.

Dugan tells us that the decline in America’s ability to build things is a national challenge, if not a crisis. Americans consume an increasing percentage of goods made overseas, and are less likely to be employed making things. Perhaps this reflects on productivity increases, or on currency manipulations, but it has implications, she warns, for national defense. Adam Smith warned that if an industry was critical to defense, it is not always prudent to rely on neighbors for supply.

There have been many years of debate around the inefficiency of America’s design and building of defense systems, Dugan tells us. One extrapolation of the increase in aircraft design cost – sometimes referred to as “Augustine’s Laws” – suggests that by 2054, a single military aircraft will cost as much as the entire military budget at that time. Obviously, it’s dangerous to extrapolate linearly from current data… but if you do, the cost of military systems is growing much more rapidly than defense budgets. “Quite obviously, this is not sustainable.”
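To make the shape of that argument concrete, here’s a minimal sketch of that kind of cost-versus-budget extrapolation. Every number in it is a placeholder I’ve chosen for illustration – none of them are the actual figures behind Augustine’s Laws or the talk:

```python
import math

# Augustine-style extrapolation with made-up numbers (illustration only):
# unit cost grows exponentially while the budget stays flat.
base_year = 1980
unit_cost_0 = 20e6       # assumed cost of one aircraft in 1980 ($20M)
doubling_years = 7.0     # assumed doubling time for unit cost
budget = 500e9           # assumed flat annual defense budget ($500B)

def unit_cost(year):
    """Exponentially growing cost of a single aircraft."""
    return unit_cost_0 * 2 ** ((year - base_year) / doubling_years)

# Year at which one aircraft would consume the entire budget:
crossover = base_year + doubling_years * math.log2(budget / unit_cost_0)
print(f"One aircraft equals the whole budget around {crossover:.0f}")
```

The point isn’t the particular year the toy numbers produce; it’s that an exponential cost curve always crosses a flat (or slowly growing) budget eventually.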

When we design aircraft, she tells us, we’re often designing ten years out. That means we’re trying to understand the threat environment ten years out. That’s risky. “Lack of adaptability is a vulnerability.”

What’s worse is that it’s really expensive. She shows a graph of production costs for the F-22 fighter. The price per unit keeps increasing, and the volume required keeps dropping. This might be because we need to amortize design costs over fewer units. Or it might be because the costs get so high, we simply can’t afford as many units as we wanted. This isn’t just true of the F-22 – it’s true for the Marine EFV project and the Comanche helicopter as well.

This difficulty in building complex systems has implications for defense and for the economy as a whole, she tells us.
“To innovate, we must make. To protect, we must produce.” DARPA is not a policy organization, she tells us, but pushing from a “buy” to a “make” strategy is of strategic importance to the US Department of Defense.

There’s $200 million a year being invested in innovation, looking for ways to change the calculus of cost increase. Can we turn a long problem like vaccine design into one we can solve in weeks? Could we bring tens of thousands of designers into a process and harness their ideas? She suggests that the future of innovation is around increased speed of production and the number and diversity of designs. The rise of electronic design aids revolutionized the semiconductor industry – could this shift in speed and diversity bring a similar paradigm shift?

Dugan tells us that the systems we have to manage complexity are inherited from 1969-era systems engineering. We take complex systems and split them along functional lines – power system, control system, thermal system – then try to put them back together. What happens is that we experience emergent behaviors that weren’t predictable. As a result, we end up with a design, build and test system that we iterate through, trying to solve those emergent problems.

This isn’t the only way to design complex systems. She shows a graph that measures time to design, integrate and test, versus a measure of product complexity, which includes part count and lines of source code. There’s a linear increase in time to build to complexity for aerospace defense systems. Another piece of the graph shows a flat design and test time cycle with increasing complexity – that’s the semiconductor industry. And a third industry – the best in class automotive manufacturers – show a decrease in time with an increase in complexity! How are they pulling this off?

Gabriel tags in here to explain how the semiconductor industry achieved gains in complexity without extending the timeframe necessary to design and test their products. The key factor was a decision to control for time. “If we aren’t out there with new chips in 18-24 months, we’ll miss the next generation of PCs.” So the principles of VLSI design were optimized around producing new products on a time cycle as tight as that for less complex integrated circuits.

Two major design innovations characterize the VLSI shift, Gabriel tells us. First, it’s critical to decouple design and fabrication, a shift that was comparatively easy for circuit designers to accept. The second was initially heresy: you needed to stop optimizing each transistor, and sacrifice component performance for ease of system design and reliability.

We’ve seen a similar move in computer programming, a shift away from assembler, which produces very efficient code that’s hard to test, to higher level programming languages. Those languages abstract operations, which leads to a decrease in performance efficiency, but since we’re no longer as limited by how many operations a computer can perform, the design speed benefits outweigh the performance compromises. He hints that we may be seeing some similar shifts in biological sciences as well.

How does this work in terms of DARPA projects? Dugan retrieves the mic to speak about the Adaptive Vehicle Make program, designed to build a new infantry vehicle in two years instead of ten. A first step is developing a language to describe and design mechano-electric systems so they can integrate more smoothly. The vehicle, she tells us, will be flexibly manufactured through a “bitstream-configurable foundry-like manufacturing capability for defense systems” capable of “mass production in quantities of one”.

With facilities that can accept a design and custom-forge parts, she believes we can move to an increasingly democratized design system, one that enables many more people to design and submit systems to foundry-like fabrication facilities. We’ll design vehicles “using the most modern techniques of crowd infrastructure and open source development,” in a program called VehicleForge.mil. (While a valid URL, there’s no webserver at that address. Just wanted to save you a Google search or two.)

Critics tell her this approach won’t work. But Dassault recently designed the Falcon 7x aircraft using “digital master models, by tail number, for aircraft” – i.e., building extremely complex individual models for each aircraft they build. The models only do geometric interference (i.e., they test whether the parts fit together), but they’ve halved the time needed to produce a new plane. Critics claim that the analogy between integrated circuits and military vehicles is an inept one. But in terms of part count, ICs are much more complex than vehicles. What’s complex is the diversity of components used in the combat vehicle.

A new experiment, conducted in cooperation with Local Motors, a small-scale vehicle fabrication company (see my notes on the founder’s Pop!Tech talk in 2009), invites designers to compete to design a combat support vehicle, the XC2V. $10,000 in prizes were offered, and instead of getting the 3 designs they get in an invitation-only design scenario, they received 159 designs, 100 of which the judges deemed “high calibre”. It wasn’t a clean sheet of paper design – the chassis and drivetrain were designed by Local Motors – but it was effective at expanding the idea pool, and led to a functioning design within four weeks.

The power of the crowd may be even greater in a field like protein folding, where humans are still able to solve some problems better than algorithms. Foldit is the brainchild of a biochemist, a computer scientist and a gamer, who decided to turn protein folding into a game, building “a Tetris-like environment for folding”. 240,000 people have signed up to play, but what’s really cool is “the emergence of 5 sigma savants for protein folding, some of whom have very little biochemistry training.” Recently, Foldit solved a key protein – a retroviral protease from SIV in rhesus monkeys – which had been unsolved for 15 years. The community folded it in 10 days. Projects like this, she tells us, make her a believer that bringing many diverse minds to a problem and increasing the pace of building will increase the speed and diversity of innovation.

Gabriel offers three other examples where massive innovations are possible through new methods.

Optics are the dominant cost in many imaging and sensor systems. It turns out that making light do something different – bending, focusing, diffusing – requires materials and systems that are heavy, complex and expensive. M-GRIN – manufacturable gradient index optics – moves beyond lenses that are made out of a single material with a single index of refraction. Instead, they use a stack of multiple layers and films, combined via heat and pressure, to make lenses that are smaller and lighter. A test around a shortwave infrared lens produced a device that was 3.5x smaller and 7.5x lighter. That’s a breakthrough… but the real innovation is creation of a set of design rules that let you go from an application to a recipe for combining materials into the lens you need.

In telling us about maskless nanolithography, Gabriel tells us “Moore’s law is dead in circuit design, though the corpse doesn’t know it yet.” The culprit is heat – we can make tighter and smaller circuits, but they’re getting very difficult to cool. As critical is cost. Working at ultra-small line widths is prohibitively expensive: it’s hard to justify a set of 45 nanometer masks to create a few hundred chips for a defense system when building those masks costs tens of millions of dollars.

We know how to do lithography without masks, but it’s traditionally been very slow. So now designers have built a system that creates and bends an electron beam, then splits it into millions of beamlets, controlled by a “dynamic pattern generator”. Program that pattern generator, and it allows millions of writing operations to happen at the same time, leading to a current working speed of 10-15 wafers per hour, the minimum required to produce custom ICs for military applications.

His third example is the accelerated manufacturing of pharmaceuticals, a strategy he tells us was Plan B in 2009-2010 if the H1N1 flu virus had resurfaced. It’s very hard to produce vaccines quickly – egg-based strategies require a piece of virus and many thousands of chicken eggs. These methods work, but can require 6-9 months to build up a stockpile. A new method uses tobacco plants to produce custom proteins, working from strands of DNA in the virus. Envision a football-field sized building filled with lights and trays of tobacco plants. A facility like that can now produce a million doses a month of a novel vaccine. In scaling up capacity to 100 million doses per month, the key problem turned out to be lighting – it was impossible to light everything without switching to LED bulbs. Once they made the switch, they had a new opportunity – tuning the spectrum to optimize production. Using an experiment of “high school science complexity”, they grew plants under different lighting conditions for a few weeks, and determined a mix of blue and red frequencies that doubles protein production.

Gabriel ends with a slide quoting MIT scientist Tom Knight:

“The 19th century was about energy.
The 20th century was about information.
The 21st century is about matter.”

If we embrace this challenge, Gabriel tells us, we will be able to make things at the cost at which we used to produce and stockpile them in bulk, and this shift will change how we innovate.


Above this line are my notes; below, my reaction:

I thought the DARPA folks gave an impressive talk, inasmuch as they got me thinking about a problem I’d not considered – the insane cost and time frame of producing military equipment. But for a talk sponsored by the political science department, it seemed woefully lacking in discussion of politics or markets. If I were trying to explain the difference in production processes between military vehicles, consumer automobiles and integrated circuits, I suspect I might look at the power of markets. IC manufacturers needed to build chips quickly because customers wanted to buy newer, faster chips… and would buy other chips if the manufacturer wasn’t fast enough. Ditto for automobile companies.

The defense industry is different. It’s very hard to terminate a weapons system, even if it’s massively over time and over budget. The competition happens well before a product is built. Discovering that the F-22 production isn’t going well doesn’t create a market opportunity for another company to produce a better product faster – the company producing the F-22 is going to get paid, even if they take an absurd time to produce the product.

I admire the approach Dugan and Gabriel are putting forward, and certainly appreciate that it plays well to a room full of engineers. But I was very surprised not to hear questions (and I only caught the first five or six) about whether the DoD purchasing process can be reformed so long as military budgets are sacrosanct. We’re currently facing mandatory budget cuts with the failure of the budget supercommittee, and conventional political wisdom suggests that the social service cuts will go through, while the defense ones will not. How do you encourage companies to innovate when they’re currently amply rewarded for dragging design and production out over decades? How do you innovate without market pressures?

My homogeneously left-wing family was talking politics over the Thanksgiving dinner table and realized the solution to America’s current social problems was to simply adopt the Egyptian political system – let the military run everything. The right doesn’t like cutting military budgets, but is okay when the military provides state-sponsored healthcare and subsidizes education. All we need to do is ensure all Americans are employed by the US military and we can build a thriving, successful welfare state. The same absurdity behind that suggestion is what makes DARPA’s ideas so hard to implement – if there’s no pressure to cut military budgets, anything is possible… except real innovation around cost and efficiency.

The science of food… and of resetting your expectations
http://www.ethanzuckerman.com/blog/2011/09/08/the-science-of-food-and-of-resetting-your-expectations/ – Thu, 08 Sep 2011

This is one of the more surreal weeks of my recent life. On Sunday, I took possession of an adorable and small apartment near Inman Square in Cambridge, fought my way through Ikea and spent the first night of my new itinerant academic existence in Cambridge. Monday, I moved into my office in the Media Lab, using an ID borrowed from a student, as my own ID card isn’t turned on yet. Tuesday, I met with my new master’s students and other colleagues at the Center for Civic Media, then retreated to the Berkman Center to be part of their iLaw series. And then I found myself in a lecture hall in the Harvard Science Center, attending a lecture on cooking, given in part by David Arnold, one of the leading minds in haute cuisine… and a guy I used to hang out with more than twenty years ago. It feels like a very strange compression of history into a single (very long) weekend. But it was a great talk, so I thought I’d share it with you as well. (And the fact that I’m not posting until two days later helps show how crazed the week has been…)


The lecture is a public talk associated with a Harvard class called “Science and Cooking – From Haute Cuisine to Soft Matter Science”, taught by David Weitz. It’s a science class focused on the chemical and physical changes associated with cooking. The text for the class is “On Food and Cooking” by Harold McGee, the opening speaker. McGee wrote the book in Cambridge and tells us, “In the late 1970s, I never dreamed Harvard would give a course on cooking – I can make a living now.”

McGee is accompanied by Arnold, who he introduces as the director of culinary technologies at the French Culinary Institute in NYC and “the one guy in the world who knows the most about cutting edge tech in the modern kitchen.” Arnold insisted that the class needed a definition of cooking, and so we’re working with this one: “The preparation of food for eating, especially by means of heat”. The term comes from the Latin coquere, “to cook, prepare food, ripen, digest”. Cooking is the application of energy and ingenuity to change foods so they’re easier, safer and more pleasurable to eat.

McGee quotes Arnold as observing that if a peach is perfectly ripe, the best thing you can do with it is put it on a plate with a knife. Nature, McGee argues, wants us to eat peaches so that we’ll carry seeds far and wide. What we do in cooking is, in part, trying to approach the complexity and the balance of the perfectly ripe piece of fruit.

The first stop on a history of cooking has to be fire. McGee references Richard Wrangham’s “Catching Fire: How Cooking Made Us Human”. Cooking allows us to turn raw starch into something digestible. We needed these calories, Wrangham argued, to build our big brains. In that sense, learning to cook may literally have helped us become human. For us to tell if Wrangham is right, we need to see evidence of cooking much further back in history. We currently see evidence of cooking from 100,000 years ago, while Wrangham speculates we should see evidence 1 million years back.

By the Middle Ages, cooks had figured out how to make gelatins and clarify them, and how to do very complex decorative work for the courts. They’d also invented food as entertainment. We see a recipe from the 15th century titled “To Make a Chicken Sing when it is dead and roasted”. It involves stuffing a chicken with sulfur and mercury and sounds like a very bad idea… but it is amusing, and that notion of food as amusement is returning to modern kitchens today.

By 1681, we see the introduction of a very different way of cooking – the pressure cooker. Denys Papin was a member of the Royal Society, working with Boyle on gases. He figured out that you could cook food using pressurized water and speed up cooking processes. Because the Royal Society’s members were mostly bachelors, there’s a wonderful body of literature about dinner parties where scientists brought ingredients and Papin cooked and served them.

Arnold jumps in to explain that pressure cookers allow us to cook at temperatures other than what we could normally achieve. This leads to some fun discoveries. He read an influential book on pressure cooking that advised increasing the use of onions in pressure cookers because the onion flavor dissipates. So he pressure cooked other similar foods, and discovered that foods like garlic lose their stink when pressure cooked. “The sulfur compounds in horseradish get totally knocked out so you can eat it by the bushel.” Mustard seeds cooked with vinegar puff up like caviar. And other effects can’t be replicated any other reasonable way. “Pressure cookers speed up Maillard reactions – you can pressure cook an egg for an hour and get browning that you otherwise wouldn’t get without cooking for several days.”
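(An aside of my own, not from the talk: the reason a sealed pot cooks hotter is that water’s boiling point rises with pressure. A minimal sketch, assuming the commonly tabulated Antoine-equation constants for water above 100 °C:)

```python
import math

# Rough sketch (my assumption, not from the talk): estimate water's boiling
# point at a given pressure from tabulated Antoine constants for water
# (T in Celsius, P in mmHg, valid roughly 100-374 C).
A, B, C = 8.14019, 1810.94, 244.485

def boiling_point_c(pressure_atm):
    """Temperature at which water's vapor pressure equals the given pressure."""
    p_mmhg = pressure_atm * 760.0
    return B / (A - math.log10(p_mmhg)) - C

print(f"1 atm: {boiling_point_c(1.0):.0f} C")  # ~100 C, open pot
print(f"2 atm: {boiling_point_c(2.0):.0f} C")  # ~121 C, typical pressure cooker
```

That extra twenty-odd degrees is what lets reactions like the Maillard browning Arnold mentions run far faster than they would in an open pot.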

McGee notes that Arnold hasn’t mentioned his durian experiments. Arnold sheepishly explains that this is a lesson in the importance of repetition. Durian smells bad (or wonderful, if you grew up in certain corners of Asia) because of sulfur compounds, and so you should be able to knock out the smell in a pressure cooker. “So I threw some stuff with durian into a pressure cooker and got the most incredible Durian caramel.” But he’s never been able to replicate it, with more than a month’s worth of attempts. “Don’t be a schmuck,” he tells us – document your work so you can replicate.

Replicability is, of course, the essence of experimental science. In 1770, McGee tells us, Ben Franklin was spending a huge amount of time on ships, traveling between the US and France. He noticed that when the cooks threw out the waste from cooking, the wake behind the ship calmed. He later tried an experiment in Clapham Pond in London, putting a teaspoon of oil onto a pond on a windy day. The water calmed over an area of half an acre. Had Franklin made a further leap, he could have pretty easily calculated the size of a molecule based on the experiment, assuming that the layer of oil eventually was a single molecule thick.
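(Had he made that leap, Franklin would have gotten remarkably close. A back-of-the-envelope sketch of my own – the teaspoon and half-acre conversions are my assumptions, not figures from the talk:)

```python
# Back-of-the-envelope estimate of molecule size from Franklin's observation.
# Assumed conversions (mine, not from the talk): 1 teaspoon ~ 5 mL,
# half an acre ~ 2,000 square meters.
oil_volume_m3 = 5e-6      # one teaspoon of oil
spread_area_m2 = 2.0e3    # half an acre of calmed water

# If the film is one molecule thick, thickness = volume / area.
thickness_m = oil_volume_m3 / spread_area_m2
print(f"Estimated film thickness: {thickness_m * 1e9:.1f} nm")  # ~2.5 nm
```

A couple of nanometers is the right order of magnitude for an oil molecule, which is why this little experiment is so often cited.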

To get a sense for the molecular scale, Arnold gives us a demonstration of Dragon’s Beard candy, a preparation seen in China, Turkey and Iran. Cook sugar to a particular hardness and you can stretch and fold it at will. Arnold takes a centimeter-thick piece of sugar, turns it into a loop, and stretches it. Folding it once, it’s now two loops. He repeats until we have over 15,000 strands, each about a micron thick. It’s flavored with cocoa, but Arnold likes to serve it with vinegar and mustard powder, with peanuts wrapped inside.
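(The strand count is just repeated doubling – the fold count below is my own inference, since Arnold only mentions the final number:)

```python
# Each stretch-and-fold doubles the number of sugar strands.
strands, folds = 1, 0
while strands <= 15000:   # "over 15,000 strands"
    strands *= 2
    folds += 1
print(f"{folds} folds -> {strands:,} strands")  # 14 folds -> 16,384 strands
```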

McGee would like us to take Count Rumford as seriously as we tend to take Franklin. Rumford was a Colonial New Englander who was on the wrong side of the war, so he spent much of his career in England. Amongst his many discoveries, Rumford discovered that slow cooked meat is delicious, a discovery that’s come into fashion recently with sous vide cooking. Rumford accidentally discovered the technique by trying to cook a leg of mutton in his potato drier, and left it overnight. In the morning, he encountered an “amazing aroma”. And because he was scientifically minded, he replicated the experiment and tried an objective taste test. At a cocktail party, he cooked one leg of mutton over a fire and another using the slow technique, put them at opposite sides of the room, and weighed the remnants – the slow-cooked mutton was far more popular.

The opposite of Rumford was Justus Liebig, a German chemist who was a theoretician, not an experimentalist. Working only from his own “brilliance”, not from experiments, Liebig introduced a new way of cooking meat – searing it to seal in the juices. It’s revolutionary, but also really bad. Apparently he never actually tasted it.

In 1969, the British scientist Nicholas Kurti suggested that we bring scientific methods back to ordinary, everyday phenomena. “I think it is a sad reflection on our society that while we can and do measure the temperature in the atmosphere of Venus, we do not know what goes on inside our soufflés”. His investigations were part of a movement towards “soft matter science”, a study of phenomena like soap bubbles that led to a 1991 Nobel prize.

McGee found himself investigating these phenomena in 1984 when he wrote his book on the history of food. In collaboration with scientists, he began testing a Julia Child assertion about whipping egg whites in a copper bowl – Child advocated always whipping in copper. Experiments testing whipping in copper demonstrated that it took a much longer time, but led to lighter whites. The paper was eventually accepted by Nature, though one reviewer commented, “The science is good, but the subject is fluffy.”

While much of what’s emerged in science in the kitchen, like molecular gastronomy, is fairly recent, nouvelle cuisine is very old. In 1759, a poem was published that read:

Every year nouvelle cuisine
Because every year tastes change;
And every day there are new stews:
So be a chemist, Justine.

French cooking, historically, has been far from experimental. Classic French cooking as compiled by Escoffier and others codified cuisine to the point where it was difficult to innovate, since the classic textbook offers 100 “correct” recipes for beef tenderloin. McGee cites Michel Bras as helping invert these dynamics with the melting chocolate cake, an inversion of the “correct” idea that a cake is surrounded by a sauce – instead, the cake contains a ganache. A later dish, the gargouillou, recreated a salad as a walk through a garden, using whatever ingredients were most appropriate on a given day.

Chef Jacques Maximin was influenced by these experiments and observed, “To be really creative means not copying.” His maxim struck a chord especially with Ferran Adria, who recreated the gargouillou as an endlessly surprising salad – nothing is quite what it seems. Adria went on to thoroughly revolutionize cuisine as we know it, with techniques like flavored foams and the spherification of ingredients like melon into texturally odd balls of flavor.

He’s had many followers. Joan and Jordi Roca use rotary evaporators to separate aromas from ingredients – this makes possible a dish of foods that are shades of white which have flavors usually associated with visually dark ingredients. Jose Andres experimented with a chemical most often used to make cough drops, offering a bonbon of liquid olive oil within a clear shell. Wylie Dufresne uses an enzyme called “meat glue” to offer a chicken nugget that’s white meat wrapped in dark, wrapped in skin. And now the field has been exhaustively documented by Nathan Myhrvold, who’s published a massive, five-volume book, Modernist Cuisine.


At this point, McGee gives the reins to Arnold, who offers a rapid-fire walk through some of his favorite techniques and his creative process. He shows us a Japanese ring that features a wavy woodgrain effect, produced by beating two different metals together. Arnold achieved something similar using fish as a way of persuading Hobart, the cooking machine company, to give him a really badass slicer. Using meat glue and casein, he glues salmon and fluke together and slices them into a thin sheet that looks a little like mortadella and a bit like wood grain. It’s served with crème fraîche seasoned with nitrogen-frozen herbs, a fennel apple salad infused with curry, and pressure-cooked mustard seeds – a veritable tour of modernist technique on a plate.

(The nitrogen chilled herbs allow fresh herbs to be broken into very small pieces, as you would break up a dried herb, but maintain the fresh flavor and texture. Arnold recommends you blanch your fresh herbs, flash freeze in liquid nitrogen, shatter into tiny pieces and pass through a chinoise, using only the tiny bits that escape the mesh.)

Using agar, a gelling agent made from seaweed, Arnold produces a concord grape jelly, a thick, stiff substance. He points out that it cuts cleanly and can’t be put back together. But if you break it violently – in a blender, say – you get a different effect: a microgel or fluid gel. It looks like a puree on the plate, but tastes like juice in the mouth.

Agar works well as a clarifier too, in lower concentrations. Arnold makes a loose gel of lime juice, then uses a whisk to separate it into “whey and curds”. He passes this through cheesecloth, making rude comments about “gently massaging the sack”, before producing a liquid that looks very much like water, but turns out to have intense lime flavor.

We clarify liquids, he tells us, because then we can infuse them into other foods. “We can make a cucumber better by adding liquor to it… we can make a lot of things better by adding liquor to them.” Injection techniques work better with clear liquids, and Arnold shows us how to infuse a cucumber with lime and sugar in a vacuum machine. The vacuum pulls air out of the cucumber and rapidly threatens to boil it, since liquids boil at lower temperatures in a vacuum. (Arnold recommends you heavily chill your ingredients as you vacuum infuse…) While the air is sucked out, the liquid is incompressible, and when air floods back into the chamber as he turns the vacuum off, liquid infuses into the cucumber in a flash, turning the vegetable into something that looks like stained glass. “It’s one way to get something that looks cooked, but still has crisp, clean lines to it.”

You can rapidly infuse using pressure as well. Arnold puts vodka and coffee into an ISI whipped cream maker, and uses nitrous oxide to force the coffee into the vodka. What results is heavily flavored, but not carbonated – the tingle of carbonation comes from carbon dioxide escaping from solution. Nitrous oxide offers pressure and fluff without carbonation.

Arnold offers his advice on carbonating some of his favorite things. As with infusion, clarified liquids work better. “If you’re going to carbonate liquor – which I highly recommend – you’re going to need more pressure than carbonating water because CO2 is more soluble in alcohol than in water.” You can force-carbonate a wine at 30 psi, sake at 35 psi, and liquors at about 40 psi.

Why would you infuse vodka with coffee? “The flavors you pull out of a product are dependent on time, temperature, pressure.” You don’t just get yummy coffee vodka – you can get different flavors than you’d ever experience through conventional means.

It must be fun to have a kitchen where liquid nitrogen is as common as hot water. Arnold chills a glass with liquid nitrogen, pointing out that it’s cold only on the inside, and doesn’t generate condensation. He pours himself a carbonated gin and lime concoction as the audience is served marshmallows frozen with liquid nitrogen. McGee returns to explain the history of the marshmallows – they were served at The Fat Duck as both a palate and “mind cleanser”. The chef responsible wanted to reset his diners’ expectations, so he served them a frozen marshmallow flavored with lime, tea and vodka. The heat of your mouth melts the treat and you find yourself with vapors pouring from your mouth and nose. We have a similar experience with the frozen marshmallows, and like the Fat Duck diners, find ourselves laughing, our expectations reset.

Protocol.by – sharing how you want to be contacted
http://www.ethanzuckerman.com/blog/2011/04/19/protocol-by-sharing-how-you-want-to-be-contacted/ Tue, 19 Apr 2011 18:09:16 +0000

Hugo Van Vuuren, a Berkman Fellow and graduate student at Harvard’s Graduate School of Design, and Gregg Elliott, a researcher at MIT’s Media Lab, tell us that we’re experiencing a global communications “crisis”, one we can address through better communications protocols.

Hugo sets the stage at today’s Berkman Center lunch talk, showing us the beginning of this video from design firm JESS3:

JESS3™ / The State of The Internet from JESS3 on Vimeo.

He summarizes the crisis, as he sees it, with a quote from Swiss designer Tina Roth Eisenberg: “Too many channels. Too many messages. Too much noise. Too much guilt.”

Lots of people are trying to build tools to cope with this flood of information. (Google’s priority inbox is one possible example of a tool to manage an overload of messages.) There’s less effort focused on overcoming the guilt. When we see people talking about reaching “inbox zero” or declaring “email bankruptcy“, they are looking for ways to deal with the guilt.

Even in an age of social media, mail and phone contact are massive in relation to new forms of communication. Randall Munroe’s legendary Online Communities map from 2005 has been updated for 2010, showing that massive social networks like Facebook are dwarfed by SMS, phonecalls and email.

Some recent articles in the New York Times – “Don’t Call Me, I Won’t Call You“, “Keep Your Thumbs Still When I’m Talking to You” – suggest that we’re seeing a conflict in cultural norms. Some people (me, for one) don’t answer the phone except for scheduled phonecalls, which is deeply confusing for people who consider phones the primary way to contact people. Some people check mobile phones while carrying on conversations, which can feel extremely rude to people who focus on face-to-face contact. Hugo points out that there can be differences in community protocol from one side of a university to another: “The Media Lab is much more of a phone-centered place than the GSD. At the GSD, email is something you do at your desk…”

We’re starting to see the explicit emergence of communications protocols. danah boyd‘s “email sabbatical” involves discarding all email received during a vacation – if you want to reach her, her autoresponder tells you, email her again once she’s come home. Tim Berners-Lee’s home page includes a complex protocol about what you should and should not email him. Harvard CS professor Harry Lewis suggested to Hugo that one of the massive problems in organizing a conference is figuring out how to contact academics, who tend to hide between different media, letting some emails go to administrative assistants while “real”, direct email addresses are carefully preserved commodities.

Hugo shows five.sentenc.es, an intriguing attempt to simplify email conversations by declaring that emails will be answered in five sentences or less. The hope is that, by declaring a different protocol, it will no longer be considered rude to answer emails compactly and succinctly. But this is “a kernel, not a generalized idea” for communications, Hugo offers. We need something broader and more inclusive.

One option is “stop and go signaling”, which we see on tools like instant messenger. But these status messages, which Greg explains used to be expressive, much like Facebook status messages, have turned into their own sort of protocol. “Away usually means that you’re at your keyboard, but busy.” It’s a step in the right direction, but perhaps too limited a vocabulary.

Hugo shows us a code of manners presented by the “Children’s National Guild of Courtesy”, a British organization from early last century. These days, there are no single norms for behavior set by institutions like this one. Norms are now set by individuals, or illustrated by example by leaders within communities.

To address these issues, Greg suggests that we need to:
– Define our rules of engagement
– Organize a system to execute on those rules, and
– Share our rules and expectations

Protocol.by is a first pass at defining and sharing these rules of engagement. Coming out of a closed alpha test shortly, it lets you register an account and compactly state the ways in which you’d prefer to be contacted. Greg explains that he dislikes spontaneous phonecalls – his protocol tells people not to call him before noon, and not to expect an answer to unscheduled calls. For emails, he urges correspondents to avoid polite niceties and get to the point. For people unsure of how to reach him, these protocols make it easier to get in touch in a way that’s minimally intrusive and maximally effective. (I have a protocol, if you’re interested…)
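Protocol.by’s actual data model isn’t described in the talk, so purely to make the idea concrete, here’s a hypothetical sketch of what a personal contact protocol might look like as structured data. The field names and values below are my own invention, loosely modeled on Greg’s stated preferences, not the site’s real schema.

# Hypothetical sketch only: Protocol.by's real schema isn't public in this talk.
# Field names and values are invented, loosely following Greg's preferences above.
greg_protocol = {
    "channels": ["email", "im", "phone"],   # in rough order of preference
    "phone": {
        "spontaneous_calls": False,         # don't expect an answer to unscheduled calls
        "no_calls_before": "12:00",         # local time
    },
    "email": {
        "style": "skip the niceties, get to the point",
        "response_promise": None,           # no explicit turnaround promised
    },
}

def preferred_channel(protocol):
    """Return the channel this person would rather you try first."""
    return protocol["channels"][0]

print(preferred_channel(greg_protocol))     # -> email

Even a structure this small captures the two things the site seems to be after: a ranked list of channels, and channel-specific expectations that a stranger can read before reaching out.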

The goal, Hugo offers, is for the site to become a “social anchor” to help bridge across multiple identities and online presences. In the long term, it could plug into location-based services and offer richer, more targeted information on how to contact people politely. A group could use protocol.by with voting systems that could help group protocols emerge.

Going forward, protocol.by might offer suggested protocols based on your identity – if you’re a technophile, you might want to be contacted with email and IM, not phone, for instance. Over time, these might emerge as a small set of cultural norms, rather than purely personal norms.

There are dozens of questions from the Berkman crowd, as well as many observations phrased as questions. Some of the highlights, to the best of my reporting ability:

Q: Is there a revenue model for protocol.by?
A: Not at present – it’s a research project. In the long run, there might be fun ways to use the data, perhaps the way OKCupid analyzes dating information, in a way that might have financial value.

Q: Protocol-free communication leaves a lot of ambiguity in communications, which can be a good thing. Is someone not answering their email because you contacted them the wrong way, or because they don’t want to talk to you? Is it such a good idea to squeeze out this ambiguity?
A: You’ve got a good degree of freedom with the tool in how explicit you want to be. If you offer promises – “Emails will be answered within 48 hours” – you eliminate ambiguity. But a prioritized list of communication protocols is still pretty ambiguous.

Q: This system is very elegant, but it doesn’t recognize that you might communicate differently with a babysitter calling you about an emergency and an undergrad asking to interview you for a paper. How does the system handle this?
A: Protocols will likely differ for complete strangers versus friends and family. Protocol.by is mostly for people outside your circle of trust.

Q (David Weinberger): How many users do you need for this to be an effective research project and how will you get them?
A: There are about 500 users thus far. Having a few thousand may let us run bigger experiments. We’ll get more by embedding the tool into webpages and social networks.

Q (David Abrahms): I might want to be contacted via phone, but if I’m in Beijing, I’d like the system to accommodate that.
A: Great idea.

Q (David Weinberger): There’s certainly a need for more metadata about your norms when you communicate with people outside your community. We need it for IP issues as well – Creative Commons helps us communicate what you can do with your content. Maybe this is a model for getting people to adopt this protocol?
A: Figuring out how to embed this well is going to help us work through these issues.


David took notes, too…

Those ducking yankers who designed T9
http://www.ethanzuckerman.com/blog/2010/11/10/those-ducking-yankers-who-designed-t9/ Wed, 10 Nov 2010 14:39:54 +0000

Someone on Twitter pointed me to Damn You Auto Correct, a site that’s at least as narrow in focus as your average LOLCats site, but pretty funny nevertheless. I suppose it’s useful mostly as a warning not to invite someone over for gelato unless you’ve really thought things through. Then again, anyone who’s listened to Benjamen Walker’s 13th episode of Too Much Information – where an innocuous text message to a notoriously cranky rock star is transformed into a curt insult by autocorrect – already knows how wrong this can go. Suffice it to say, I’ve never typed “NP. Thanks so much” on my iPhone again.

It does seem like the manufacturers of autocorrect should keep up with the times in editing their dictionaries, realizing that “NP” has become pretty common slang and finding a different way to correct misspellings without alienating quick-fingered radio producers and SMSing computer scientists. And then I remembered a routine from British comics Armstrong and Miller:

The key phrase for me: “Our job, Gilbert, is to offer people not the words they do use but the words they should use.”

And you thought technology was value neutral…

Crisis Commons, and the challenges of distributed disaster response
http://www.ethanzuckerman.com/blog/2010/09/02/crisis-commons-and-the-challenges-of-distributed-disaster-response/ Thu, 02 Sep 2010 17:52:19 +0000

Heather Blanchard, Noel Dickover and Andrew Turner from Crisis Commons visited the Berkman Center Tuesday to discuss the rapidly growing technology and crisis response space. Crisis Commons, Andrew tells us, came in part from the recognition that the volunteers who respond to crises aren’t necessarily amateurs. They include first responders, doctors, CEOs… and lately, they include a lot of software developers.

Recent technology “camps” – Transparency Camp, Government 2.0 Camp – sparked discussion about whether there should be a crisis response camp. Crisis Camp was born in May, 2009 with a two-day event in Washington DC which brought together a variety of civic hackers who wanted to share knowledge around crisis technology and response. The World Bank took notice and ended up hosting the Ignite sessions associated with the camp, giving developers a chance to put ideas for crisis response in front of people who often end up providing funds to rebuild after crises.

The World Bank wasn’t the only large group interested in working with crisis hackers. Google, Yahoo! and Microsoft came together to found the Random Hacks of Kindness event, designed to let programmers “hack for humanity” in marathon sessions around the world.

While these events preceded the earthquake earlier this year in Haiti, that crisis was the seminal event in increasing interest in participating in technology for crisis relief efforts. A crisis camp to respond to the Haitian earthquake involved 400 participants in five cities and pioneered 13 projects. Over time, the crisis camp model spread to Argentina, Chile and New Zealand, with developers focused on building tools for use in Haiti, Chile and Pakistan. Blanchard explained that the events provided space for people who “didn’t want to contribute money – they wanted to do something.”

The camps had some tangible outcomes:
– I’m Okay, a simple application that allows people to easily tell friends and family that they’re okay in an emergency situation, was developed at Random Hacks of Kindness
– Tradui, an English/Kreyol dictionary for the Android was developed during the Crisis camps
– Crisis camps also developed a better routing protocol to enable point to point wireless between camps in Haiti, writing new drivers in 48 hours that were optimized for the long ping times associated with using WiFi over multi-kilometer distances

Perhaps the most impressive collaboration to come from the Crisis Camps was work on OpenStreetMap for Port-au-Prince. Using satellite imagery released by the UN, a team created a highly detailed map, leveraging the work of non-programmers to trace roads on the satellite images and of diasporans to identify and name landmarks and streets. As the map improved in quality, the volunteers were eventually able to offer routing information for relief trucks, based on road damage that was visible on the satellite imagery. A convoy would request a route for a 4-ton water truck, and volunteers would use their bird’s eye view of the situation – from half a continent away – to suggest the safest route. Ultimately, the government of Haiti requested access to the information, and Crisis Camps provided not only the data, but training in using it.

The conversation turned to the challenges Crisis Camps have faced in making their model work:
– About 1/3rd of the participants are programmers. The others range from the “internet savvy” to those with complementary skills.
– Problems and requirements are often poorly defined
– It’s challenging to match volunteers to projects
– There’s a shortage of sustainable project management and leadership
– Projects often suffer from undocumented requirements and code, and few updates on project status.
– Little work focuses on usability, privacy and security.
– Code licensing often isn’t carefully considered, and issues can arise about reusability of code on a licensing basis.
– Projects can be disconnected from what’s needed on the ground
– Disconnection happens in part because relief organizations don’t know what they want and need and are too busy to work with an untested, unproven community
– Volunteer fatigue – the surge of interest after a disaster tends to dissipate within four weeks
– There’s a lack of metrics and performance standards to evaluate project success.

The goal is to move from a Bar Camp/Hackathon model to a model that’s able to build sustainable projects. This means bringing project management into the mix, and asking hard questions like, “Does this project have a customer? Is it filling a well-defined need?” It also means building trust with crisis response organizations and groups like the World Bank and FEMA, who can help bring volunteer technology groups and crisis response groups together.

Crisis Commons sees itself as mediating between three groups: crisis response organizations like the Red Cross; volunteer technology organizations like OpenStreetMap; and private sector companies willing to donate resources. Each group has a set of challenges it faces in engaging with these sorts of projects.

Crisis response organizations have a difficult time incorporating informal, ad-hoc citizen organizations into their emergency response plans. There’s a notion in the crisis response space of “operating rogue” if you’re not formally affiliated with an established relief organization… which further marginalizes volunteer tech communities. Many CROs have little tech understanding, which means they aren’t able to make informed decisions about collaboration with technical volunteers. In a very real way, crises are economic opportunities for relief organizations – that reality doesn’t breed resource sharing, which, in turn, gets in the way of sharing best practices and lessons learned.

Volunteer tech communities frequently don’t understand the processes used by CROs, and frequently fail to understand that there’s often a good reason for those processes. While VTCs provide tremendous surge capacity that could help CROs, if there’s no good way for CROs to use this surge capacity, it’s a waste of effort on all sides. At the same time, tech communities inevitably suffer from the “CNN effect” – when crises are out of sight, they’re out of mind, and participation slumps. This is particularly challenging for managing long-term projects… and tech communities have massive project management and resource needs. Finally, successful VTCs can find themselves in a situation where they have a conflict of interest – they’re seeking paid work from relief organizations and may choose to cooperate only with those who can support them in the long term.

Private sector partners usually participate in these projects through their business development or corporate social responsibility divisions… while cooperation with the other entities often requires technical staff. Response organizations are often the clients of private sector players – the Red Cross is a major customer for information systems – which can create financial conflicts of interest. And working with large technology companies often raises intellectual property challenges, especially around joint development of software.

Meeting with a subset of crisis response organizations, Crisis Commons understands that there’s a need for long term relationships between tech volunteers and relief organizations, tapping the innovation power of these charitably minded geeks. But this requires relief organizations to know what solutions are already out there and what are reasonable requests to make of volunteers. And volunteer organizations need to understand the processes CROs have and how to work within them.

The hope for Crisis Commons is to become an “independent, nonpartisan honest broker” that can “bridge the ecosystem and matrix the resources.” This means “translating requirements of the CRO to the crisis crowd, helping the public understand CRO requirements,” and the reasons behind them. This could lead towards being able to set up a service like “Crisis Turk”, which could allow internet savvy non-programmers to engage in data entry tasks during a crisis.

In the long term, Crisis Commons might emerge as an international forum for standards development and data sharing around crises. Building capacity that could be active between crises, not just during them, they could direct research projects on lessons learned from prior disaster relief, could build a data library and begin preparing operations centers and emergency response teams for future crises. Some scenarios could involve managing physical spaces to encourage cooperation within and between volunteer tech teams and providing support for future innovation through a technology incubation program.


Starting from the shared premise the Crisis Commons founders presented us with – “Anyone can help in a crisis” – the discussion at Berkman focused on the structure Crisis Commons might take. The goal behind a “commons” structure is to remain an independent and trusted actor in the long term, to be an objective source of tech requirements, and to bring non-market solutions to the table. But the founders realize that this is an inherently competitive space, and that volunteer organizations might find themselves in conflict with professional software developers in providing support to relief organizations, or with relief organizations themselves if they began providing direct support.

It’s also possible that another player in the space could compete with Crisis Commons in this matchmaking role. Red Cross could develop an in-house technology team focused on collaborating with technology volunteers. Google could use the power of their tech resources to provide services directly to relief organizations. A partnership like Random Hacks of Kindness could emerge as the powerful leader in the space. Other volunteer technology organizations – Crisis Mappers, Strong Angel – might see themselves providing this bridging function. FEMA could start a private-public partnership under the NET Guard program. What’s the sweet spot for Crisis Commons?

One of our participants suggested that Crisis Commons could be valuable as a developer of standards, working to train the broader community about the importance of standards, and on the challenge of defining problems where solutions would benefit a broad community.

Another participant, who’d been involved with several Crisis Camp events, worried that “the apps, while neat, never really made it into the field,” suggesting that the problems raised are real, not theoretical. It’s genuinely very difficult for tech volunteers to know what problems to work on… and hard for relief organizations under tremendous pressure to learn how to use these new tools.

This, I pointed out, is the problem that could prove most challenging for Crisis Commons in the long term. When crises arise, people want to help… but it’s critical that their help actually be… helpful. Clay Shirky told the story of his student, Jorge Just, who’s worked closely with UNICEF to develop RapidFTR, a family tracking and reunification tool. It’s been a long, engaged process with enormous amounts of time needed for the parties to understand each other’s needs and working methods… and it’s easy to understand why it might be difficult to convince volunteers to participate to this depth in a project.

I offered an observation from my time working on Geekcorps – I meet a lot of geeks who are convinced that the tech they’re most interested in – XML microformats, mesh wireless, cryptographic voting protocols – are precisely what the world needs to solve some pressing crisis. Occasionally, they’re right. Often, they’re more attached to their tech of choice than to addressing the crisis in question.

As such, the toughest job is defining problems and matching geeks to problems. At Geekcorps, it often took six months to design a volunteer assignment, and a talented tech person needed to meet several times with a tech firm to understand needs, brainstorm projects and create a scope of work, so we could recruit the right volunteer. While that model was expensive – and ultimately, made Geekcorps unsustainable – I think aspects of it could help Crisis Commons find a place in the world.

I ended up suggesting that Crisis Commons act as:
– a consultant to relief organizations, helping them define their technical needs, understand what was already available commercially and non-commercially and to frame needs to volunteer communities who could assist them
– a matchmaking service that connected volunteer orgs to short term and long term tech needs, preferably ones that had been clearly defined through a collaborative process
– a repository for best practices, collective knowledge about what works in this collaboration.

Unclear that this is the right solution for Crisis Commons or the road they’ll follow, but I came away with a strong sense that they are wrestling with the right questions in figuring out how to be most effective in this space. Very much looking forward to discovering what they come up with.

Counting International Connections on Facebook
http://www.ethanzuckerman.com/blog/2010/07/29/counting-international-connections-on-facebook/ Thu, 29 Jul 2010 16:37:25 +0000

My friend Onnik Krikorian has become a Facebook evangelist. Onnik, a Brit of Armenian descent living in Armenia, is the Global Voices editor for the Caucasus, which means he’s responsible for rounding up blogs from Armenia, Georgia and Azerbaijan, as well as parts of Turkey and Russia. This task is seriously complicated by the long-term tensions in the region. Armenia and Azerbaijan are partisans in a “frozen” conflict – the Nagorno-Karabakh war, which lasted from 1988 to 1994 and remains largely unresolved.

It’s taken Onnik years to build up relationships with bloggers in Azerbaijan, relationships he needs to accurately cover the region. Azeri bloggers are often suspicious of his motives for connecting and wonder whether he’ll cover their thinking and writing fairly. But Onnik tells me that Facebook has emerged as a key space where Azeri and Armenians can interact. “There are no neutral spaces in the real world where we can get to know each other. Facebook provides that space online, and it’s allowing friendships to form that probably couldn’t happen in the physical world.” (Onnik documents some of the conversations taking place between Azeri and Armenian bloggers in a recent post on Global Voices.)

Graph from the front page of peace.facebook.com

Onnik was talking about his love of Facebook at an event hosted by the US Institute of Peace, where colleagues at George Washington University and Columbia and I were presenting research we’d carried out on the use of social media in conflict situations. Onnik’s hopes for Facebook as a platform for peace were echoed by Adam Conner of Facebook, who showed the company’s new site, Peace on Facebook. The site documents friendships formed between people usually separated by geography, religion or politics. Some of the statistics seem clearly like good news – 29,651 friendships between Indians and Pakistanis per day. Others are rather dispiriting – 974 Muslim/Jewish connections in the past 24 hours.

I’m a data junkie, and there’s little more frustrating to me than an incomplete data set. Basically, by showing us a very small portion of the nation to nation social graph, Facebook is hinting that the whole graph is available: not just how many friendships Indian Facebook users form with Pakistani users, but how many they form with Americans, Canadians, Chinese, other Indians, etc. Obviously, this is info I’m interested in – I’ve been building a critique that argues that usage of social networking tools to build connections between people in the same country vastly outpaces use of these tools to cross national, cultural and religious borders.

Without the whole data set, it’s hard to know whether these numbers are encouraging or not. Are 29,651 Indian/Pakistani connections a lot? Or very few, in proportion to how many connections Indians and Pakistanis make on Facebook in total? In other words, we’ve got the numerator, but not the denominator – if we had a picture of how many connections Indians and Pakistanis make per day, we might have a better sense for whether this is an encouraging or discouraging number.

I made a first pass at this question this morning, using data I was able to obtain online. Facebook tells us that the average user has 130 friends – a number that might be out of date, as the same statistics page lists “over 400 million users”, not the half billion currently being celebrated in the media. (Ideally, we’d like to know how many new friends are added per day so we can compare apples to apples, but you go to war with the data you have…)

We also need a sense for how many Facebook users there are per country. Here, we turn to Nick Burcher, who publishes tables of Facebook users per country on a regular basis. Nick tells readers that the data is from Facebook, and the Guardian appears to trust his accounts enough to feature those stats on their technology blog. They are, alas, incomplete – Burcher published stats for the 30 countries with the largest number of Facebook users, and revealed a few more countries in the comments thread on the post.

Because we don’t have data for Pakistan, we can’t answer the India/Pakistan question. But we can offer some analysis for Israel/Palestine and Greece/Turkey.

Peace on Facebook tells us that there were 15,747 connections between Israelis and Palestinians in the past 24 hours. The term “connection” is not clearly defined on the site – it’s not clear whether a reciprocated friendship counts as one connection or two. Because I’m going to count the number of Israeli friends and Palestinian friends, it makes sense to count a reciprocal friendship as two connections. (If Facebook is counting differently than I am, my numbers are going to be half what they should be.)

3,006,460 Israelis are Facebook users… a pretty remarkable number, as it represents 39.92% of the total population of the nation and roughly 57% of the country’s 5.3 million internet users. There are very few Palestinian Facebook users – 84,240, or 2.24% of the population… This mostly reflects how few Palestinians are online, as Facebook is used by 21% of Palestine’s 400,000 internet users.

With 3,090,700 Palestinian and Israeli Facebook users, at 130 friends apiece, we should see almost 402 million friendships involving an Israeli or a Palestinian. If we extrapolate from 15,747 friendships a day to 5.7 million a year, we’re looking at Israeli/Palestinian friendships representing 1.43% of friendships in the Israeli/Palestinian space… with all sorts of caveats. (The biggest is that the use of a year-long interval to calculate total friendships is totally arbitrary and probably not supportable. If you’ve got better data or a suggestion for a better estimation method, please don’t hesitate to speak up.)

We get very different results from looking at Greece and Turkey. 2,838,700 Greeks are Facebook members (25.11% of the national population), while 22,552,540 Turks (31.08% of the population) are. That’s roughly 3.3 billion friendships projected, and our year-long approximation finds us just over 4 million Greek/Turkish connections. That suggests that only 0.12% of friendships in the pool are Turkish/Greek friendships.
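To make the arithmetic explicit, here’s a minimal sketch of the estimate in Python. The user counts, the 130-friends-per-user average and the daily Israeli/Palestinian figure are the numbers quoted above; the Greek/Turkish input is my reading of the “just over 4 million a year” approximation, so treat the outputs as back-of-the-envelope only.

# Back-of-the-envelope estimate, using only the figures quoted in this post.
AVG_FRIENDS = 130  # Facebook's reported average friends per user

def cross_border_share(users_a, users_b, cross_connections_per_year):
    """Share of all friendships in the two-country pool that cross the border."""
    total_friendships = (users_a + users_b) * AVG_FRIENDS
    return cross_connections_per_year / total_friendships

# Israel / Palestine: 15,747 connections a day, extrapolated (arbitrarily) to a year
print(cross_border_share(3_006_460, 84_240, 15_747 * 365))   # ~0.0143, i.e. ~1.43%

# Greece / Turkey: the post's year-long approximation of just over 4 million connections
print(cross_border_share(2_838_700, 22_552_540, 4_000_000))  # ~0.0012, i.e. ~0.12%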

What explains the disparity between these numbers? While there’s certainly a long history of tension between Greece and Turkey, the last major military confrontation between the nations ended in 1922. Israel and Palestine, on the other hand, are involved with an active conflict and Israel’s recent incursion into Gaza ended a few months ago. What gives?

It’s possible that the numerous efforts designed to build friendship between Israeli and Palestinian youth are having an impact, much as Onnik’s work in Armenia and Azerbaijan is showing positive results. But there’s another possibility – 20% of the Israeli population are Arab citizens of Israel, and the majority of this group is of Palestinian origin. It’s certainly possible that the high percentage of Israeli/Palestinian friendships includes a large set of friendships between people of Palestinian origin in Israel and Palestinians… indeed, given the difficulty both populations face in meeting in physical space, we’d expect to see increased use of the internet as a meeting space to compensate for the difficulties of meeting in the physical world. This could be a factor in explaining India/Pakistan friendships, and Albanian/Serbian friendships as well, since the emergence of new nations through partition and conflict left groups united by culture but separated by borders.

My goal in this post isn’t to belittle the power of Facebook for providing a border-transcending space where friendships can be built – Onnik’s story makes it clear that Facebook is a real and powerful tool for good, at least in the Armenian/Azeri space. But I continue to think that we overestimate how many of our online contacts cross borders and underestimate how often these tools are used to reinforce local friendships. I’d invite friends at Facebook to correct my numbers or my math… and mention that we could do a much better job of answering these questions if Facebook would release a data set that shows us all the cross-national connections made on the service.

—-

Ross Perez has created some great interactive maps that visualize the adoption of Facebook around the world, using Burcher’s data – worth your time.

Democrats, Republicans and Appropriators
http://www.ethanzuckerman.com/blog/2010/05/22/democrats-republicans-and-appropriators/ Sat, 22 May 2010 18:29:19 +0000

I had the good fortune to catch a small part of a conference at Harvard yesterday on text analysis. Good fortune, because I was there long enough to hear Justin Grimmer‘s talk on his dissertation, Measuring Reputation Outside Congress. Grimmer is interested in an important – and tough to answer – question: how responsive are the people we elect to their constituents?

We could look for ways to answer this question by studying the voting record of legislators (qualitatively or quantitatively), examining their work in Washington (through Congressional literature) or through examining their communications with constituents at home. This latter set of questions is referred to as the “Home Style” of a politician, following the work of Richard Fenno (1978).

Home style, Grimmer tells us, reveals something about politicians that their voting record often doesn’t. He invites us to compare Senators Jeff Sessions (R-Alabama) and Richard Shelby (also R-Alabama). If we consider them simply in terms of their voting behavior, they look nearly identical – they vote together the vast majority of the time and both can be described, in voting terms, as conservative Republicans.

But anyone who knows Alabama politics will tell you that Sessions and Shelby are vastly different guys. Grimmer characterizes Sessions as “an intense policy guy” who will bore you to tears with incredibly long, thorough explanations of issues when all you wanted was a photo with him. Shelby, on the other hand, is all about bringing home the bacon… and there are Shelby Halls at two Alabama universities to prove it.

Evidence suggests that representational style – policy versus pork, heavy versus light communicators – cuts across party lines. And it’s likely that politicians have diverse, stable, nonpartisan home styles. If we can find ways to characterize these differences – Grimmer proposes studying the difference in communications with constituents that claim credit and those that discuss policy – we have the opportunity to compare across senators, and connect these differences to what senators do within the institutions of power.

When Fenno studied the “home style” of politicians in 1978, he engaged in “soaking and poking” – intense participant observation, which involved following 18 members of Congress over 8 years. This method, Grimmer observes, is expensive, underrepresentative (and really hard to replicate as a graduate student). Instead, we might study texts produced by senators. One candidate is newspaper articles… but editorial bias makes it hard to use them as representative of senatorial communications. We might use the constituent newsletters produced by Senate offices… but they’re sent using Congressional franking privileges and are very hard to get hold of.

Instead, Grimmer has been studying the press releases that senate offices produce – over 64,000 in all. The average senator issues 212 press releases per year, and while the quantity produced has a wide range (some produce only a few dozen, while Hillary Clinton’s senate office produced over a thousand a year), there’s no strong correlation between political party and usage of the tool.

After collecting the releases, Grimmer used machine learning techniques to separate transcripts of floor statements (which are usually released as press releases) from pure press releases, which let him study how a senator chooses to speak to her constituents. Once that sorting has taken place, the task is pretty simple – determine the topic of a press release. This is simplified by the fact that congressional aides try hard to ensure that press releases are on a single topic.
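The post doesn’t say which machine learning technique Grimmer used for this filtering step, so the following is only a generic, hypothetical sketch of how one might separate floor-statement transcripts from ordinary press releases with off-the-shelf tools – a stand-in for the idea, not his method.

# Illustrative only: a simple supervised filter for floor-statement transcripts.
# This is not Grimmer's model; it just shows the kind of step described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_release_filter(texts, labels):
    """labels: 1 for floor-statement transcript, 0 for ordinary press release."""
    model = make_pipeline(
        TfidfVectorizer(stop_words="english"),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    return model

def keep_press_releases(model, texts):
    """Drop the documents the model flags as floor statements."""
    return [t for t, pred in zip(texts, model.predict(texts)) if pred == 0]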

Grimmer’s work clusters senators by the topics discussed in their press releases. His research reveals four basic clusters:

– Senate Statespersons. These folks speak like they’re running for president… and they may well be. Their releases discuss the Iraq war, intelligence issues, international relations and budget issues. John McCain’s office communicates this way.

– Domestic policy. These senators are also policy wonks, but their focus is domestic – the environment, gas prices, DHS, and consumer safety.

– Pork and policy – Communication from these senators includes discussions of water rights grants, but also has serious discussion of health and education policy. Sometimes this is because the office simply issues lots and lots of releases – (former) Senator Clinton’s office fits in this camp.

– Appropriators – These guys communicate about the grants they’ve won – fire grants, airport grants, money for universities, and for police departments.
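Grimmer’s actual statistical model isn’t described in the talk, so here is only a rough, hypothetical sketch of what “clustering senators by the topics in their press releases” can look like with generic tools – TF-IDF features and k-means with four clusters – rather than his method.

# Generic sketch of topic-style clustering of senators; not Grimmer's model.
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_senators(releases, n_clusters=4):
    """releases: iterable of (senator, press_release_text) pairs (hypothetical input)."""
    by_senator = defaultdict(list)
    for senator, text in releases:
        by_senator[senator].append(text)
    senators = sorted(by_senator)
    docs = [" ".join(by_senator[s]) for s in senators]   # pool each senator's releases

    X = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(docs)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    return dict(zip(senators, labels))

In practice, the clusters would be interpreted by reading each group’s most heavily weighted terms, which is roughly how labels like “statespersons” or “appropriators” get attached.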

As well as clustering press releases based on topic, Grimmer’s work considers another metric – how often a press release claims credit for an appropriation. There turns out to be a vast spectrum, ranging from John McCain, who basically only issues statements about policy, to a guy like Mike DeWine, an Ohio Republican whose press releases virtually all claim credit for an appropriation. There’s a very strong correlation between the topic clusters in releases and the percentage of releases claiming credit. (That’s at least in part because claiming credit is one of the topic clusters – you’re correlating, in part, a factor with itself. Interesting nevertheless.)

What’s most interesting is that this classification – either by type of politician or by their place on the credit spectrum – is tightly correlated with voting behavior on a particular issue: votes on appropriations rules, or as Grimmer puts it, “How do legislators self-regulate the pork barrel?” These votes aren’t partisan – the late Ted Kennedy voted with Richard Shelby on these sorts of votes, which suggests truth to the truism that there are three parties in Congress: Democrats, Republicans and Appropriators. In other words, the way a senator communicates with constituents is strongly predictive of their legislative behavior, specifically how they vote on allocating funds.
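Again purely as a hypothetical sketch rather than Grimmer’s method: the second metric and its link to voting can be approximated by flagging releases that use credit-claiming language, computing each senator’s credit-claiming share, and correlating that share with a score for votes on appropriations rules. The keyword list and the vote scores below are placeholders.

# Crude stand-in for the credit-claiming measure and its correlation with votes.
import numpy as np

CREDIT_WORDS = ("secured", "announced funding", "grant", "awarded", "appropriation")

def credit_share(texts):
    """Fraction of a senator's releases that look like credit-claiming."""
    flags = [any(w in t.lower() for w in CREDIT_WORDS) for t in texts]
    return sum(flags) / len(flags)

def correlate_with_votes(releases_by_senator, vote_score_by_senator):
    """Pearson correlation between credit-claiming share and an appropriations-vote score."""
    senators = sorted(set(releases_by_senator) & set(vote_score_by_senator))
    shares = np.array([credit_share(releases_by_senator[s]) for s in senators])
    votes = np.array([vote_score_by_senator[s] for s in senators])
    return np.corrcoef(shares, votes)[0, 1]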

I thought this was excellent stuff – I hadn’t seen someone take a large database of political communications and subject it to automated analysis, and I thought the demonstration of this “third party” was particularly compelling.
