My heart’s in Accra – Ethan Zuckerman’s online home, since 2003

February 11, 2015

Helping Launch the NetGain Challenge

Filed under: Developing world,Geekery,Human Rights,Media Lab,newcivics — Ethan @ 2:28 pm

This morning, I’m at the Ford Foundation in New York City as part of the launch event for NetGain. NetGain is a new effort launched by the Mozilla, Ford, Open Society, MacArthur and Knight Foundations to bring the philanthropic community together to tackle the greatest obstacles to digital rights, online equality and the use of the internet to promote social justice.

The event is livestreamed here – in a moment, you can hear Tim Berners-Lee and Susan Crawford in conversation about the future of the web.

For the past six months, I’ve been working with Jenny Toomey and Darren Walker at Ford, John Palfrey at Phillips Andover, and friends at these different foundations to launch the NetGain challenges. We’re asking people around the world to propose difficult problems about the open internet that they think governments and companies have not been able to solve. We’re collecting these challenges at NetGainChallenge.org, and asking participating foundations to take the lead on one or more challenges, coordinating a new set of investments in tackling that problem.

I had the privilege of introducing a session at this morning’s event about these challenges. It was an Ignite talk, which means I probably didn’t manage to say all the words I have listed below. But this is what I was trying to say:

45 years ago, the first message was sent over the internet, between a computer at UCLA and one at the Stanford Research Institute (SRI).

25 years ago, Tim Berners-Lee turned the internet from a tool for academics into something most of us use every day, by making it easy to publish and read online – he created the World Wide Web.

What’s followed on Sir Tim’s invention is a transformation of the ways we work, play, shop, argue, protest, give, learn and love.

Given the amazing transformations we’ve seen, it’s easy to forget that the internet is a long, ongoing experiment. The internet as we know it is the result of trying new things, seeing how they break, and working to fix them.

The first message sent on the internet was “login”, as Charley Kline and Len Kleinrock at UCLA were trying to log into a machine at the Stanford Research Institute. They only managed to transmit the letters “lo” before the system crashed. An hour later, they had it up again and managed to transmit the whole message.

On the internet, we have a long tradition of trying things out, screwing up, fixing what’s broken and moving forward.

Twenty five years into the life of the World Wide Web, there are amazing successes to celebrate: a free encyclopedia in hundreds of world languages, powerful tools for sharing breaking news and connecting with old friends, platforms that help us organize, agitate and push for social justice.

But alongside our accomplishments, there’s still lots that’s broken.

In building an internet where most content and services are free, we’ve also adopted a business model that puts us under perpetual surveillance by advertisers. Worse, our communications are aggregated, analyzed and surveilled by governments around the world.
The amazing tools we’ve built for learning and for sharing ideas are far easier and cheaper to access in the developed world than in the developing world – we’re still far from the dream of a worldwide web.

We’ve built new public spaces online to discuss the issues of the day, but those discussions are too rarely civil and productive. Speaking online often generates torrents of abuse, especially when women speak online.

Despite Sir Tim’s vision of a decentralized web, there’s a huge concentration of control in a few companies that run the key platforms for online speech. And even as we use the web to share, opine and learn, we’re quickly losing our legacy, erasing this vast new library as fast as we write it.

These problems may well be unsolvable. But it’s possible that we’ve been waiting for the wrong people to solve them.

In 1889, Andrew Carnegie gave money to build a public library in Braddock, Pennsylvania, the first of 1,689 libraries he funded in the US. These were not just spaces that allowed people to feed their minds, but in many towns, the only spaces open to men, women, children and people of all races.

Newspapers and the publishing houses made knowledge available to those who could afford it, but Carnegie made it available to everyone.

As television became a fixture in the nation’s homes in the 1950s, the Ford Foundation worked with other philanthropists to build a public television system in the US, ensuring that this powerful new medium was used to educate and enlighten as well as to entertain.

The foundations here aren’t going to be able to put internet into every home the way Carnegie brought libraries to every town. But there are problems philanthropy can tackle in unique ways that provide solutions that go beyond what corporations or governments can do on their own.
That’s what led us to the idea of the grand challenge. We’re drawing inspiration here from Google’s moonshots and from the XPrize Foundation. More importantly, we’re taking guidance from the people we work with every day, on the front lines of social innovation, to identify the challenges we need to overcome for the internet to be a true tool for justice and social inclusion.

The speakers you’re about to hear aren’t here with solutions: they’re going to share with us the thorny problems they’re working to solve. We’re asking each foundation that’s a member of NetGain to take the lead on one of these and other challenges, convening the smartest people in the field, our partners, our grantees, our beneficiaries, to understand what we can do together to tackle these deep and persistent problems.

These aren’t the only challenges we need to tackle. We need to hear from you about what problems we can take on, and what brilliant guides – like the nine speakers we’re about to hear from – can help us navigate our way through these challenges.

We’re taking this high-risk strategy of aiming at the toughest problems because even if we fall short of our goals, we think we’ll make enormous progress by working together. Every six months, we plan to bring our community together, convene around a grand challenge and start a process of collaboration and experimentation. We may only get to “lo” before we crash, restart and rebuild. But every time we do, we’ll be moving towards a web that’s more open, more just, more able to transform our world for the better.

Please join us at NetGainChallenge.org and help us identify the challenges we should be taking on.

April 4, 2013

Schneier and Zittrain on digital security and the power of metaphors

Filed under: Berkman,Geekery,ideas — Ethan @ 7:26 pm

Bruce Schneier is one of the world’s leading cryptographers and theorists of security. Jonathan Zittrain is a celebrated law professor, theorist of digital technology and wonderfully performative lecturer. The two share a stage at Harvard Law School’s Langdell Hall. JZ introduces Bruce as the inventor of the phrase “security theatre”, author of a leading textbook on cryptography and subject of a wonderful internet meme.

The last time the two met on stage, they were arguing different sides of an issue – threats of cyberwar are grossly exaggerated – in an Oxford-style debate. Schneier was baffled that, after the debate, his side lost. He found it hard to believe that more people thought that cyberwar was a real threat than an exaggeration, and realized that there is a definitional problem that makes discussing cyberwar challenging.

Schneier continues, “It used to be, in the real world, you judged the weaponry. If you saw a tank driving at you, you know it was a real war because only a government could buy a tank.” In cyberwar, everyone uses the same tools and tactics – DDoS, exploits. It’s hard to tell if attackers are governments, criminals or individuals. You could call almost anyone to defend you – the police, the government, the lawyers. You never know who you’re fighting against, which makes it extremely hard to know what to defend. “And that’s why I lost”, Schneier explains – if you use a very narrow definition of cyberwar, as Schneier did, cyberwar threats are almost always exaggerated.

Zittrain explains that we’re not debating tonight, but notes that Schneier appears already to be conceding some ground in using the word “weapon” to explore digital security issues. Schneier’s new book is not yet named, but Zittrain suggests it might be called “Be afraid, be very afraid,” as it focuses on asymmetric threats, where reasonably technically savvy people may not be able to defend themselves.

Schneier explains that we, as humans, accept a certain amount of bad action in society. We accept some bad behavior, like crime, in exchange for some flexibility in terms of law enforcement. If we aimed for a zero murder rate, we’d have too many false arrests, too much intrusive security – we accept some harm in exchange for some freedom. But Bruce explains that in the digital world, it’s possible for bad actors to do asymmetric amounts of harm – one person can cause a whole lot of damage. As the amount of damage a bad actor can create increases, our tolerance for bad actors decreases. This, Bruce explains, is the weapon of mass destruction debate – if a terrorist can access a truly deadly bioweapon, perhaps we change our laws to radically ratchet up enforcement.

JZ offers a summary: we can face doom from terrorism or doom from a police state. Bruce riffs on this: if we reach a point where a single bad actor can destroy society – and Bruce believes this may be possible – what are the chances society can get past that moment? “We tend to run a pretty wide-tail bell curve around our species.”

Schneier considers the idea that attackers often have a first-mover advantage. While the police are still studying the potential of the motorcar, the bank robbers are already using it as a getaway vehicle. There may be a temporal gap when the bad actors can outpace the cops, and we might imagine that gap being profoundly destructive at some point in the near future.

JZ wonders whether we’re attributing too much power to bad actors, implicitly believing they are as powerful as governments. But governments have the ability to bring massive multiplier effects into play. Bruce concedes that this is true in policing – radios have been the most powerful tool for policing, bringing more police into situations where the bad guys have the upper hand.

Bruce explains that he’s usually an optimist, so it’s odd to have this deeply pessimistic essay out in the world. JZ notes that there are other topics to consider: digital feudalism, the topic of Bruce’s last book, in which corporate actors have profound power over our digital lives, a subject JZ is also deeply interested in.

Expanding on the idea of digital feudalism, Bruce explains that if you pledge your allegiance to an internet giant like Apple, your life is easy, and they pledge to protect you. Many of us pledge allegiance to Facebook, Amazon, Google. These platforms control our data and our devices – Amazon controls what can be in your Kindle, and if they don’t like your copy of 1984, they can remove it. When these feudal lords fight, we all suffer – Google Maps disappears from the iPad. Feudalism ended as nation-states rose and the former peasants began to demand rights.

JZ suggests some of the objections libertarians usually offer to this set of concerns. Isn’t there a Chicken Little quality to this? Not being able to get Google Maps on your iPad seems like a “glass half empty” view given how much technological progress we’ve recently experienced. Bruce offers his fear that sites like Google will likely be able to identify gun owners soon, based on search term history. Are we entering an age where the government doesn’t need to watch you because corporations are already watching so closely? What happens if the IRS can decide who to audit based on checking what they think you should make in a year and what credit agencies know you’ve made? We need to think this through before this becomes a reality.

JZ leads the audience through a set of hand-raising exercises: who’s on Facebook, who’s queasy about Facebook’s data policies, and who would pay $5 a month for a Facebook that doesn’t store your behavioral data? Bruce explains that the question is the wrong one; it should be “Who would pay $5 a month for a secure Facebook where all your friends are over on the insecure one – if you’re not on Facebook, you don’t hear about parties, you don’t see your friends, you don’t get laid.”

Why would Schneier believe governments would regulate this space in a helpful way, JZ asks? Schneier quotes Martin Luther King, Jr. – the arc of history is long but bends towards justice. It will take a long time for governments to figure out how to act justly in this space, perhaps a generation or two, but Schneier argues that we need some form of regulation to protect against these feudal barons. As JZ translates, you believe there needs to be a regulatory function that corrects market failures, like the failure to create a non-intrusive social network… but you don’t think our current screwed-up government can write these laws. So what do we do now?

Schneier has no easy answer, noting that it’s hard to trust a government that breaks its own laws, surveilling its own population without warrant or even clear reason. But he quotes a recent Glenn Greenwald piece on marriage equality, which notes that the struggle for marriage equality seemed impossible until about three months ago, and now seems almost inevitable. In other words, don’t lose hope.

JZ notes that Greenwald is one of the people who’s been identified as an ally/conspirator to Wikileaks, and one of the targets of a possible “dirty tricks” campaign by HBGary, a “be afraid, be very afraid” security firm that got p0wned by Anonymous. Schneier is on record as being excited about leaking – JZ wonders how he feels about Anonymous.

Schneier notes how remarkable it is that a group of individuals started making threats against NATO. JZ finds it hard to believe that Schneier would take those threats seriously, noting that Anon has had civil wars where one group will apologize that their servers have been compromised and should be ignored as they’re being hacked by another faction – how can we take threats from a group like that seriously? Schneier notes that a non-state, decentralized actor is something we need to take very seriously.

The conversation shifts to civil disobedience in the internet age. JZ wonders whether Schneier believes that DDoS can be a form of protest, like a sit-in or a picket line. Schneier explains that you used to be able to tell by the weaponry – if you were sitting in, it was a protest. But there’s DDoS for extortion, DDoS for damage, for protest, and because school’s out and we’re bored. Anonymous, he argues, was engaged in civil disobedience, and intentions matter.

JZ notes that Anonymous, in their very name, wants civil disobedience without the threat of jail. But, to be fair, he notes that you don’t get sentenced to 40 years in jail for sitting at a lunch counter. Schneier notes that we tend to misclassify cyber protest cases so badly, he’d want to protest anonymously too. But he suggests that intentions are at the heart of understanding these actions. It makes little sense, he argues, that we prosecute murder and attempted murder with different penalties – if the intention was to kill, does it matter that you are a poor shot?

A questioner in the audience asks about user education: is the answer to security problems for users to learn a security skillset in full? Zittrain notes that some are starting to suggest internet driver’s licenses before letting users online. Schneier argues that user education is a cop-out. Security is interconnected – in a very real way, “my security is a function of my mother remembering to turn the firewall back on”. These security holes open because we design crap security. We can’t pop up incomprehensible warnings that people will click through. We need systems that are robust enough to deal with uneducated users.

Another questioner asks what metaphors we should use to understand internet security – War? Public health? Schneier argues against the war metaphor, because in wars we sacrifice anything in exchange to win. Police might be a better metaphor, as we put checks on their power and seek a balance between freedom and control of crime. Biological metaphors might be even stronger – we are starting to see thinking about computer viruses influencing what we know about biological viruses. Zittrain suggests that an appropriate metaphor is mutual aid: we need to look for ways we can help each other out under attack, which might mean building mobile phones that are two-way radios, able to route traffic independent of phone towers. Schneier notes that internet as infrastructure is another helpful metaphor – a vital service like power or water we try to keep accessible and always flowing.

A questioner wonders whether Schneier’s dissatisfaction with the “cyberwar” metaphor comes from the idea that groups like Anonymous are roughly organized groups, not states. Schneier notes that individuals are capable of great damage – the assassination of a Texas prosecutor, possibly by the Aryan Brotherhood – but we treat these acts as crime. Wars, on the other hand, are nation versus nation. We responded to 9/11 by invading a country – it’s not what the FBI would have done if they were responding to it. Metaphors matter.

I had the pleasure of sitting with Willow Brugh, who did a lovely Prezi visualization of the talk – take a look!

November 29, 2011

DARPA director Regina Dugan at MIT: “Just Make It”

Filed under: Geekery,ideas — Ethan @ 4:57 pm

This afternoon, MIT’s Political Science distinguished speakers series hosts Regina Dugan and Kaigham Gabriel, director and deputy director of DARPA, the US Defense Advanced Research Projects Agency, who are here to speak about advanced manufacturing in America. The title of their talk is “Just Make It”, a response Dugan offers to people who ask her to predict the future. “Visionaries aren’t oracles – they are builders.”

She shows a five-minute video of nerd porn, a montage of dismissive predictions about technologies – Lord Kelvin’s statement on the impossibility of heavier-than-air flight, followed by footage of the Wright Brothers, and then a clip from Top Gun. The video observes that the time for new technologies to reach 50 million users is rapidly shrinking, pointing to Facebook’s sprint to 100 million users, and offers images of protesters holding banners celebrating the internet. “Still think social media is a fad?” the video asks. It ends with a challenge for the engineers in the room – “just make it”.

Dugan tells us that the decline in America’s ability to build things is a national challenge, if not a crisis. Americans consume an increasing percentage of goods made overseas, and are less likely to be employed making things. Perhaps this reflects productivity increases, or currency manipulation, but it has implications, she warns, for national defense. Adam Smith warned that if an industry is critical to defense, it is not always prudent to rely on neighbors for supply.

There have been many years of debate around the inefficiency of America’s design and building of defense systems, Dugan tells us. One extrapolation of the increase in aircraft design cost – sometimes referred to as “Augustine’s Laws” – suggests that by 2054, a single military aircraft will cost as much as the entire military budget at that time. Obviously, it’s dangerous to extrapolate naively from current data… but if you do, the cost of military systems is growing much more rapidly than defense budgets. “Quite obviously, this is not sustainable”.
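The Augustine-style extrapolation is easy to reproduce as back-of-the-envelope arithmetic. The sketch below uses illustrative numbers of my own (unit cost doubling roughly every seven years, a budget growing 2% a year, invented 2010 starting figures), not the data behind the 2054 prediction; under these particular assumptions the crossover lands a few decades later, but the shape of the argument is the same:

```python
# Back-of-the-envelope version of an Augustine-style extrapolation.
# All figures are illustrative assumptions, not actual defense data.

def crossover_year(start_year=2010, unit_cost=150e6, budget=700e9,
                   cost_doubling_years=7.0, budget_growth=0.02):
    """First year in which the extrapolated cost of a single aircraft
    exceeds the (also growing) total defense budget."""
    cost_growth = 2 ** (1 / cost_doubling_years)  # annual cost multiplier
    year = start_year
    while unit_cost < budget:
        year += 1
        unit_cost *= cost_growth
        budget *= 1 + budget_growth
    return year

print(crossover_year())  # with these made-up inputs, some year in the 2110s
```

The point isn’t the specific year, which depends entirely on the assumed growth rates; it’s that any cost curve compounding faster than the budget curve crosses it eventually.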

When we design aircraft, she tells us, we’re often designing ten years out. That means we’re trying to understand the threat environment ten years out. That’s risky. “Lack of adaptability is a vulnerability.”

What’s worse is that it’s really expensive. She shows a graph of production costs for the F-22 fighter. The price per unit keeps increasing, and the volume required keeps dropping. This might be because we need to amortize design costs over fewer units. Or it might be because the costs get so high, we simply can’t afford as many units as we wanted. This isn’t just true of the F-22 – it’s true for the Marine EFV project and the Comanche helicopter as well.
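The amortization effect she describes is simple arithmetic. Here is a sketch with invented numbers (a $30B development bill and a $150M marginal cost per plane, not actual F-22 figures) showing how cutting the planned buy raises the unit price, which in turn invites further cuts:

```python
# Illustrative amortization math (invented numbers, not F-22 data):
# non-recurring engineering (NRE) cost spread over the number of units bought.

def unit_cost(volume, nre=30e9, marginal=150e6):
    """Per-unit cost when development is amortized over the whole buy."""
    return nre / volume + marginal

# Halving a planned buy of 600 units raises the unit price 25%:
# unit_cost(600) -> $200M per plane; unit_cost(300) -> $250M per plane
```

That feedback loop is the “death spiral” implicit in her graph: lower volume raises unit cost, and higher unit cost pressures volume lower still.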

This difficulty in building complex systems has implications for defense and for the economy as a whole, she tells us.
“To innovate, we must make. To protect, we must produce.” DARPA is not a policy organization, she tells us, but pushing from a “buy” to a “make” strategy is of strategic importance to the US Department of Defense.

There’s $200 million a year being invested in innovation, looking for ways to change the calculus of cost increase. Can we turn a long problem like vaccine design into one we can solve in weeks? Could we bring tens of thousands of designers into the process and harness their ideas? She suggests that the future of innovation lies in increasing the speed of production and the number and diversity of designs. The rise of electronic design aids revolutionized the semiconductor industry – could a similar shift in speed and diversity bring a comparable paradigm shift elsewhere?

Dugan tells us that the systems we have to manage complexity are inherited from 1969-era systems engineering. We take complex systems and split them along functional lines – power system, control system, thermal system – then try to put them back together. What happens is that we experience emergent behaviors that weren’t predictable. As a result, we end up with a design, build and test system that we iterate through, trying to solve those emergent problems.

This isn’t the only way to design complex systems. She shows a graph that measures time to design, integrate and test, versus a measure of product complexity, which includes part count and lines of source code. There’s a linear increase in time to build to complexity for aerospace defense systems. Another piece of the graph shows a flat design and test time cycle with increasing complexity – that’s the semiconductor industry. And a third industry – the best in class automotive manufacturers – show a decrease in time with an increase in complexity! How are they pulling this off?

Gabriel tags in here, to explain how the semiconductor industry achieved gains in complexity without extending the timeframe necessary to design and test their products. The key factor was a decision to control for time. “If we aren’t out there with new chips in 18-24 months, we’ll miss the next generation of PCs.” So the principles of VLSI design were optimized around producing new product on a timecycle as tight as that for less complex integrated circuits.

Two major design innovations characterize the VLSI shift, Gabriel tells us. First, it’s critical to decouple design and fabrication, a shift that was comparatively easy for circuit designers to accept. The second was initially heresy: you needed to stop optimizing each transistor, and sacrifice component performance for ease of system design and reliability.

We’ve seen a similar move in computer programming, a shift away from assembler, which produces very efficient code that’s hard to test, to higher level programming languages. Those languages abstract operations, which leads to a decrease in performance efficiency, but since we’re no longer as limited by how many operations a computer can perform, the design speed benefits outweigh the performance compromises. He hints that we may be seeing some similar shifts in biological sciences as well.

How does this work in terms of DARPA projects? Dugan retrieves the mic to speak about the Adaptive Vehicle Make program, designed to build a new infantry vehicle in two years instead of ten. A first step is developing a language to describe and design mechano-electric systems so they can integrate more smoothly. The vehicle, she tells us, will be flexibly manufactured through a “bitstream-configurable foundry-like manufacturing capability for defense systems” capable of “mass production in quantities of one”.

With facilities that can accept a design and custom-forge parts, she believes we can move to an increasingly democratized design system, enabling many more people to design and submit systems to foundry-like fabrication facilities. We’ll design vehicles “using the most modern techniques of crowd infrastructure and open source development,” in a program called VehicleForge.mil. (While that’s a valid URL, there’s no webserver at that address. Just wanted to save you a Google search or two.)

Critics tell her this approach won’t work. But Dassault recently designed the Falcon 7x aircraft using “digital master models, by tail number, for aircraft” – i.e., building extremely complex individual models for each aircraft they build. The models only check geometric interference (i.e., they test whether the parts fit together), but they’ve halved the time needed to produce a new plane. Critics claim that the analogy between integrated circuits and military vehicles is an inapt one. But in terms of part count, ICs are much more complex than vehicles. What’s complex is the diversity of components used in a combat vehicle.

A new experiment, conducted in cooperation with Local Motors, a small-scale vehicle fabrication company (see my notes on the founder’s Pop!Tech talk in 2009), invited designers to compete to design a combat support vehicle, the XC2V. $10,000 in prizes were offered, and instead of the 3 designs typical of an invitation-only design scenario, they received 159, 100 of which the judges deemed “high calibre”. It wasn’t a clean-sheet design – the chassis and drivetrain were designed by Local Motors – but it was effective at expanding the idea pool, and led to a functioning design within four weeks.

The power of the crowd may be even greater in a field like protein folding, where humans are still able to solve some problems better than algorithms. Foldit is the brainchild of a biochemist, a computer scientist and a gamer, who decided to turn protein folding into a game, building “a Tetris-like environment for folding”. 240,000 people have signed up to play, but what’s really cool is “the emergence of 5 sigma savants for protein folding, some of whom have very little biochemistry training.” Recently, Foldit players solved the structure of a key protein – a retroviral protease from a virus that causes simian AIDS in rhesus monkeys – which had been unsolved for 15 years. The community folded it in 10 days. Projects like this, she tells us, make her a believer that bringing many diverse minds to a problem and increasing the pace of building will increase the speed and diversity of innovation.
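To make “folding as a game” concrete, here is a toy scoring function in the spirit of Foldit’s puzzles. It uses the classic 2D HP lattice model from computational biology rather than Foldit’s real energy function: hydrophobic residues (‘H’) score a point for each adjacent, non-chain-bonded ‘H’, and the player hunts for the self-avoiding chain layout that maximizes the score. This is my illustration, not anything Foldit actually ships:

```python
# Toy protein-folding score on a 2D lattice (HP model), a stand-in for the
# much richer energy function Foldit players actually optimize.

def hp_score(sequence, path):
    """sequence: string of 'H' (hydrophobic) / 'P' (polar) residues.
    path: list of (x, y) lattice points, one per residue, in chain order."""
    pos = {p: i for i, p in enumerate(path)}
    score = 0
    for (x, y), i in pos.items():
        if sequence[i] != 'H':
            continue
        # look only right and up so each contact is counted once
        for neighbor in ((x + 1, y), (x, y + 1)):
            j = pos.get(neighbor)
            if j is not None and abs(i - j) > 1 and sequence[j] == 'H':
                score += 1
    return score

# A straight chain of four H residues makes no contacts; folding it into
# a square creates one.
print(hp_score("HHHH", [(0, 0), (1, 0), (2, 0), (3, 0)]))  # 0
print(hp_score("HHHH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1
```

Even in this stripped-down form, the search space explodes combinatorially with chain length, which is why human spatial intuition can still beat brute-force algorithms on real structures.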

Gabriel offers three other examples where massive innovations are possible through new methods.

Optics are the dominant cost in many imaging and sensor systems. It turns out that making light do something different – bending, focusing, diffusing – requires materials and systems that are heavy, complex and expensive. M-GRIN – manufacturable gradient index optics – moves beyond lenses that are made out of a single material with a single index of refraction. Instead, they use a stack of multiple layers and films, combined via heat and pressure, to make lenses that are smaller and lighter. A test around a shortwave infrared lens produced a device that was 3.5x smaller and 7.5x lighter. That’s a breakthrough… but the real innovation is creation of a set of design rules that let you go from an application to a recipe for combining materials into the lens you need.

In telling us about maskless nanolithography, Gabriel tells us “Moore’s law is dead in circuit design, though the corpse doesn’t know it yet.” The culprit is heat – we can make tighter and smaller circuits, but they’re getting very difficult to cool. As critical is cost: working at ultra-small line widths is prohibitively expensive, and it’s hard to justify a set of 45-nanometer masks to create a few hundred chips for a defense system when those masks alone cost tens of millions of dollars.

We know how to do lithography without masks, but it’s traditionally been very slow. So now designers have built a system that creates and bends an electron beam, then splits it into millions of beamlets, controlled by a “dynamic pattern generator”. Program that pattern generator, and it allows millions of writing operations to happen at the same time, leading to a current working speed of 10-15 wafers per hour, the minimum required to produce custom ICs for military applications.

His third example is the accelerated manufacturing of pharmaceuticals, a strategy he tells us was Plan B in 2009-2010 if the H1N1 flu virus had resurfaced. It’s very hard to produce vaccines quickly – egg-based strategies require a piece of virus and many thousands of chicken eggs. These methods work, but can require 6-9 months to build up a stockpile. A new method uses tobacco plants to produce custom proteins, working from strands of DNA in the virus. Envision a football-field sized building filled with lights and trays of tobacco plants. A facility like that can now produce a million doses a month of a novel vaccine. In scaling up capacity to 100 million doses per month, the key problem turned out to be lighting – it was impossible to light everything without switching to LED bulbs. Once they made the switch, they had a new opportunity – tuning the spectrum to optimize production. Using an experiment of “high school science complexity”, they grew plants under different lighting conditions for a few weeks, and determined a mix of blue and red frequencies that doubles protein production.

Gabriel ends with a slide quoting MIT scientist Tom Knight:

“The 19th century was about energy.
The 20th century was about information.
The 21st century is about matter.”

If we embrace this challenge, Gabriel tells us, we will be able to make things on demand at the cost we once achieved only by producing and stockpiling them in bulk, and this change will change how we innovate.

Above this line are my notes, below, my reaction:

I thought the DARPA folks gave an impressive talk, inasmuch as they got me thinking about a problem I’d not considered – the insane cost and time frame of producing military equipment. But for a talk sponsored by the political science department, it seemed woefully lacking in discussion of politics or markets. If I were trying to explain the difference in production processes between military vehicles, consumer automobiles and integrated circuits, I suspect I might look at the power of markets. IC manufacturers needed to build chips quickly because customers wanted to buy newer, faster chips… and would buy other chips if the manufacturer wasn’t fast enough. Ditto for automobile companies.

The defense industry is different. It’s very hard to terminate a weapons system, even if it’s massively over time and over budget. The competition happens well before a product is built. Discovering that the F-22 production isn’t going well doesn’t create a market opportunity for another company to produce a better product faster – the company producing the F-22 is going to get paid, even if they take an absurd time to produce the product.

I admire the approach Dugan and Gabriel are putting forward, and certainly appreciate that it plays well to a room full of engineers. But I was very surprised not to hear questions (and I only caught the first five or six) about whether the DoD purchasing process can be reformed so long as military budgets are sacrosanct. We’re currently facing mandatory budget cuts with the failure of the budget supercommittee, and conventional political wisdom suggests that the social service cuts will go through, while the defense ones will not. How do you encourage companies to innovate when they’re currently amply rewarded for dragging design and production out over decades? How do you innovate without market pressures?

My homogeneously left-wing family was talking politics over the Thanksgiving dinner table and realized the solution to America’s current social problems was to simply adopt the Egyptian political system – let the military run everything. The right doesn’t like cutting military budgets, but is okay when the military provides state-sponsored healthcare and subsidizes education. All we need to do is ensure all Americans are employed by the US military and we can build a thriving, successful welfare state. The same absurdity behind that suggestion is what makes DARPA’s ideas so hard to implement – if there’s no pressure to cut military budgets, anything is possible… except real innovation around cost and efficiency.

September 8, 2011

The science of food… and of resetting your expectations

Filed under: Geekery,ideas — Ethan @ 5:21 pm

This is one of the more surreal weeks of my recent life. On Sunday, I took possession of an adorable and small apartment near Inman Square in Cambridge, fought my way through Ikea and spent the first night of my new itinerant academic existence in Cambridge. Monday, I moved into my office in the Media Lab, using a borrowed ID from a student as my ID card isn’t turned on yet. Tuesday, I met with my new master’s students and other colleagues at the Center for Civic Media, then retreated to the Berkman Center to be part of their iLaw series. And then I found myself in a lecture hall in the Harvard Science Center, attending a lecture on cooking, given in part by David Arnold, one of the leading minds in haute cuisine… and a guy I used to hang out with more than twenty years ago. It feels like a very strange compression of history into a single (very long) weekend. But it was a great talk, so I thought I’d share it with you as well. (And the fact that I’m not posting this until two days later helps show how crazed the week has been…)

The lecture is a public talk associated with a Harvard class called “Science and Cooking – From Haute Cuisine to Soft Matter Science”, taught by David Weitz. It’s a science class focused on the chemical and physical changes associated with cooking. The text for the class is “On Food and Cooking” by Harold McGee, the opening speaker. McGee wrote the book in Cambridge and tells us, “In the late 1970s, I never dreamed Harvard would give a course on cooking – I can make a living now.”

McGee is accompanied by Arnold, who he introduces as the director of culinary technologies at the French Culinary Institute in NYC and “the one guy in the world who knows the most about cutting edge tech in the modern kitchen.” Arnold insisted that the class needed a definition of cooking, and so we’re working with this definition: “The preparation of food for eating, especially by means of heat”. The term comes from the Latin coquere, “to cook, prepare food, ripen, digest”. Cooking is the application of energy and ingenuity to change foods so they’re easier, safer and more pleasurable to eat.

McGee quotes Arnold as observing that if a peach is perfectly ripe, the best thing you can do with it is put it on a plate with a knife. Nature, McGee argues, wants us to eat peaches so that we’ll carry seeds far and wide. What we do in cooking is, in part, trying to approach the complexity and the balance of the perfectly ripe piece of fruit.

The first stop on a history of cooking has to be fire. McGee references Richard Wrangham’s “Catching Fire: How Cooking Made Us Human”. Cooking allows us to turn raw starch into something digestible. We needed these calories, Wrangham argued, to build our big brains. In that sense, learning to cook may literally have helped us become human. For us to tell if Wrangham is right, we need to see evidence of cooking much further back in history. We currently see evidence from 100,000 years ago, while Wrangham speculates we should see evidence 1 million years back.

By the Middle Ages, cooks had figured out how to make gelatins and clarify them, and how to do very complex decorative work for the courts. They’d also invented food as entertainment. We see a recipe from the 15th century titled “To Make a Chicken Sing when it is dead and roasted”. It involves stuffing a chicken with sulfur and mercury and sounds like a very bad idea… but it is amusing, and that notion of food as amusement is returning to modern kitchens today.

By 1681, we see the introduction of a very different way of cooking – the pressure cooker. Denis Papin, a member of the Royal Society who worked with Boyle on gases, figured out that you could cook food using pressurized water and speed cooking processes. Because the Royal Society members were mostly bachelors, there’s a wonderful literature of dinner parties where scientists brought ingredients and Papin cooked and served them.

Arnold jumps in to explain that pressure cookers allow us to cook at temperatures we couldn’t normally achieve. This leads to some fun discoveries. He read an influential book on pressure cooking that advised increasing use of onions in pressure cookers because the onion flavor dissipates. So he pressure cooked other similar foods, and discovered that foods like garlic lose their stink when pressure cooked. “The sulfur compounds in horseradish get totally knocked out so you can eat it by the bushel.” Mustard seeds cooked with vinegar puff up like caviar. And other effects can’t be replicated any other reasonable way. “Pressure cookers speed up Maillard reactions – you can pressure cook an egg for an hour and get browning that you otherwise wouldn’t get without cooking for several days.”

McGee notes that Arnold hasn’t mentioned his durian experiments. Arnold sheepishly explains that this is a lesson in the importance of repetition. Durian smells bad (or wonderful, if you grew up in certain corners of Asia) because of sulfur compounds, and so you should be able to knock out the smell in a pressure cooker. “So I threw some stuff with durian into a pressure cooker and got the most incredible Durian caramel.” But he’s never been able to replicate it, despite more than a month’s worth of attempts. “Don’t be a schmuck,” he tells us – document your work so you can replicate it.

Replicability is, of course, the essence of experimental science. In 1770, McGee tells us, Ben Franklin was spending a huge amount of time on ships, crossing the Atlantic. He noticed that when the cooks threw out the waste from cooking, the wake behind the ship calmed. He later tried an experiment in a pond on Clapham Common in London, putting a teaspoon of oil onto the water on a windy day. The water calmed over an area of half an acre. Had Franklin made a further leap, he could have pretty easily calculated the size of a molecule based on the experiment, assuming that the layer of oil eventually was a single molecule thick.
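Franklin’s missed leap is simple enough to sketch. Here’s a minimal back-of-envelope version in Python, assuming a teaspoon is roughly 5 mL and half an acre is roughly 2,000 square meters (figures not given in the talk):

```python
# If the oil film spreads until it is a single molecule thick, then
# molecule size ~= volume of oil / area it covers.
# (Assumed figures: ~5 mL per teaspoon, ~2,023 m^2 per half acre.)
TEASPOON_M3 = 5e-6      # one teaspoon of oil, in cubic meters
HALF_ACRE_M2 = 2023.0   # half an acre, in square meters

thickness_m = TEASPOON_M3 / HALF_ACRE_M2
print(f"estimated molecule size: {thickness_m * 1e9:.1f} nm")  # ~2.5 nm
```

That lands within an order of magnitude of the actual length of an oil molecule, which is remarkable for an experiment performed with a teaspoon and a pond.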

To get a sense for the molecular scale, Arnold gives us a demonstration of Dragon’s Beard candy, a preparation seen in China, Turkey and Iran. Cook sugar to a particular hardness and you can stretch and fold it at will. Arnold takes a centimeter-thick piece of sugar, turns it into a loop, and stretches it. Folding it once, it’s now two loops. He repeats until we have over 15,000 strands, each about a micron thick. It’s flavored with cocoa, but Arnold likes to serve it with vinegar and mustard powder, with peanuts wrapped inside.
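The arithmetic of the demonstration is just repeated doubling. A quick sketch, assuming the pull starts from a single centimeter-thick strand as described above:

```python
# Each fold doubles the strand count and halves the thickness.
strands = 1
thickness_um = 10_000.0  # 1 cm, in microns
folds = 0
while strands < 15_000:
    strands *= 2
    thickness_um /= 2
    folds += 1
print(folds, strands, round(thickness_um, 2))  # 14 folds -> 16384 strands, ~0.61 microns each
```

Fourteen folds are all it takes to go from one strand to more than fifteen thousand, each in the neighborhood of a micron thick.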

McGee would like us to take Count Rumford as seriously as we tend to take Franklin. Rumford was a Colonial New Englander who was on the wrong side of the war, so he spent much of his career in England. Amongst his many discoveries, Rumford discovered that slow cooked meat is delicious, a discovery that’s come into fashion recently with sous vide cooking. Rumford accidentally discovered the technique by trying to cook a leg of mutton in his potato drier, and left it overnight. In the morning, he encountered an “amazing aroma”. And because he was scientifically minded, he replicated the experiment and tried an objective taste test. At a cocktail party, he cooked one leg of mutton over a fire and another using the slow technique, put them at opposite sides of the room, then weighed the remnants – the slow-cooked mutton was far more popular.

The opposite of Rumford was Justus Liebig, a German chemist who was a theoretician, not an experimentalist. Working only from his own “brilliance”, not from experiments, Liebig introduced a new way of cooking meat – searing it to seal in the juices. It’s revolutionary, but also really bad. Apparently he never actually tasted it.

In 1969, the British scientist Nicholas Kurti suggested that we bring scientific methods back to ordinary, everyday phenomena. “I think it is a sad reflection on our society that while we can and do measure the temperature in the atmosphere of Venus, we do not know what goes on inside our soufflés”. His investigations were part of a movement towards “soft matter science”, a study of phenomena like soap bubbles that led to a 1991 Nobel Prize.

McGee found himself investigating these phenomena in 1984 when he wrote his book on the history of food. In collaboration with scientists, he began testing a Julia Child assertion about whipping egg whites in a copper bowl – Child advocated always whipping in copper. Experiments testing whipping in copper demonstrated that it took a much longer time, but led to lighter whites. The paper was eventually accepted by Nature, though one reviewer commented, “The science is good, but the subject is fluffy.”

While much of what’s emerged in science in the kitchen, like molecular gastronomy, is fairly recent, nouvelle cuisine is very old. In 1759, a poem was published that read:

Every year nouvelle cuisine
Because every year tastes change;
And every day there are new stews:
So be a chemist, Justine.

French cooking, historically, has been far from experimental. Classic French cooking as compiled by Escoffier and others codified cuisine to the point where it was difficult to innovate, since the classic textbook offers 100 “correct” recipes for beef tenderloin. McGee cites Michel Bras as helping invert these dynamics with the melting chocolate cake, an inversion of the “correct” idea that a cake is surrounded by a sauce – instead, a cake contains a ganache. A later dish, the Gargouillou, recreated a salad as a walk through a garden, built from whatever ingredients were most appropriate on a given day.

Chef Jacques Maximin was influenced by these experiments and observed, “To be really creative means not copying.” His maxim struck a chord especially with Ferran Adria, who recreated the Gargouillou as an endlessly surprising salad – nothing is quite what it seems. Adria went on to thoroughly revolutionize cuisine as we know it, with techniques like flavored foams and the spherification of ingredients like melon into texturally odd balls of flavor.

He’s had many followers. Joan and Jordi Roca use rotary evaporators to separate aromas from ingredients – this makes possible a dish of foods that are shades of white which have flavors usually associated with visually dark ingredients. Jose Andres experimented with a chemical most often used to make cough drops, offering a bonbon of liquid olive oil within a clear shell. Wylie Dufresne uses an enzyme called “meat glue” to offer a chicken nugget that’s white meat wrapped in dark, wrapped in skin. And now the field has been exhaustively documented by Nathan Myhrvold, who’s published a massive five-volume book, Modernist Cuisine.

At this point, McGee gives the reins to Arnold, who offers a rapid-fire walk through some of his favorite techniques and his creative process. He shows us a Japanese ring that features a wavy woodgrain effect, produced by beating two different metals together. Arnold achieved something similar using fish as a way of persuading Hobart, the cooking machine company, to give him a really badass slicer. Using meat glue and casein, he glues salmon and fluke together and slices them into a thin sheet that looks a little like mortadella and a bit like wood grain. It’s served with crème fraîche seasoned with nitrogen-frozen herbs, and a fennel apple salad infused with curry and pressure-cooked mustard seeds, a veritable tour of modernist technique on a plate.

(The nitrogen chilled herbs allow fresh herbs to be broken into very small pieces, as you would break up a dried herb, but maintain the fresh flavor and texture. Arnold recommends you blanch your fresh herbs, flash freeze in liquid nitrogen, shatter into tiny pieces and pass through a chinois, using only the tiny bits that escape the mesh.)

Using agar, a gelling agent made from seaweed, Arnold produces a concord grape jelly, a thick, stiff substance. He points out that it cuts cleanly and can’t be put back together. But if you break it violently – in a blender, say – you get a different effect: a microgel or fluid gel. It looks like a puree on the plate, but tastes like juice in the mouth.

Agar works well as a clarifier too, in lower concentrations. Arnold makes a loose gel of lime juice, then uses a whisk to separate it into “whey and curds”. He passes this through cheesecloth, making rude comments about “gently massaging the sack”, before producing a liquid that looks very much like water, but turns out to have intense lime flavor.

We clarify liquids, he tells us, because then we can infuse them into other foods. “We can make a cucumber better by adding liquor to it… we can make a lot of things better by adding liquor to them.” Injection techniques work better with clear liquids, and Arnold shows us how to infuse a cucumber with lime and sugar in a vacuum machine. The vacuum pulls air out of the cucumber, and rapidly threatens to boil it, as liquids boil at lower temperatures in vacuum. (Arnold recommends you heavily chill your ingredients as you vacuum infuse…) While the air is sucked out, the liquid is incompressible, and as air floods back into the chamber as he turns the vacuum off, liquid infuses into the cucumber in a flash, turning the vegetable into something that looks like stained glass. “It’s one way to get something that looks cooked, but still has crisp, clean lines to it.”

You can rapidly infuse using pressure as well. Arnold puts vodka and coffee into an ISI whipped cream maker, and uses nitrous oxide to force the coffee into the vodka. What results is heavily flavored, but not carbonated – the tingle of carbonation comes from carbon dioxide escaping from solution. Nitrous oxide offers pressure and fluff without carbonation.

Arnold offers his advice on carbonating some of his favorite things. As with infusion, clarified liquids work better. “If you’re going to carbonate liquor – which I highly recommend – you’re going to need more pressure than carbonating water because CO2 is more soluble in alcohol than in water.” You can force carbonate a wine at 30 psi, sake at 35psi, and liquors at about 40 psi.

Why would you infuse vodka with coffee? “The flavors you pull out of a product are dependent on time, temperature, pressure.” You don’t just get yummy coffee vodka – you can get different flavors than you’d ever experience through conventional means.

It must be fun to have a kitchen where liquid nitrogen is as common as hot water. Arnold chills a glass with liquid nitrogen, pointing out that it’s cold only on the inside, and doesn’t generate condensation. He pours himself a carbonated gin and lime concoction as the audience is served marshmallows frozen with liquid nitrogen. McGee returns to explain the history of the marshmallows – they were served at The Fat Duck as both a palate and “mind cleanser”. The chef responsible wanted to reset his diners’ expectations, so he served them a marshmallow flavored with lime, tea and vodka and frozen. The heat of your mouth melts the treat and you find yourself with vapors pouring from your mouth and nose. We have a similar experience with the frozen marshmallows, and like the Fat Duck diners, find ourselves laughing, our expectations reset.

April 19, 2011

Protocol.by – sharing how you want to be contacted

Filed under: Berkman,Geekery,ideas — Ethan @ 2:09 pm

Hugo Van Vuuren, Berkman Fellow and graduate student at Harvard’s Graduate School of Design, and Gregg Elliott, researcher at MIT’s Media Lab, tell us that we’re experiencing a global communications “crisis”, one that we can address through better communications protocols.

Hugo sets the stage at today’s Berkman Center lunch talk, showing us the beginning of this video from design firm JESS3:

JESS3™ / The State of The Internet from JESS3 on Vimeo.

He summarizes the crisis, as he sees it, with a quote from Swiss designer Tina Roth Eisenberg: “Too many channels. Too many messages. Too much noise. Too much guilt.”

Lots of people are trying to build tools to cope with this flood of information. (Google’s priority inbox is one possible example of a tool to manage an overload of messages.) There’s less effort focused on overcoming the guilt. When we see people talking about reaching “inbox zero” or declaring “email bankruptcy“, they are looking for ways to deal with the guilt.

Even in an age of social media, mail and phone contact are massive in relation to new forms of communication. Randall Munroe’s legendary Online Communities map from 2005 has been updated for 2010, showing that massive social networks like Facebook are dwarfed by SMS, phonecalls and email.

Some recent articles in the New York Times – “Don’t Call Me, I Won’t Call You“, “Keep Your Thumbs Still When I’m Talking to You“ – suggest that we’re seeing a conflict in cultural norms. Some people (me, for one) don’t answer the phone except for scheduled phonecalls, which is deeply confusing for people who consider phones the primary way to contact people. Some people check mobile phones while carrying on conversations, which can feel extremely rude to people who focus on face to face contact. Hugo points out that there can be differences in community protocol from one side of a university to the other: “The Media Lab is much more of a phone-centered place than the GSD. At the GSD, email is something you do at your desk…”

We’re starting to see the explicit emergence of communications protocols. danah boyd‘s “email sabbatical” involves discarding all email received during a vacation – if you want to reach her, her autoresponder tells you, email her again once she’s come home. Tim Berners-Lee’s home page includes a complex protocol about what you should and should not email him. Harvard CS professor Harry Lewis suggested to Hugo that one of the massive problems in organizing a conference is figuring out how to contact academics, who tend to hide behind different media, letting some emails go to administrative assistants while “real”, direct email addresses are carefully preserved commodities.

Hugo shows five.sentenc.es, an intriguing attempt to simplify email conversations by declaring that emails will be answered in five sentences or less. The hope is that, by declaring a different protocol, it will no longer be considered rude to answer emails compactly and succinctly. But this is “a kernel, not a generalized idea” for communications, Hugo offers. We need something broader and more inclusive.

One option is “stop and go signaling”, which we see on tools like instant messenger. But these status messages, which Greg explains used to be expressive, much like Facebook status messages, have turned into their own sort of protocol. “Away usually means that you’re at your keyboard, but busy.” It’s a step in the right direction, but perhaps too limited a vocabulary.

Hugo shows us a code of manners presented by the “Children’s National Guild of Courtesy”, a British organization from early last century. There’s no single set of norms for behavior these days, handed down by institutions like this one. Norms are now set by individuals, or illustrated by example by leaders within communities.

To address these issues, Greg suggests that we need to:
– Define our rules of engagement
– Organize a system to execute on those rules, and
– Share those rules and expectations

Protocol.by is a first pass at defining and sharing these rules of engagement. Coming out of a closed alpha test shortly, it lets you register an account and compactly state the ways in which you’d prefer to be contacted. Greg explains that he dislikes spontaneous phonecalls – his protocol tells people not to call him before noon, and not to expect an answer to unscheduled calls. For emails, he urges correspondents to avoid polite niceties and get to the point. For people unsure of how to reach him, these protocols make it easier to make contact in a way that’s minimally intrusive and maximally effective. (I have a protocol, if you’re interested…)
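Protocol.by’s actual data model wasn’t described in the talk, but the idea of a machine-readable “rules of engagement” record can be sketched. Everything below, from the field names to the example rules, is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ContactRule:
    channel: str      # "email", "phone", "im", ...
    preference: int   # lower number = more preferred
    notes: str = ""   # human-readable expectations

@dataclass
class Protocol:
    owner: str
    rules: list = field(default_factory=list)

    def preferred_channel(self) -> str:
        # The channel with the lowest preference number wins.
        return min(self.rules, key=lambda r: r.preference).channel

# A hypothetical protocol along the lines Greg describes:
greg = Protocol(owner="greg", rules=[
    ContactRule("email", 1, "skip the niceties; get to the point"),
    ContactRule("phone", 2, "no calls before noon; unscheduled calls may go unanswered"),
])
print(greg.preferred_channel())  # email
```

A record like this is trivially embeddable in a webpage or autoresponder, which is exactly the distribution problem the site is trying to solve.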

The goal, Hugo offers, is for the site to become a “social anchor” that helps bridge across multiple identities and online presences. In the long term, it could plug into location-based services and offer richer, more targeted information on how to contact people politely. A group could also use protocol.by with voting systems, helping shared group protocols emerge.

Going forward, protocol.by might offer suggested protocols based on your identity – if you’re a technophile, you might want to be contacted with email and IM, not phone, for instance. Over time, these might emerge as a small set of cultural norms, rather than purely personal norms.

There are dozens of questions from the Berkman crowd, as well as many observations phrased as questions. Some of the highlights, to the best of my reporting ability:

Q: Is there a revenue model for protocol.by?
A: Not at present – it’s a research project. In the long run, there might be fun ways to use the data, perhaps the way OKCupid analyzes dating information, in a way that might have financial value.

Q: Protocol-free communication leaves a lot of ambiguity in communications, which can be a good thing. Is someone not answering their email because you contacted them the wrong way, or because they don’t want to talk to you? Is it such a good idea to squeeze out this ambiguity?
A: You’ve got a good degree of freedom with the tool in how explicit you want to be. If you offer promises – “Emails will be answered within 48 hours” – you eliminate ambiguity. But a prioritized list of communication protocols is still pretty ambiguous.

Q: This system is very elegant, but it doesn’t recognize that you might communicate differently with a babysitter calling you about an emergency and an undergrad asking to interview you for a paper. How does the system handle this?
A: Protocols will likely differ for complete strangers versus friends and family. Protocol.by is mostly for people outside your circle of trust.

Q (David Weinberger): How many users do you need for this to be an effective research project and how will you get them?
A: There are about 500 users thus far. Having a few thousand may let us run bigger experiments. We’ll get more by embedding the tool into webpages and social networks.

Q (David Abrahms): I might want to be contacted via phone, but if I’m in Beijing, I’d like the system to accommodate that.
A: Great idea.

Q: (David Weinberger) There’s certainly a need for more metadata about your norms when you communicate with people outside your community. We need it for IP issues as well – Creative Commons helps us communicate what you can do with your content. Maybe this is a model for getting people to adopt this protocol?
A: Figuring out how to embed this well is going to help us work through these issues.

David took notes, too…

November 10, 2010

Those ducking yankers who designed T9

Filed under: Geekery,Just for fun — Ethan @ 10:39 am

Someone on Twitter pointed me to Damn You Auto Correct, a site that’s at least as narrow in focus as your average LOLCats site, but pretty funny nevertheless. I suppose it’s useful mostly as a warning not to invite someone over for gelato unless you’ve really thought things through. Then again, anyone who’s listened to Benjamen Walker’s 13th episode of Too Much Information, where an innocuous text message to a notoriously cranky rock star is transformed into a curt insult by autocorrect, knows the stakes can be higher than a laugh. Suffice it to say, I’ve never typed “NP. Thanks so much” on my iPhone since.

It does seem like the manufacturers of autocorrect should keep up with the times in editing their dictionaries, realizing that “NP” has become pretty common slang and finding a different way to correct misspellings without alienating quick-fingered radio producers and SMSing computer scientists. And then I remembered a routine from British comics Armstrong and Miller:

The key phrase for me: “Our job, Gilbert, is to offer people not the words they do use but the words they should use.”

And you thought technology was value neutral…

September 2, 2010

Crisis Commons, and the challenges of distributed disaster response

Filed under: Berkman,Developing world,Geekery,ideas — Ethan @ 1:52 pm

Heather Blanchard, Noel Dickover and Andrew Turner from Crisis Commons visited the Berkman Center Tuesday to discuss the rapidly growing technology and crisis response space. Crisis Commons, Andrew tells us, came in part from the recognition that the volunteers who respond to crises aren’t necessarily amateurs. They include first responders, doctors, CEOs… and lately, they include a lot of software developers.

Recent technology “camps” – Transparency Camp, Government 2.0 Camp – sparked discussion about whether there should be a crisis response camp. Crisis Camp was born in May, 2009 with a two-day event in Washington DC which brought together a variety of civic hackers who wanted to share knowledge around crisis technology and response. The World Bank took notice and ended up hosting the Ignite sessions associated with the camp, giving developers a chance to put ideas for crisis response in front of people who often end up providing funds to rebuild after crises.

The World Bank wasn’t the only large group interested in working with crisis hackers. Google, Yahoo! and Microsoft came together to found the Random Hacks of Kindness event, designed to let programmers “hack for humanity” in marathon sessions around the world.

While these events preceded the earthquake earlier this year in Haiti, that crisis was the seminal event in increasing interest in participating in technology for crisis relief efforts. A crisis camp to respond to the Haitian earthquake involved 400 participants in five cities and launched 13 projects. Over time, the crisis camp model spread to Argentina, Chile and New Zealand, with developers focused on building tools for use in Haiti, Chile and Pakistan. Blanchard explained that the events provided space for people who “didn’t want to contribute money – they wanted to do something.”

The camps had some tangible outcomes:
– I’m Okay, a simple application that allows people to easily tell friends and family that they’re okay in an emergency, was developed at Random Hacks of Kindness
– Tradui, an English/Kreyol dictionary for Android, was developed during the Crisis Camps
– Crisis Camps also developed a better routing protocol to enable point-to-point wireless between camps in Haiti, writing new drivers in 48 hours that were optimized for the long ping times associated with using WiFi over multi-kilometer distances
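The “long ping times” problem comes from physics: 802.11 hardware waits a fixed, short interval for an ACK, tuned for indoor distances, and over multi-kilometer links the round-trip propagation delay alone can exceed it. A rough illustration of the delays involved (the actual Crisis Camp driver changes aren’t documented here, so these numbers are only the speed-of-light floor):

```python
# Round-trip propagation delay that an 802.11 ACK timeout must absorb.
C = 3e8  # speed of light, in m/s

def round_trip_delay_us(link_km: float) -> float:
    """Microseconds for a signal to travel out and back over link_km."""
    return (2 * link_km * 1000) / C * 1e6

for km in (0.1, 5, 20):
    print(f"{km:>5} km link: {round_trip_delay_us(km):.1f} us round trip")
```

At 20 km the round trip alone is over 130 microseconds, far beyond what an indoor-tuned driver will wait, so every frame looks lost and gets needlessly retransmitted; raising the ACK and slot timeouts is the standard fix for long links.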

Perhaps the most impressive collaboration to come from the Crisis Camps was work on OpenStreetMap for Port au Prince. Using satellite imagery released by the UN, a team created a highly detailed map, leveraging the work of non-programmers to trace roads on the satellite images and diasporans to identify and name landmarks and streets. As the map improved in quality, the volunteers were eventually able to offer routing information for relief trucks, based on road damage that was visible on the satellite imagery. A convoy would request a route for a 4-ton water truck, and volunteers would use their bird’s eye view of the situation – from half a continent away – to suggest the safest route. Ultimately, the government of Haiti requested access to the information, and Crisis Camps provided not only the data, but training in using it.

The conversation turned to the challenges Crisis Camps have faced in making their model work:
– About 1/3rd of the participants are programmers. The others range from the “internet savvy” to those with complementary skills.
– Problems and requirements are often poorly defined
– It’s challenging to match volunteers to projects
– There’s a shortage of sustainable project management and leadership
– Projects often suffer from undocumented requirements and code, and few updates on project status.
– Little work focuses on usability, privacy and security.
– Code licensing often isn’t carefully considered, and issues can arise about reusability of code on a licensing basis.
– Projects can be disconnected from what’s needed on the ground
– Disconnection happens in part because relief organizations don’t know what they want and need and are too busy to work with an untested, unproven community
– Volunteer fatigue – the surge of interest after a disaster tends to dissipate within four weeks
– There’s a lack of metrics and performance standards to evaluate project success.

The goal is to move from a Bar Camp/Hackathon model to a model that’s able to build sustainable projects. This means bringing project management into the mix, and asking hard questions like, “Does this project have a customer? Is it filling a well-defined need?” It also means building trust with crisis response organizations and groups like the World Bank and FEMA, who can help bring volunteer technology groups and crisis response groups together.

Crisis Commons sees itself as mediating between three groups: crisis response organizations (CROs) like the Red Cross; volunteer technology communities (VTCs) like OpenStreetMap; and private sector companies willing to donate resources. Each group faces a set of challenges in engaging with these sorts of projects.

Crisis response organizations have a difficult time incorporating informal, ad-hoc citizen organizations into their emergency response plans. There’s a notion in the crisis response space of “operating rogue” if you’re not formally affiliated with an established relief organization… which further marginalizes volunteer tech communities. Many CROs have little tech understanding, which means they aren’t able to make informed decisions about collaboration with technical volunteers. In a very real way, crises are economic opportunities for relief organizations – that reality doesn’t breed resource sharing, which in turn gets in the way of sharing best practices and lessons learned.

Volunteer tech communities frequently don’t understand the processes used by CROs, and frequently fail to understand that there’s often a good reason for those processes. While VTCs provide tremendous surge capacity that could help CROs, if there’s no good way for CROs to use this surge capacity, it’s a waste of effort on all sides. At the same time, tech communities inevitably suffer from the “CNN effect” – when crises are out of sight, they’re out of mind, and participation slumps. This is particularly challenging for managing long-term projects… and tech communities have massive project management and resource needs. Finally, successful VTCs can find themselves in a situation where they have a conflict of interest – they’re seeking paid work from relief organizations and may choose to cooperate only with those who can support them in the long term.

Private sector partners are usually participating in these projects led by their business development or corporate social responsibility divisions… while cooperation with the other entities often requires technical staff. Response organizations are often the clients of private sector players – the Red Cross is a major customer for information systems – which can create financial conflicts of interest. And working with large technology companies often raises intellectual property challenges, especially around joint development of software.

From meetings with a subset of crisis response organizations, Crisis Commons understands that there’s a need for long-term relationships between tech volunteers and relief organizations, tapping the innovation power of these charitably minded geeks. But this requires relief organizations to know what solutions are already out there and what are reasonable requests to make of volunteers. And volunteer organizations need to understand the processes CROs have and how to work within them.

The hope for Crisis Commons is to become an “independent, nonpartisan honest broker” that can “bridge the ecosystem and matrix the resources.” This means “translating requirements of the CRO to the crisis crowd, helping the public understand CRO requirements,” and the reasons behind them. This could lead towards being able to set up a service like “Crisis Turk”, which could allow internet savvy non-programmers to engage in data entry tasks during a crisis.

In the long term, Crisis Commons might emerge as an international forum for standards development and data sharing around crises. Building capacity that could be active between crises, not just during them, they could direct research projects on lessons learned from prior disaster relief, could build a data library and begin preparing operations centers and emergency response teams for future crises. Some scenarios could involve managing physical spaces to encourage cooperation within and between volunteer tech teams and providing support for future innovation through a technology incubation program.

Starting from the shared premise the Crisis Commons founders presented us with – “Anyone can help in a crisis” – the discussion at Berkman focused on the structure Crisis Commons might take. The goal behind a “commons” structure is to be an independent and trusted actor in the long term, an objective source of tech requirements, and a channel for bringing non-market solutions to the table. But the founders realize that this is an inherently competitive space, and that volunteer organizations might find themselves in conflict with professional software developers in providing support to relief organizations, or with relief organizations themselves if volunteer organizations began providing direct support.

It’s also possible that another player in the space could compete with Crisis Commons in this matchmaking role. Red Cross could develop an in-house technology team focused on collaborating with technology volunteers. Google could use the power of their tech resources to provide services directly to relief organizations. A partnership like Random Hacks of Kindness could emerge as the powerful leader in the space. Other volunteer technology organizations – Crisis Mappers, Strong Angel – might see themselves providing this bridging function. FEMA could start a private-public partnership under the NET Guard program. What’s the sweet spot for Crisis Commons?

One of our participants suggested that Crisis Commons could be valuable as a developer of standards, working to train the broader community about the importance of standards, and on the challenge of defining problems where solutions would benefit a broad community.

Another participant, who’d been involved with several Crisis Camp events, worried that “the apps, while neat, never really made it into the field” – suggesting that the problems raised are real, not theoretical. It’s genuinely very difficult for tech volunteers to know what problems to work on… and hard for relief organizations under tremendous pressure to learn how to use these new tools.

This, I pointed out, is the problem that could prove most challenging for Crisis Commons in the long term. When crises arise, people want to help… but it’s critical that their help actually be… helpful. Clay Shirky told the story of his student, Jorge Just, who’s worked closely with UNICEF to develop RapidFTR, a family tracking and reunification tool. It’s been a long, engaged process with enormous amounts of time needed for the parties to understand each other’s needs and working methods… and it’s easy to understand why it might be difficult to convince volunteers to participate to this depth in a project.

I offered an observation from my time working on Geekcorps – I meet a lot of geeks who are convinced that the tech they’re most interested in – XML microformats, mesh wireless, cryptographic voting protocols – are precisely what the world needs to solve some pressing crisis. Occasionally, they’re right. Often, they’re more attached to their tech of choice than to addressing the crisis in question.

As such, the toughest job is defining problems and matching geeks to problems. At Geekcorps, it often took six months to design a volunteer assignment, and a talented tech person needed to meet several times with a tech firm to understand needs, brainstorm projects and create a scope of work, so we could recruit the right volunteer. While that model was expensive – and ultimately, made Geekcorps unsustainable – I think aspects of it could help Crisis Commons find a place in the world.

I ended up suggesting that Crisis Commons act as:
– a consultant to relief organizations, helping them define their technical needs, understand what was already available commercially and non-commercially and to frame needs to volunteer communities who could assist them
– a matchmaking service that connected volunteer orgs to short term and long term tech needs, preferably ones that had been clearly defined through a collaborative process
– a repository for best practices, collective knowledge about what works in this collaboration.

It’s unclear whether this is the right solution for Crisis Commons or the road they’ll follow, but I came away with a strong sense that they are wrestling with the right questions in figuring out how to be most effective in this space. I’m very much looking forward to discovering what they come up with.

July 29, 2010

Counting International Connections on Facebook

Filed under: Geekery,xenophilia — Ethan @ 12:37 pm

My friend Onnik Krikorian has become a Facebook evangelist. Onnik, a Brit of Armenian descent living in Armenia, is the Global Voices editor for the Caucasus, which means he’s responsible for rounding up blogs from Armenia, Georgia and Azerbaijan, as well as parts of Turkey and Russia. This task is seriously complicated by the long-term tensions in the region. Armenia and Azerbaijan are partisans in a “frozen” conflict – the Nagorno-Karabakh war, which lasted from 1988 to 1994 and remains largely unresolved.

It’s taken Onnik years to build up relationships with bloggers in Azerbaijan, relationships he needs to accurately cover the region. Azeri bloggers are often suspicious of his motives for connecting and wonder whether he’ll cover their thinking and writing fairly. But Onnik tells me that Facebook has emerged as a key space where Azeri and Armenians can interact. “There are no neutral spaces in the real world where we can get to know each other. Facebook provides that space online, and it’s allowing friendships to form that probably couldn’t happen in the physical world.” (Onnik documents some of the conversations taking place between Azeri and Armenian bloggers in a recent post on Global Voices.)

Graph from the front page of peace.facebook.com

Onnik was talking about his love of Facebook at an event hosted by the US Institute for Peace, where I and colleagues at George Washington University and Columbia were presenting research we’d carried out on the use of social media in conflict situations. Onnik’s hopes for Facebook as a platform for peace were echoed by Adam Conner of Facebook, who showed the company’s new site, Peace on Facebook. The site documents friendships formed between people usually separated by geography, religion or politics. Some of the statistics seem clearly like good news – 29,651 friendships between Indians and Pakistanis per day. Others are rather dispiriting – 974 Muslim/Jewish connections in the past 24 hours.

I’m a data junkie, and there’s little more frustrating to me than an incomplete data set. Basically, by showing us a very small portion of the nation to nation social graph, Facebook is hinting that the whole graph is available: not just how many friendships Indian Facebook users form with Pakistani users, but how many they form with Americans, Canadians, Chinese, other Indians, etc. Obviously, this is info I’m interested in – I’ve been building a critique that argues that usage of social networking tools to build connections between people in the same country vastly outpaces use of these tools to cross national, cultural and religious borders.

Without the whole data set, it’s hard to know whether these numbers are encouraging or not. Are 29,651 Indian/Pakistani connections a lot? Or very few, in proportion to how many connections Indians and Pakistanis make on Facebook in total? In other words, we’ve got the numerator, but not the denominator – if we had a picture of how many connections Indians and Pakistanis make per day, we might have a better sense for whether this is an encouraging or discouraging number.

I made a first pass at this question this morning, using data I was able to obtain online. Facebook tells us that the average user has 130 friends – a number that might be out of date, as the same statistics page lists “over 400 million users”, not the half billion currently being celebrated in the media. (Ideally, we’d like to know how many new friends are added per day so we can compare apples to apples, but you go to war with the data you have…)

We also need a sense for how many Facebook users there are per country. Here, we turn to Nick Burcher who publishes tables of Facebook users per country on a regular basis. Nick tells readers that the data is from Facebook, and the Guardian appears to trust his accounts enough to feature those stats on their technology blog. They are, alas, incomplete – Burcher published stats for the 30 countries with the largest number of Facebook users, and revealed a few more countries in the comments thread on the post.

Because we don’t have data for Pakistan, we can’t answer the India/Pakistan question. But we can offer some analysis for Israel/Palestine and Greece/Turkey.

Peace on Facebook tells us that there were 15,747 connections between Israelis and Palestinians in the past 24 hours. The term “connection” is not clearly defined on the site – it’s not clear whether a reciprocated friendship counts as one connection or two. Because I’m going to count both Israeli friends and Palestinian friends, it makes sense to count a reciprocal friendship as two connections. (If Facebook is counting differently than I am, my numbers are going to be half what they should be.)

3,006,460 Israelis are Facebook users… a pretty remarkable number, as it represents 39.92% of the total population of the nation and roughly 57% of the country’s 5.3 million internet users. There are very few Palestinian Facebook users – 84,240, or 2.24% of the population… This mostly reflects how few Palestinians are online; Facebook is used by 21% of Palestine’s 400,000 internet users.

At 3,090,700 Palestinian and Israeli Facebook users, we should see almost 402 million friendships involving an Israeli or a Palestinian. If we extrapolate from 15,747 friendships a day to 5.7 million a year, we’re looking at Israeli/Palestinian friendships representing 1.43% of friendships in the Israeli/Palestinian space… with all sorts of caveats. (The biggest is that the use of a year-long interval to calculate total friendships is totally arbitrary and probably not supportable. If you’ve got better data or a suggestion for a better estimation method, please don’t hesitate to speak up.)

We get very different results from looking at Greece and Turkey. 2,838,700 Greeks (25.11% of the national population) are Facebook members, as are 22,552,540 Turks (31.08% of their population). That’s roughly 3.3 billion friendships projected, and our year-long approximation finds just over 4 million Greek/Turkish connections. That suggests that only 0.12% of friendships in the pool are Turkish/Greek friendships.
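For readers who want to check my arithmetic, here’s a back-of-the-envelope sketch of the calculation in Python. Note my assumptions: the 130-friend average comes from Facebook’s statistics page, the one-year window is (as I said) arbitrary, and the ~11,000/day Greek/Turkish figure is just backed out of the ~4 million/year estimate above.

```python
AVG_FRIENDS = 130  # Facebook's reported average friend count per user

def cross_border_share(users_a, users_b, connections_per_day):
    """Estimate cross-border friendships as a percentage of all friendships
    involving either country's users: the numerator/denominator problem."""
    denominator = (users_a + users_b) * AVG_FRIENDS  # total friendship pool
    numerator = connections_per_day * 365            # arbitrary one-year window
    return 100 * numerator / denominator

# Israel/Palestine: 15,747 connections/day (from Peace on Facebook)
print(round(cross_border_share(3_006_460, 84_240, 15_747), 2))      # 1.43

# Greece/Turkey: ~11,000 connections/day, backed out of the ~4M/year estimate
print(round(cross_border_share(2_838_700, 22_552_540, 11_000), 2))  # 0.12
```

The same function would answer the India/Pakistan question immediately if we had a figure for Pakistani Facebook users.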

What explains the disparity between these numbers? While there’s certainly a long history of tension between Greece and Turkey, the last major military confrontation between the nations ended in 1922. Israel and Palestine, on the other hand, are involved with an active conflict and Israel’s recent incursion into Gaza ended a few months ago. What gives?

It’s possible that the numerous efforts designed to build friendship between Israeli and Palestinian youth are having an impact, much as Onnik’s work in Armenia and Azerbaijan is showing positive results. But there’s another possibility – 20% of the Israeli population are Arab citizens of Israel, and the majority of this group is of Palestinian origin. It’s certainly possible that the high percentage of Israeli/Palestinian friendships includes a large set of friendships between people of Palestinian origin in Israel and Palestinians… indeed, given the difficulty both populations have in meeting in physical space, we’d expect to see increased use of the internet as a meeting space to compensate. This could be a factor in explaining India/Pakistan friendships as well, and Albanian/Serbian friendships too, as the emergence of new nations through partition and conflict left groups united by culture, separated by borders.

My goal in this post isn’t to belittle the power of Facebook for providing a border-transcending space where friendships can be built – Onnik’s story makes it clear that Facebook is a real and powerful tool for good, at least in the Armenian/Azeri space. But I continue to think that we overestimate how many of our online contacts cross borders and underestimate how often these tools are used to reinforce local friendships. I’d invite friends at Facebook to correct my numbers or my math… and mention that we could do a much better job of answering these questions if Facebook would release a data set that shows us all the cross-national connections made on the service.


Ross Perez has created some great interactive maps that visualize the adoption of Facebook around the world, using Burcher’s data – worth your time.

May 22, 2010

Democrats, Republicans and Appropriators

Filed under: Geekery — Ethan @ 2:29 pm

I had the good fortune to catch a small part of a conference at Harvard yesterday on text analysis. Good fortune, because I was there long enough to hear Justin Grimmer‘s talk on his dissertation, Measuring Reputation Outside Congress. Grimmer is interested in an important – and tough to answer – question: how responsive are the people we elect to their constituents?

We could look for ways to answer this question by studying the voting record of legislators (qualitatively or quantitatively), examining their work in Washington (through Congressional literature) or through examining their communications with constituents at home. This latter set of questions is referred to as the “Home Style” of a politician, following the work of Richard Fenno (1978).

Home style tells us something about politicians that their voting record often doesn’t, Grimmer tells us. He invites us to compare Senators Jeff Sessions (R-Alabama) and Richard Shelby (also R-Alabama). If we consider them simply in terms of their voting behavior, they look nearly identical – they vote together the vast majority of the time and both can be described, in voting terms, as conservative Republicans.

But anyone who knows Alabama politics will tell you that Sessions and Shelby are vastly different guys. Grimmer characterizes Sessions as “an intense policy guy” who will bore you to tears with incredibly long, thorough explanations of issues when all you wanted was a photo with him. Shelby, on the other hand, is all about bringing home the bacon… and there are Shelby Halls at two Alabama universities to prove it.

Evidence suggests that representational style – policy versus pork, heavy versus light communicators – cuts across party lines. And it’s likely that politicians have diverse, stable, nonpartisan home styles. If we can find ways to characterize these differences – Grimmer proposes studying the difference in communications with constituents that claim credit and those that discuss policy – we have the opportunity to compare across senators, and connect these differences to what senators do within the institutions of power.

When Fenno studied the “home style” of politicians in 1978, he engaged in “soaking and poking” – intense participant observation, which involved following 18 members of Congress over 8 years. This method, Grimmer observes, is expensive, underrepresentative (and really hard to replicate as a graduate student). Instead, we might study texts produced by senators. One candidate is newspaper coverage… but editorial bias makes it hard to use news articles as representative of senatorial communications. We might use the constituent newsletters produced by Senate offices… but they’re sent using congressional franking privileges and are very hard to get hold of.

Instead, Grimmer has been studying the press releases that Senate offices produce – over 64,000 in all. The average senator issues 212 press releases per year, and while the quantity produced has a wide range (some produce only a few dozen, while Hillary Clinton’s Senate office produced over a thousand a year), there’s no strong correlation between political party and usage of the tool.

After collecting the releases, Grimmer used machine learning techniques to separate transcripts of floor statements (which are usually released as press releases) from pure press releases, which let him study how a senator chooses to speak to her constituents. Once that sorting has taken place, the task is pretty simple – determine the topic of a press release. This is simplified by the fact that congressional aides try hard to ensure that press releases are on a single topic.

Grimmer’s work clusters senators by the topics discussed in their press releases. His research reveals four basic clusters:

– Senate Statespersons. These folks speak like they’re running for president… and they may well be. Their releases discuss the Iraq war, intelligence issues, international relations and budget issues. John McCain’s office communicates this way.

– Domestic policy. These senators are also policy wonks, but their focus is domestic – the environment, gas prices, DHS, and consumer safety.

– Pork and policy – Communication from these senators includes discussions of water rights grants, but also has serious discussion of health and education policy. Sometimes this is because the office simply issues lots and lots of releases – (former) Senator Clinton’s office fits in this camp.

– Appropriators – These guys communicate about the grants they’ve won – fire grants, airport grants, money for universities, and for police departments.
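To give a feel for the shape of this classification task, here’s a toy sketch in Python. To be clear: this is not Grimmer’s actual statistical model (he used far more sophisticated machine learning over the full corpus), and the keyword lists are invented for the example – it just illustrates the idea of assigning a single-topic press release to its best-matching cluster.

```python
# Invented, illustrative keyword lists for the four clusters -- not Grimmer's model.
CLUSTERS = {
    "statesperson": {"iraq", "intelligence", "foreign", "relations"},
    "domestic_policy": {"environment", "gas", "consumer", "safety"},
    "pork_and_policy": {"water", "health", "education", "rights"},
    "appropriator": {"grant", "funding", "airport", "awarded"},
}

def classify(release: str) -> str:
    """Assign a press release to whichever cluster shares the most words with it."""
    words = set(release.lower().split())
    return max(CLUSTERS, key=lambda name: len(words & CLUSTERS[name]))

print(classify("senator announces airport grant awarded to the county"))
# prints: appropriator
```

The real work, of course, is in learning the clusters from the 64,000 releases rather than hand-writing them, and in first filtering out the floor-statement transcripts.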

As well as clustering press releases based on topic, Grimmer’s work considers another metric – how often a press release claims credit for an appropriation. There turns out to be a vast spectrum, ranging from John McCain, who basically only issues statements about policy, to a guy like Mike DeWine, an Ohio Republican, for whom virtually every press release claims credit for an appropriation. There’s a very strong correlation between the topic clusters in releases and the percentage of releases claiming credit. (That’s at least in part because claiming credit is one of the topic clusters – you’re correlating, in part, the same factor. Interesting nevertheless.)

What’s most interesting is that this classification – either by type of politician or by their place on the credit spectrum – is tightly correlated to their voting behavior on a particular issue: votes on appropriations rules, or as Grimmer puts it, “How do legislators self-regulate the porkbarrel”. These votes aren’t partisan – the late Ted Kennedy voted with Richard Shelby on these sorts of votes, which suggests truth to the truism that there are three parties in Congress: Democrats, Republicans and Appropriators. In other words, the way a Senator communicates with constituents is strongly predictive of their legislative behavior, specifically on how they allocate funds.

I thought this was excellent stuff – I hadn’t seen someone take a large database of political communications and subject it to automated analysis, and I thought the demonstration of this “third party” was particularly compelling.

May 3, 2010

ROFLCon: From Weird to Wide

Filed under: Developing world,Geekery,ideas,Just for fun,xenophilia — Ethan @ 4:26 pm

An audio version of danah’s and my keynote is now available for download online. I recommend a background of lolcats – preferably multilingual ones – as you listen.

I gave a dozen public talks last month, and it’s possible that ROFLCon was the most intimidating of the bunch. I was asked by Tim Hwang – internet researcher, Berkman Center affiliate, and co-founder of The Awesome Foundation and of ROFLCon – to kick off the event by co-keynoting with (dear friend) danah boyd. danah actually works in the deep swamps of contemporary internet culture, so ROFLCon – a conference that takes both a loving and scholarly look at the phenomenon of internet memes – is close to home turf for her. I, on the other hand, tend to study things like the impact of cellphones on political organizing in the developing world, and wondered if there was any possible way to connect the sort of issues I work on with a conference that featured Mahir Cagri (of I Kiss You fame), the owner and videographer of Keyboard Cat and the author of Garfield Minus Garfield.

Turns out I was underestimating ROFLCon. Yes, there were panels where the main question seemed to be, “What’s it like to be a microcelebrity”… which may have included the panel danah and I moderated. And yes, there’s nothing to make you feel old and decrepit like walking into a panel where you don’t know a single one of the internet memes being celebrated. (No, I’d never heard of cornify. No, my world has not been substantially broadened by listening to their founder, wearing a unicorn mask, discuss vampires.) On the other hand, the panel on race – I can haz dream? – was one of the best conference panels I’ve ever attended. (If any network execs are reading this blog, let me just point out that a late night show based around Baratunde Thurston and Christian Lander would kill.) And many of the people at the conference seemed to be deeply engaged in the sorts of issues danah and I were talking about – Who creates internet culture? Whose voices are amplified and whose aren’t? What happens when marginal, weird cultures become mainstream?

Alex Leavitt did an excellent job of liveblogging our talks. I thought I’d post my notes and some of my slides as well – the full slide deck is online, though isn’t real useful without accompanying notes, which follow below.

It’s not easy being an academic at a conference like ROFLCon. The stars are the folks who’ve done something wonderful, weird, unforgettable, or so wonderfully weird it’s unforgettable. Those of us who are trying to make observations about the field feel a little like musicologists studying Bach – we can study his compositions exhaustively, but we’re acutely aware that we’re not going to write a mighty fugue. No matter how much I might study internet memes, I know I’m never going to accomplish something as majestic as Keyboard Cat… and I have to live with that truth every day of my life.

Unlike danah who can actually tell you something about internet culture, I study information in the developing world. Basically, I’m interested in the question of whether the internet, mobile phones and community radio can make people healthier, wealthier and more free.


If you work in this field for very long, you’ll end up realizing that the basic question behind development economics is “Why are some people rich and other people poor?” There are better and worse answers to this question. Some of the smartest answers focus on which parts of the world had animals and plants that were easily domesticated and which had endemic diseases. Other smart answers look at the ways in which colonialism held back development, or at the problems of bad governance and persistent conflict. Bad answers focus on the idea that some people are inherently, biologically smarter than others. This idea – “scientific racism” – surfaces throughout history, as the basis for eugenics and more recently in pseudo-scientific analyses of IQ scores.

If you’d like to understand just what a stinking heap of bullshit scientific racism theories are, I recommend spending some time in very poor nations. You’ll discover that many of the people you meet display extraordinary creativity as they navigate the challenges of everyday survival. And you’ll start learning about people like William Kamkwamba, whose near death from famine in Malawi didn’t prevent him from building a fiendishly ingenious power-generating windmill from an old bicycle and some recycled PVC pipe.

My time in the developing world suggests to me that intelligence, creativity and humor are evenly distributed throughout the world. People’s ability to express their intelligence, creativity and humor – and our ability to encounter said traits – are heavily geographically constrained, but the basic distribution is near constant.


All of which leads us to the question at hand today: Daddy, where do memes come from? I suspect Drew will be asking me this question any day now, due to Rachel’s and my egregious tendency to misuse Cafe Press and the fact that we gave him the middle name “Wynn” in part so we could title his blog “For the Wynn“. In answering these questions, I find that I’m usually referring to Randall Munroe’s brilliant Online Communities map, and to the fertile equatorial regions that extend from the Gulf of YouTube through the Ocean of Subculture. Within this region, there are areas whose soils – turned black with the charring of endless flamewars – are especially fertile for the cultivation of new memes. (sup, /b/?)


I’m interested in mapping memes in a different way. Here’s a quick and dirty map of internet memes extracted from Know Your Meme. Yes, the US and Japan dominate global memetics (or, at least, they do based on the site, which has its own – recognized, now being addressed – cultural biases). But there’s a huge number of memes coming from almost all corners of the globe.

In development economics, we pay special attention to the so-called BRIC countries – Brazil, Russia, India, China – who we expect to become increasingly important over the next few decades due to their large populations, natural resources and rates of economic growth. And so we shouldn’t be surprised to find distinctly regional memes emerging from each of these countries – I offer as a gallery of superheroes Brother Sharp from China, Golimar from India, Glazastik from Russia and the legion that is Tenso from Brazil. You may not know who these viral wonders are, but the people who live in these rapidly developing nations do.

Assume I’m right and that creativity has a near-constant distribution. Assume also that access to the internet continues its explosive spread. The inescapable conclusion is that the next wave of internet memes is going to come from the developing world.

It’s already happening – I just watched the first major Kenyan internet meme come to life. The Nairobi-based band called “Just a Band” released a video for a song called “Ha-He” off their new album. The video’s absurdly good – it’s shot by the guys in the band, and it introduces a new superhero: Makmende.

Actually, “Makmende Amerudi” means “Makmende has returned”… “Makmende” was what you called a kid in the neighborhood in Kenya in the 1990s who wanted to be Bruce Lee. I heard it and assumed that it was a sheng word – “sheng” is the blend of Swahili and English that’s Kenya’s unofficial national language – but it turns out that “Makmende” is what happens when Kenyans say “Go ahead, make my day”.

So Makmende kicks the ass of all comers in this video, gets the girl… who he promptly ignores, and spouts some incomprehensible but pithy aphorisms. This video went crazy in the Kenyan blogosphere – which is an extremely creative space – and we started seeing Makmende magazine covers, a 10,000 shilling note and lots of video remixes.

Above, we see a local television reporter come to a rapid and bad end when he has the misfortune of finding Makmende’s house… in sort of a Nairobi version of the Blair Witch project. And yes, Hitler’s upset about Makmende as well… But the best stuff actually has pretty low production values – it’s the website aggregating the sort of Makmende one-liners that shot across Twitter for a week or so after the video became popular. Sure, lots of the content here could have appeared on Chuck Norris Facts, but much of what’s there is indigenous to Kenya, and may not make sense if you’re not Kenyan.

Makmende’s so badass that he raises two philosophical questions for me. The first is, “Who gets to decide what’s a meme?”


Brilliant and funny lexicographer Erin McKean tells us that new words enter the language because people love them enough to use them. Lexicographers aren’t the bouncers at the language club; they’re anthropologists, discovering and documenting how language gets used. This is clearly how memes work as well – if people adopt it, love it and transform it, it’s a meme… and what anyone else says doesn’t matter.

But it sure as hell helps if it ends up in Wikipedia. Getting Makmende into Wikipedia was one of the first things Kenyans tried to do… and getting things into Wikipedia is a lot harder than it used to be. The article was deleted a couple of times before the authors realized that they needed to make the case that Makmende was Kenya’s first major internet meme, which made it notable. It hasn’t made it into Know Your Meme yet – it was summarily deadpooled when last submitted.

My hope is that all of us who are interested in internet culture can be anthropologists, not bouncers. Yes, not everything that gets posted online is worthy of our study and amplification… but it’s worth keeping in mind that we sometimes don’t understand the unfamiliar at first and would find it intensely cool if we took a bit more time to try and understand it.

My second question is: “Who gets to play along with an internet meme?” On the one hand, there’s not much preventing you from adding some Makmende facts to the mix. On the other hand, a lot of the funny stuff already posted doesn’t make much sense unless you know the language and the culture. “Makmende hangs his clothes on a Safaricom line” is only funny if you know that Safaricom is Kenya’s largest mobile phone company and doesn’t have any traditional phone lines.

My sense is that most memes don’t cross between cultures because we don’t understand the language, don’t understand the references or weren’t paying attention to that corner of the internet to start with. Those that do tend to be funny in a way that’s independent of language. The Back Dorm Boys are pretty funny, and it’s not hard to figure out how to join in the fun.

This question parallels one that internet scholars are spending a lot of time on: Do we have one internet or many? When a country like China heavily censors their internet and encourages the growth of a parallel internet, do we hit a point where it just doesn’t make sense to talk about “the internet” anymore? Perhaps we’ve got to talk about internets, and how they interconnect. And if 340 million Chinese internet users look mostly at Chinese sites, laugh at Chinese memes, maybe it makes sense that the Chinese internet will eventually run on its own protocols, which might make it easier to censor or control. Go far enough down this road and you can imagine diverging internets, each trying to best meet the needs of their users, and no longer having a world where we readily peer into each other’s internets.

slide 26.026

If we care about a single, united internet, it is imperative that we develop, discover and disseminate internet memes that we can laugh at together. When governments censor political sites on the internet, they alienate the small portion of their populations who already identify as politically dissident – and they can make the case that they’re protecting their citizens from terrorism or incitement to violence or pornography. But when they block our access to videos of cats flushing toilets, we see them for the heavy-handed bullies that they are. The cute cats serve as cover traffic for more serious political speech – so long as Chinese users want to laugh at our cat videos, we’re encouraging people to circumvent censorship and potentially encounter all sorts of stuff on YouTube.

The Chinese have developed cute cat technology. Even a cursory glance at Youku shows that the once apparently insurmountable cat gap has been thoroughly bridged. And not just simple cute cats – Youku features cats flushing toilets! And not just western style toilets – squat toilets as well! If we accept my assertion that it’s politically critical for us to LOL together, we need not just to study Chinese net memes – we need to develop memes we can LOL at across cultures.

When we cross cultural borders in internet memespace, we’re usually laughing at someone else. Engrish, funny though it is, is basically the act of laughing at someone for failing to speak your (absurdly complex and irregular) mother tongue. I’m deeply impressed with people like Mahir Çağrı who managed to turn the experience of being laughed at by the entire internet into laughing along with the joke. It takes an unusual personality to pull this off – I’m not sure that laughing at and inviting folks to laugh along is always the best way to go.

I’d rather take the example of Matt Harding, the video game developer who spent years travelling the world, dancing badly. After the success of his first video, Matt discovered that the piece of music he’d used – “Sweet Lullaby” by Deep Forest – had a problematic history. The very short version – the French musicians behind Deep Forest used a lullaby from the Solomon Islands to record their hit song, without seeking permission from the woman who sang the song and over the explicit objections of the musicologist who recorded it. Worse, they presented it in such a way that most listeners thought it came from central Africa, not from the south Pacific.

Matt could have dismissed this story as an ugly footnote to his adventures with internet fame. To his great credit, he didn’t. Instead, he went to Auki, a small town in the Solomon Islands, to interview a nephew of Afunakwa, the woman who’d recorded the original song. It was his way of apologizing for the complex past of the song, and his way of using the weirdness of internet fame to make his world – and all those of us who’ve watched the video – a little wider.

My conclusions?
– We can go from weird to wide, as Matt did, using the strange and quirky corners of the internet to prod us into curiosity
– It’s worth asking ourselves if we’re laughing at, or laughing with. And if we don’t like the answer, perhaps we need to change our behavior.
– Anthropologists are cooler than bouncers.
– If we don’t laugh at Chinese internet memes – the first step towards getting Chinese users to laugh at global memes – the censors win.
– “Erinaceous” is a totally awesome word.

Highlights of presenting the talk included:

– Co-presenting with danah, which encouraged significantly sillier behavior than I generally engage in when on stage. I’d like to believe that I would always be willing to crouch behind a podium wearing a fluffy red hat before delivering a keynote… but it’s just not true. Add danah to the mix and it suddenly is.

– Matt Harding jumping up when his name was mentioned and dancing in the audience. I’m thankful that he came on stage after the talk to introduce himself and apologize if I freaked him out by spontaneously hugging him. I just think he’s wicked cool and deserves recognition for using the internet to show us (one facet of) how wide and wonderful the world can be.

– Meeting Mahir, who turns out to be utterly lovely in person. Yes, he immediately started filming our meeting via Flip video and digital camera, and yes, he did invite me, my wife and infant son to visit him in Izmir… but I got the sense that it wasn’t in any way an act, just his particular version of friendliness. It felt more wonderful than weird.

– Talking with the guys from Know Your Meme, who are working really hard to ensure that their site is global and inclusive, and who are trying to take some pages from the Global Voices playbook, recruiting local editors who understand memes in their corners of the world. I’ve got high hopes of a Makmende article in development soon, and hope perhaps for a GV/KYM alliance where we source and research global memes.

In other words, I had a blast. Thanks to everyone involved and hope you had as much fun as I did.
