The Media Lab’s conversation series today features Pakistani social entrepreneur Khalida Brohi, founder and executive director of the Sughar Empowerment Society. She’s a director’s fellow at the Media Lab and will be in residence at the Center for Civic Media for the next year.
Khalida offers “building bridges between the indigenous and modern world” as the theme of her talk, and of her life over the last few years. She recently attended Google’s Zeitgeist conference and was totally overwhelmed by the experience of being fitted for Google Glass by Eric Schmidt. She realized that, twenty days before, she had been sitting in her rural village in the mountains of Pakistan, with her grandfather and uncle. How does one reconcile these two worlds?
Khalida’s mother was married at age nine, and left her home to live with Khalida’s father as a small girl. Before her marriage, her mother had never seen a school building. She lived her entire life in a single village. She explains that it wasn’t just that her mother was forced into a marriage – her father was as well, when his older brothers refused to be married to Khalida’s mother.
By being forced into a marriage, Khalida’s father felt like his manhood was threatened and left the village. By leaving, he was able to go to a boarding school and then to university. At university, he experienced something remarkable: girls who could read! Girls who could talk about politics! Her father considered taking another wife, marrying a college girl. But he realized he could educate the girl who became his wife at age nine, who was home crying for her mother. So “he held her hand” and taught her to read and to write.
“My mother thought the world ended at the borders of the village – it doesn’t end there!” Through reading, her mother ended up with a much broader view of the world than most women in tribal areas. Ultimately, her mother demanded that her family leave the village so the children could be educated, and her father had to obey, because he was in love. Other villagers protested: “What kind of a man are you, listening to your wife?”
Khalida was born in a city and lived in the city until she was five years old. But her father worried that his children were being spoiled. “I saw your brother wearing his shoes, I saw you with books in your hands – I’m spoiling you kids.” Her father remembered the hardships of his past: sharing a bus with goats that defecated on him as he travelled to school, completing his homework by kerosene lamps. Her father moved the family home, but helped Khalida split her time between the village and Karachi.
Living in the village, Khalida revelled in the beauty of cultures and traditions. But she also found herself wrestling with the uncomfortable aspects of her culture: child marriage, exchange marriages, vanni and honor killings. “Apart from the beauty, there are other things in the culture that are very dark.” Khalida understood the darkness on a very personal level when she returned from pre-medical classes to the village at age 16, and discovered that one of her friends had been killed in the name of honor, because she had wanted to marry someone she liked.
Khalida realized how different her experience was from that of other women in her village, and realized that part of her obligations to her family were to come home to her community and address these issues. She launched a campaign: the WAKE UP campaign against Honor Killings. The major manifestation of the campaign was a Facebook group, which she managed from the single PC shared by the ten kids in her family’s house. “If I washed dishes and made dinner, I could have 10 minutes on the computer.” The campaign she led called attention to the government policies that made honor killings and exchange marriages possible. (Author’s note: I’m quoting Khalida directly as much as possible, and linking to resources I’m able to find online to provide context for practices like honor killings. These may or may not be the links Khalida would choose to explain these practices, and I will hope to work with her to link to the resources she thinks are most appropriate in the future.)
When Khalida was 18, her campaign gained international attention, in part through support from Amnesty International. She received calls from local and international press and found herself flying to cities to give talks about her work and the movement. But the media exposure had an unintended effect: it made it impossible for her to return to her village, where people in her community accused her of being un-Islamic and banned her. Living in Karachi, unable to return to Balochistan, she realized the errors she’d made:
“We were standing against values that were really meaningful to people – we didn’t listen to people’s solutions because we thought our solutions were so important. And we weren’t including the women whose fight we were fighting.” This second point was critical to Khalida – she tells us, “I’d see women with the same scars on their faces each time I came back to the village. We were trying to change policies that could take decades to change, but we needed something that was helping women instantly.”
The new plan Khalida and her team came up with started with a surprising step: “the apology project”. She met with tribal leaders and apologized for her behavior, for standing against tribal values. (Speaking after the talk, Khalida told me that this was one of the hardest things she’d ever done: sitting at a tribal council next to a man who had killed her friend and apologizing to him, and looking for a way to genuinely forgive him.) She and her team agreed to a project to benefit the community, to promote the music, language and embroidery of the community.
This proposal was quickly accepted, and Khalida paints us a picture of tribal leaders, sitting under trees, recording each other singing to retain the language and culture of the community. The projects to document music through CDs and local stories in books were successful, but the really subversive project was the embroidery project.
To promote local embroidery, Khalida and her team built a center inside the village. Women from every house came to the center for three hours each day. In Balochistan, women are generally kept in seclusion within their houses, so creating a women’s space was a radical step. And Khalida went further, using the assembly of women to teach not only embroidery, but life skills, enterprise development, what Islam says about women’s rights and how they could advocate for rights within their marriages. “Within two weeks, women who were not allowed to laugh in their houses were laughing, and laughing too much!”
The sudden change revealed too much about what was happening in the Center. Women who had never been alone together, allowed to socialize without their husbands, were gossiping, talking about their husbands, talking about women’s rights, learning to read and write. It was deeply threatening to the husbands, and three men stopped their wives from coming to the Center.
As Khalida worried that the whole thing would crash, she decided to add a market element to her work. If women participated in the training for six months, they would receive small loans to start their own embroidery businesses. Given the extreme poverty in the region, the added income was hard for husbands to refuse. To ensure a market for the embroidery, Khalida began researching the Pakistani fashion industry, and realized she could create a tribal women’s fashion brand. With press releases to Al Jazeera, BBC and others, they launched “Sughar” as a tribal fashion brand, bringing women from the villages into fashion shows in Pakistan’s biggest cities. And because fashion brought money into the villages, husbands tolerated their wives’ involvement in fashion.
The project has now scaled up to 23 villages. In each center, roughly 30 women come to learn from 3 trainers. TripAdvisor has just signed on to sponsor 2 new centers, and 800 women are currently involved with the program. But Khalida’s ambitions are much broader – she wants to reach a million women in 10 years, and she sees her work at MIT as part of that ambition.
She closes her talk by showing an image of her MIT ID. “The day I received it, I spent two hours admiring how shiny the ID was.” She sent a picture of the ID to her father, and received an avalanche of text messages in response: 16 text messages about his journeys to school, in the dust, clinging to the outside of buses, walking miles at a time.
“For a second, I thought I didn’t deserve to be here. But then I thought again. I think I deserve it. I think my dad deserves it.”
When the lab tries to work on social impact projects, we have another problem, Joi notes – we often don’t understand the needs of these communities. A project to help Detroit, one of Joi’s hometowns, drew strong pushback from Detroiters, who argued that the solutions Lab students and faculty were proposing were locally inappropriate. Working with people like Khalida broadens our understanding of how different communities work and encourages us to think differently about how we work with culturally distant communities, moving towards models of closer collaboration.
We’ll be posting the video from Khalida and Joi’s talk soon and I will update this post with a link to the video, including the question and answer session with the audience that followed.
Kate Darling (@grok_) offers a talk to the Berkman Center that’s so popular it needs to be moved from Berkman to Wasserstein Hall, where almost a hundred people come for lunch and her thoughts on robot ethics. Kate is a legal scholar completing her PhD during a Berkman fellowship (and a residency in my lab at the Media Lab), but tells us that these ideas are pretty disconnected from the doctoral dissertation she’s about to defend, which is on copyright. She’s often asked why she’s chosen to work on these issues – the simple answer is “Nobody else is.” There’s a small handful of “experts” working on robots and ethics, and she feels an obligation to step up to the plate and become genuinely knowledgeable about these issues.
Robots are moving into transportation, education, care for the elderly and medicine, beyond manufacturing where they have been for years. She is concerned that our law may not yet have a space for the issues raised by the spread of robots, and hopes that we can participate in the construction of a space of robotics law, following on the healthy and creative space of cyberlaw.
She begins with a general overview of robot ethics. One key area is safety and liability: who is responsible for dysfunction and damage in complex systems, where there’s a long chain from coder to the execution of the system? It sounds fanciful, but people are now trying to figure out how to program ethics into these systems, particularly around autonomous weapons like drones.
Privacy is an area that creates visceral responses in the robotics space – Kate suggests that talking about robots and privacy may be a way to open some of the discussions about the hard issues raised by NSA surveillance. But Kate’s current focus is on social robots, and specifically on the tendency to project human qualities onto robots. She references Sherry Turkle’s observation that people bond with objects in a surprisingly strong way. There are perhaps three reasons for this: physicality (we bond more strongly with the real world than with the screen), perceived autonomous action (we see the Roomba moving around on its own, and we tend to name it and feel bad when it gets stuck in the curtains), and anthropomorphism (robots designed to mimic expressions we associate with states of mind and feelings).
Humans bond with robots in surprising ways – soldiers honor robots with medals, ask that damaged robots be repaired rather than replaced, and hold funerals when they are destroyed. She tells us about a mine-defusing robot that looked like a stick insect. It lost one of its six legs each time it exploded a mine. The colonel in charge of the exercise called it off on the grounds that a robot reduced to two or three legs was “inhumane”.
Kate shows her Pleo dinosaur, named for Yochai Benkler. The robot featured in an experiment she ran at a workshop with legal scholars, where she encouraged participants to bond with these robots, then asked them to destroy one. Participants were horrified, and it took the threat of destroying all the robots to get the group to destroy one of the six. She observes that we respond to social cues from lifelike machines, even if we know they are not real.
Kate encourages workshop participants to kill a robot. Murderer.
So why does this matter? People are going to keep creating these sorts of robots, if only because toy companies like to make money. And if we have a deep tendency to bond with these robots, we may need to discuss the idea of instituting protections for social robots. We protect animals, Kate explains. We argue that it’s because they feel pain and have rights. But it’s also because we bond with them and we see an attack on an animal as an attack on the people who are bonded with and value that animal.
Kate notes that we have complicated social rules for how we treat animals. We eat cows, but not horses, because they’re horses. But Europeans (though not the British) are happy to eat horses. Perhaps the uncertainty about rights for robots suggests a similar cultural challenge: are there cultures that care for robots and cultures that don’t? This may change, Kate argues, as we have more lifelike robots in our lives. Parts of society – children, the elderly – may have difficulty distinguishing between live and lifelike. In cases where people have bonded with lifelike robots, are we comfortable with people abusing these robots? Is abusing a robot someone cares about, and may not be able to distinguish from a living creature, a form of abuse if it hurts the human emotionally?
She notes that Kant offered a reason to be concerned about animal abuse: “We can judge the heart of a man by his treatment of animals, for he who is cruel to animals becomes hard also in his dealings with men.” Some states conduct child abuse investigations when there’s been a report of animal abuse in a household, because they worry that the issues are correlated. Is robot abuse something we should consider as evidence of more serious underlying social or psychological issues?
Kate closes by suggesting that we need more experimental work on how human/robot bonding takes place. She suggests that this work is almost necessarily interdisciplinary, bringing together legal scholars, ethicists and roboticists. And she hopes that Cambridge, which brings these fields together in physical space, could be a place where these conversations take place.
Jessa Lingel of MSR asks whether an argument for protecting robots might extend to labor protections for robots. “I’m not sure I buy your arguments, but if so, perhaps we should also unionize robots?” Kate argues that we should grant rights according to needs and that there’s no evidence that robots mind working long hours. Jessa suggests that the argument for labor rights might parallel the Kantian argument – if we want people to treat laborers well, maybe we need to treat our laboring robots well.
There’s a long thread on intellectual property and robots. One question asks whether we could demand open source robots, allowing local control rather than centralized control. Another asks about the implications of self-driving cars and the ability to review algorithms for responsibility in the case of an accident. I ask a pointed question: if the Pentagon begins advertising ethical drones that check to see whether there’s a child nearby before bombing a suspected terrorist, will we be able to review the ethics code? Kate notes that a lot of her answers to these questions are, “Yes, that’s a good question – someone should be working on this!”
Andy Sellars of Digital Media Law Project asks Kate to confront her roboexceptionalism. He admits that he can’t make the leap from the Pleo to his dog, and can’t see any technology on the horizon that would really blur that line for him. Her Pleo experiment could be replicated with stuffed animals – would we worry as much about people torturing stuffed animals? Kate cites Sherry Turkle, who has found evidence that children do distinguish between robots and stuffed animals. More personally, she tells a story about a woman who told her, “I wouldn’t have any problem torturing a robot – does that make me a bad person?” Kate’s answer, for better or for worse, is yes.
Tim Davies of the Berkman Center offers the idea that Kate’s argument for robot ethics is a form of virtue ethics: ethics is the character we have as people. Law generally operates in the space of consequentialist ethics: it’s illegal because of the consequences of behavior, not its reflection on your character. He wonders whether we can move from the language of anthropomorphism around robots and talk instead about simulation. There are legal cases where simulation of harm is something we consider to be problematic – for instance, simulated images of child abuse.
Boris Anthony of Nokia and Ivan Sigal of Global Voices (okay, let’s be honest – they’re both from Global Voices) both ask about cultural conceptions of robots through science fiction – Boris references Japanese anime and suggests that Japanese notions of privacy may be very different from American notions; Ivan references Philip K. Dick. Kate notes that, in scifi, lots of questions focus on the inherent qualities of robots. “Almost Human”, a near-future show that posits robots that have near-human emotions, is interesting, but not very practical – we’re not going to have those robots any time soon. Issues of projection are going to happen far sooner. In the story that becomes Blade Runner, the hero falls in love with a robot who can’t love him back, and he loves her despite that reality – that’s a narrative that had to be blurred out in the Hollywood version because it’s a very complex question for a mainstream movie.
Chris Peterson opens his remarks by noting that he spent most of his teenage years blowing up Furbies in the woods. “Was I a sociopath, a teenager in New Hampshire, or are the two indistinguishable?” Kate, whose Center for Civic Media portrait features her holding a flayed Furby shell, absolves Chris: “Furbies are fucking annoying.” Chris’s actual question focuses on the historical example of European courts putting inanimate objects on trial, citing a case where a Brazilian colonial court put a termite colony on trial for destroying a church (and the judge awarded wood to the termites, who had been wronged in the construction). Should emergent, autonomous actors that have potentials not intended by designers have legal responsibilities? “Should the high frequency trading algorithm that causes harm be put to death? Do we distinguish between authors and their systems in the legal system?” Kate suggests that we may have a social contract that allows the vengeance of destroying a robot that we think has wronged people, but notes that we also try to protect very young people from legal consequences.
Paul Salopek is a journalist, a storyteller and an explorer. As a foreign correspondent, he covered stories in fifty countries and won two Pulitzer prizes, one for his reporting on conflicts on the African continent, one for explanatory reporting on the Human Genome Diversity Project, which seeks to explain the history and the diversity of the human species.
Paul’s reporting has often placed him in difficult and dangerous circumstances. In 2006, while reporting on the unfolding crisis in Darfur, he was held for a month in a prison cell in El Fasher, Sudan, accused of espionage and writing “false news”. (He was released thanks to interventions by The Chicago Tribune, National Geographic, New Mexico governor Bill Richardson and members of Congress.)
While Paul has done extraordinary work giving readers access to the challenges, struggles and triumphs of people around the globe, he often felt that his journalism was proceeding at too fast a pace to understand what life was really like in the places he visited. Paul first explored the idea of “slow journalism”, journalism at a walking pace, in a 2012 article on famine — walking with nomads in Kenya’s Turkana Basin to understand the experience of deprivation in the Horn of Africa.
This year, Paul started walking one of the greatest stories imaginable: the spread of the human race from the Rift Valley of east Africa across the globe.
Paul will walk from Ethiopia to Patagonia, his journey paralleling human migration from Africa, through the Middle East, through Asia, across a land bridge to North America and eventually to South America. This journey took humankind about 45,000 years. Paul’s walk will take seven years. He started this January, and November finds him on the shores of the Red Sea, walking through Saudi Arabia to the Jordanian border.
In the spring of 2012, Paul came to Cambridge, MA for a fellowship at the Nieman Foundation at Harvard to prepare for his trip. He became a regular at MIT’s Center for Civic Media, sitting in on my class on News and Participatory Media and spending time with me and my students to brainstorm ways he could share his journey with an internet audience without losing the meditative, contemplative nature of the trip.
Our team at Center for Civic Media has become part of Paul’s global pit crew. Nathan Matias helped Paul debug his power problems, matching Paul with Richard Smith, a power expert with the One Laptop Per Child project. Richard discovered that Paul was using a power inverter suited for use with a car battery, not Paul’s lightweight solar panels. A “solar camel” named Fares now powers Paul’s laptop and cellphone, with only the minor inconvenience that the shiny panels sometimes scare other camels at oases.
Most of our consulting has been more mundane, helping Paul and his team think about how they could use social media. In following our friend’s journey, Nathan, Matt and I have become Salopek superfans, awaiting his dispatches and exploring the leads, references and ideas Paul shares.
Tomorrow, Paul’s journey is on the cover of National Geographic. To celebrate that launch, Center for Civic Media is guest curating the Out of Eden Twitter feed for the next two weeks. We will share some of the highlights of the journey Paul has taken thus far, previews of what is to come, and details about what Paul is reading, thinking about and referencing. While we are in touch with Paul periodically, we are mostly exploring ideas he has put forward in his dispatches, tracking down references and following links, engaging in a digital exploration in parallel to Paul’s physical travels.
We see Paul’s trip as a vast, spreading tree – Paul’s steps form the trunk, and we are exploring some of the more distant spreading branches. Inspired by Maria Popova’s Brainpickings, Jorn Barger’s Robot Wisdom, and the rambling curiosity of early visions for the web, Nathan, Matt and I will be taking turns monitoring the Out of Eden feed and exploring the topics we’ve found most interesting in Paul’s journey. Please feel free to ask us questions, suggest topics we should explore, and point us to local voices we should feature. We are watching Paul’s journey with admiration and fascination and look forward to walking a few miles with you.
MIT’s Comparative Media Studies hosts a weekly colloquium, and this week’s featured speaker is sociologist and movement theorist, Zeynep Tufekci. Zeynep describes herself as a scholar of social movements and of surveillance, which means this has been an interesting and challenging year. The revelations about the NSA hit the same week as the Gezi protests in Turkey. She explains that it’s hard to do conceptual work in this space because events are changing every few months, making it very hard to extrapolate from years of experience.
Not until protests reached Gezi, Zeynep tells us, did she feel comfortable putting a name on the phenomenon she’d been seeing in her research on the Arab Spring, through Occupy and in the Indignados movement. To explain her theory, she opens her talk with a picture of the Hillary Step on Mount Everest. The picture, taken on a day when four people died on the mountain, shows the profound crowding that has made Everest so dangerous, as climbers must wait for others to finish before moving on.
Because of technology and Sherpas, more people who aren’t great climbers come to Everest. Full service trips (at a $65,000 price point) can get you to base camp and much of the way up the mountain, but they cannot prepare people to climb the peak. There was an uptick in deaths in the 1980s, once the base camps became developed and more people could get to the mountain.
People have proposed putting a ladder at the Hillary Step, hoping to make things easier. But the issue is not the ladders – it’s the fact that it’s very, very hard to climb at altitude. The mountaineering community has suggested something else: require people to climb seven other high peaks before they attempt Everest.
This is an analogy for internet-enabled activism. In talking about internet and collective action, we tend to talk about ease of coordination and community. Zeynep worries that we’re getting to base camp without developing altitude awareness – in other words, some of the internet’s benefits have significant handicaps as side effects. The result: we see more movements, but they may not have impact or staying power because they come to public attention much earlier in their lives.
She suggests we stop looking so much at outputs of social media fueled protests and start looking instead at their role in capacity building. She recommends that we stop looking at offline/online distinctions and look more at signaling approaches to protests. This requires a game-theoretic framework, and consideration of movement capacities and strategic tensions.
With that as backdrop, she takes us to Gezi Park and Taksim Square, which she suggests we see as analogous to Chelsea or Soho, a neighborhood where people go to party. It’s one of the very rare greenspaces in that part of the city. It was to be replaced with a replica of an Ottoman barracks, which was going to be used as a high-end shopping mall, of which Istanbul already has many.
Neighbors of the park held a small protest, probably 30-40 people. But that small protest was met with pepper gas, a clear overreaction to a small, peaceful gathering. People got upset about the police response and saw it as a personal decision by Erdogan, who seemed to be pushing the development over local wishes and over the wishes of the people of Istanbul.
People took to the streets and to Twitter. Why Twitter? While CNN International was showing protests in Taksim, CNN Turkey was showing a documentary about penguins. Zeynep found this deeply surprising – “We’re not China!” But there are different kinds of censorship, and this was censorship by media conglomerates, which are controlled by people who want government contracts. To curry favor with the government, media tends to self-censor… and if they don’t, they often get phone calls from the government. So Turkey isn’t China, but it’s a bit more like Russia, though with open elections and a more open public sphere. The backdrop for Gezi includes an 11-year single party reign, a polarized nation, an ineffective opposition and an electoral system that makes it hard to start new parties.
These protests in the middle of the city showed the depth of media corruption in Turkey, because social media documented the clashes with the police. Outrage over the police action and media inaction turned into a long-term occupation of Gezi Park. So, Zeynep tells us, she packed up her gear: a helmet, a gas mask, sunscreen, a recorder and a digital camera, all air-gapped from the internet.
Zeynep describes the encampment as Smurf village, a happy and friendly version of “Woodstock meets the Paris commune”, but threatened by Gargamel, the police showing up periodically. Roma ladies who normally sell flowers to tourists were selling Anonymous masks, ski goggles and spray paint. (“Who says the developing world needs help with entrepreneurship?” she asks.)
She walks us through the iconography – #diren (“resist”, or “occupy”), penguins (a reference to CNN showing penguin documentaries rather than clashes.) While the icons imply a common movement, there wasn’t one. She shows us a picture of a Kurdish activist, a far-right activist and an opposition party activist in the same frame, and another picture of macho soccer fans meeting with a local feminist group. Soccer fans traditionally call referees “faggots” in their chants, and the soccer fans protesting wanted to call the police faggots… but got confronted by local gay and lesbian activists who said, “No, we’re the faggots – we’re the guys protesting!” The two groups had a meeting, and the soccer fans ended up chanting “Sexist Erdogan”, newly aware of the members in their community. Zeynep takes pains to explain the heterogeneity of the crowd: a Kurdish activist and a gay rights activist talking about why they hadn’t interacted before.
For all the positivity that came from these protests, there were real risks – people went to sleep after writing their blood types on their arms. Serious injuries happened every day, from tear gas canisters and police confrontations.
What did the internet do? It broke media censorship, created a counternarrative, and allowed coordination. To tease CNN, people photoshopped penguins into protest footage, urging CNN Turkey to come to the protests. Humor was a major weapon, drawing attention to the persistent censorship. Zeynep makes the point that it is very difficult to censor in a social media age, pointing to the differences between the Gafsa and Sidi Bouzid protests in Tunisia.
Twitter was critical for the Gezi protests, not just for generating a counternarrative, but for protest coordination. For the most part, the internet worked, and local businesses turned on wifi to make it accessible to protesters. Activists called friends who tweeted on their behalf. Erdogan wasn’t going to turn off the internet, Zeynep tells us, for fear he’d be seen as an autocrat.
Despite all this ad hoc coordination, there was no real centralized leadership, and very little delegation of authority. It was extremely unclear what the demands were beyond “Don’t raze Gezi Park.” Because there was no need to deal with thorny questions of representation and delegation to coordinate the protests, the movement did not build a strong leadership culture.
The Gezi protests were brutally dispersed, at which point, protest conversations moved to neighborhood forums, which were also dispersed. While popular, these protests haven’t been able to create structures that engage the government in the long term.
Despite the successes of the protests, Zeynep reminds us that Gezi and the open internet never overwhelmed the state’s capacity to suppress the protests. They simply overwhelmed the state’s capacity to suppress without unwanted side effects: embarrassment, loss of tourism revenues, loss of prestige, loss of being seen as a modern civic space.
To understand these protests, Zeynep turns to Amartya Sen and capacity building, looking at capacities, not traditional outputs, as the benefits of development. The internet gives us some new capacities, but may undermine others: we end up at base camp very easily, but we don’t know how to negotiate the Hillary Step. We can carry out the spectacular street protest, but we can’t build a larger movement to topple or challenge a government.
Protests are very good at grabbing attention and putting forth counternarratives. They create bonding between diverse groups. They also signal capacity, but it’s a different capacity than it might have been fifteen years ago. Zeynep tells us that this is not a “cheap talk” argument – protesting isn’t too easy – it’s just that a protest isn’t going to topple the government. This isn’t a slacktivism argument either – it’s an argument about capacities. The internet seems to be very good at reaching a spectacular local optimum – a street protest – without forcing deeper capacity development.
In the past, gaining attention meant gaining elite dissent and buy-in. Now, gaining attention may also carry a cost: if you haven't achieved elite buy-in, you may gain polarization instead. And gaining attention on your own terms can mean not gaining the dominant narrative.
Digitally enabled protest allows far more social interaction among participants. That said, the internet is a homophily machine, and joining a movement can be a step into a homophilous group. Movements like the Tea Party thrive in these environments.
Zeynep shows a slide of a gazelle stotting to make her last point. Jumping in the air isn't itself a way to avoid predators – it's a way to signal that you're really fast and would be hard to catch. But animals that can't actually evade predators can also jump. Zeynep warns us that it would have been a mistake to ignore the March on Washington, a protest that signaled capacity that might have ousted a President – but Gezi was not that sort of protest.
She urges us to consider “network internalities”, development of ties within networks that would allow social networks to become effective actors. Movements get stuck at no, she argues, because they’ve never needed to develop a capacity for representation, and can only coalesce around saying no, not building an affirmative agenda.
I’m at Code for America’s 2013 summit in San Francisco today, an impressive gathering put together by an extremely impressive civic innovation organization. I’m one of the advisors to Code for All, a sister project to Code for America led by Catherine Bracy, an old friend from the Berkman Center, and I was able to meet the first Code for All partners from Jamaica, Germany and Mexico at CfA’s amazing headquarters yesterday.
Code for America has done something pretty astounding. They’ve found a way to bring geeks into local governments to build innovative new projects in a way that’s fiscally sustainable. They’ve got support from the governments who host these geeks and from the central players in the US tech economy, and they’ve emerged as a central organizing node for the government innovation community.
Jen Pahlka, the founder of Code for America, opens her remarks with the classic Margaret Mead “Never doubt that a small group of people can change the world” quote, and admits that she never got her degree in anthropology because the classes were too early in the morning. She notes that many people working on civic change feel like they are a small group, though we’re able to come together into a movement today. She hopes that this isn’t a movement of sameness, but of diversity, which sometimes creates conflict and chaos. When we come together, we get new applications and APIs, but more importantly, we get a community and a common mission.
The common beliefs of this group include the idea that government can work of the people, by the people and for the people, even in the 21st century. We have in common the idea that we can do things better together. Code for America welcomes anyone who holds these values and – here she emphasizes her words very carefully – is doing something. Code for America is a reaction to Tim O’Reilly’s injunction to the tech industry to “work on stuff that matters”. CfA, she tells us, works on the stuff that matters the most.
Jen is supposed to be working as deputy CTO under Todd Park in the White House on a yearlong break from the organization… though she’s on furlough at the moment. She explains her decision to move into government for a year by describing how inspired she is by people working in government. “In order to honor all of you – all the public servants in government and the fellows who work with them – I felt like I could not pass up this experience.”
Answering the inevitable question: “How’s it going in DC?”, she answers that it’s both deeply rewarding and the hardest thing she’s ever done, including starting Code for America. She offers warm thanks to Bob Sofman and Abhi Nemani who’ve been leading the organization during her year off.
Clay Shirky starts his talk at the Code for America summit with some internet history:
Larry Sanger is an epistemologist, hired into one of the few epistemology jobs, working on Nupedia, an effort to build a carefully fact-checked new encyclopedia in collaboration with experts. Nine months into Nupedia, they’ve created about a dozen articles. Sanger realizes this isn’t working, goes to Jimmy Wales, the guy who hired him, and suggests using Ward Cunningham’s wiki software. Wikipedia is born and the rest is history – in weeks, it outpaces Nupedia, and Nupedia rapidly shuts down.
Patrick McConlogue, a New York City entrepreneur who works at Kickass Capital, caught sight of a homeless man on the streets of New York and proposed teaching him to program as a way of addressing the problem of “the unjustly homeless”. McConlogue never bothered to learn the homeless man’s name, and the details of the story led to ferocious online criticism of McConlogue’s plans to teach a homeless man to program. In the criticism of McConlogue, Shirky was struck by the idea that tech startups encourage thinking that doesn’t consider limitations and constraints, which might be appropriate for the tech industry but doesn’t work well in the social change space.
This sounded wrong to Shirky, who started re-reading the comments through this lens, looking both at the criticisms of McConlogue’s idea and the voluminous criticism of Leo, the homeless guy, for being homeless. Matt Yglesias was similarly skeptical, but looked at possible solutions: how do we address homelessness, which begins with looking for ways to create affordable housing. Clay draws a distinction between this sort of helpful criticism – which was very harsh to McConlogue’s approach, but ultimately helpful – and corrosive criticism, which doesn’t make you smarter but just tries to get you to stop looking at the problem.
Clay notes that he’s lived through two sorting out times: the question of whether the web would be important, and questions of whether social media would spread. In these periods of sorting out, technology looks like a solution in search of a problem, because at that point it is. Over time, we find answers to the question – will it work? will it scale – and it ultimately does. Clay suggests that we’re now at that point with civic media. We need to listen to the helpful critics, and we need to stop listening to the corrosive ones so we can keep moving forward.
“If you want to feel like a genius, go to a place where people are doing something new and predict that they’ll fail. You’ll almost always get it right. It’s a cheap high.” There’s a great deal of space between “nothing will work” and “almost nothing will work”. The easiest problems to take on, Clay tells us, are pure technical problems where you just need information. It’s not an accident that applications to report potholes are the great success story in this space – there’s no pothole lobby. Potholes are projects and they have solutions.
One step up from technical problems are managerial problems. In starting a bike sharing program in New York, the organizers posted a map and asked people to request bike stations. The resulting map, where everyone requested a station outside their homes, was a rhetorical document that helped build support for the program. Managerial problems don’t just solve technical problems – they have to do with building support and constituencies for solutions. And then there are political problems.
It is not possible to imagine a city without prostitutes, Clay tells us. People don’t agree what the goal is when they address prostitution. Some people want sex work to stop and some people want it to be a better job. At the political level, you’re not dealing with problems – you’re dealing with dilemmas and you only have tradeoffs, not solutions.
When people want to distract you, Clay tells us, they tell you the problem you’re working on is not the real problem. “Don’t work on potholes – work on traffic flow citywide.” Work at that scale and you’ll get criticized for not working on something concrete and achievable because you can always find a way to criticize a project’s scale.
The possibility of learning as you go is the potential of the people in this room, Clay tells us. We can’t find major solutions by planning better or by starting an endless series of unconferences and hackathons: hackathons don’t produce running code so much as better understanding of problems and better social capital. In the internet community, we’ve all thought through Nupedia, and we think we understand how it ground to a halt through bureaucracy. But many of us fail to understand that the people who made Nupedia fail were the people who made Wikipedia succeed – the same folks who’d been building Nupedia. Wikipedia was a plan B.
When you build a prototype, you’re building up your understanding of the process. When you build a prototype, you’re not solving the client’s problem – you’re often showing the client that they don’t understand the problem, as people often can’t tell you what they need until you show them something that is concretely wrong. If we can commit to working on problems even after discovering at first that we’re wrong, we can take on the most challenging problems that face us.