Susan Benesch is one of the leading thinkers on countering hate speech online. She’s a fellow at the Berkman Center this year, and I’m terribly sorry to be missing her talk at Berkman this afternoon. (Instead, I’m watching from home so I can be primary caretaker for my son for a couple of weeks while Rachel gets to travel.) She teaches international human rights at American University and is the founder of the Dangerous Speech Project, which tries to understand the spread of speech that incites people to violence.
Susan’s talk is available online and I’ve tried to blog it remotely while cursing my inability to teleport across the state. The talk is wonderfully titled “Troll Wrastling for Beginners: Data-Driven Methods to Decrease Hatred Online”. Unlike most conventional online wisdom, Benesch believes you should engage with the trolls, in part because it may be the most successful path to countering dangerous speech. The approaches states have taken to dangerous speech – punishment and censorship – don’t work very well, and some evidence suggests that they work even worse online than offline. She offers the case of Anwar al-Awlaki: despite being punished via summary execution from a US drone strike, his online speeches continue to be influential and may have influenced the Boston Marathon bombers. Censoring such speech doesn’t work well in an online environment, as it’s likely to move to other platforms.
So what about “don’t feed the trolls”? Benesch points out that there are several implicit assumptions in that advice. We assume that if we ignore a troll, they will stop (which, in turn, tends to assume behavior on only a single platform). There’s an assumption that online hate is created by trolls; in the few experiments that look at racist and sexist speech, at least half is produced by non-trolls. We tend to assume that all trolls have the same motivations and that they will respond to the same controls. And finally, we assume that the trolls are the problem – we also need to consider effects on the audience.
(Benesch doesn’t define trolls until pressed by the audience and points out that it’s a term she uses with tongue in cheek, most of the time – she acknowledges that different trolls have different motivations. Her goal is to move away from considering trolls as the problem and towards understanding dangerous speech as a broader phenomenon.)
One of the benefits of online speech environments, Benesch posits, is that we can examine the effect of speech on people. In offline environments, it’s very hard to measure what reactions dangerous speech leads to – in online environments, it may be possible to track both responses and effects.
Benesch’s suggestion is that we should approach dangerous speech through counterspeech, in effect, talking back to the trolls and to others. In explaining her logic, she notes that the internet doesn’t create hate speech – in some cases, it may disinhibit us from speaking. But more often, the internet creates an environment where we are aware of speech we otherwise wouldn’t hear. Most of us wouldn’t have been aware of what speech is shared at a KKK meeting, and many of us wouldn’t have heard the sexist jokes that were told in locker rooms. Now speech is crossing between formerly closed communities.
This is a new feature of human life, Benesch suggests, and while it causes a great deal of pain, it’s also an opportunity. We can “toss speech back across those boundaries to see what effect it has.” For the most part, we don’t know what will happen when we expose speech this way, and it’s possible the effects could be very positive. She asks us to consider norm formation in teenagers – most 16 year olds, she argues, have historically developed opinions from a small, homogenous community around them. That’s no longer the case, and that’s a positive opportunity for teens to develop a broader and more nuanced worldview.
Believing in counterspeech means having faith that it’s possible to shift norms in speech communities. Benesch asks “What is the likelihood an American politician will use the N-word in public?” While there’s a constitutionally protected right to use such an offensive term, the probability of a speaker using the term is near zero. Yet, she argues, 60 years ago there were places in the US where you likely could not have been elected without using that word. “People’s behavior shifts dramatically in response to community norms,” she suggests, and as many as 80% of people are likely to follow the norms of speech consistent with a space and a situation, even trolls.
One of Benesch’s case studies for counterspeech comes from Kenya, where dangerous speech was a key component of the violence in the wake of 2007’s disputed election. With over a thousand killed and hundreds of thousands displaced, the 2007-8 unrest was one of the ugliest chapters in the nation’s history, and as Kenya prepared for elections in 2013, many Kenyans were worried about inflammatory and dangerous speech online.
Benesch worked with Kenyan data scientists at the iHub and the team at Ushahidi to build Umati (from the Swahili word for crowd), which collected reports of online hate speech. What they found was a wave of inflammatory speech from Facebook, and astonishingly little dangerous speech on Twitter. This disparity is not well explained by platform usage – Twitter is extremely popular in Kenya. Instead, it’s explained by counterspeech.
When inflammatory speech was posted on Twitter, prominent Kenyan Twitter users (often members of the #KOT, Kenyans on Twitter, community) responded by criticizing the poster, often invoking the need to keep discourse in the country civil and productive. This counterspeech was surprisingly successful – Benesch tells the story of a Twitter user who posted that he would be okay with the disappearance of another ethnic group, and was immediately called out by other Twitter users. Within a few minutes, he had tweeted, “Sorry, guys, what I said wasn’t right and I take it back”.
This isn’t the behavior of a troll, Benesch argues. If the user in question were simply looking for attention, he wouldn’t have backed down when his inflammatory tweets met with spontaneous counterspeech. This online counterspeech is especially important when online speech is magnified by broadcast media, as it is in both Kenya and the US – it’s possible for television and newspapers to magnify not just the hateful speech but the attempts to counteract it.
By studying successful examples of counterspeech, Benesch is trying to develop a taxonomy of counterspeech and determine when and where different forms are most useful. She takes inspiration from examples like that of a young man in the US tweeting angrily about Nina Davuluri being named Miss America. The young man inaccurately and disparagingly referred to Davuluri as “an Arab”, and was immediately countered on Twitter by people who called out his racism. Within a few hours, he’d tweeted something resembling an apology to Davuluri herself.
Benesch wonders, “Can we put together the ideas of counterspeech and the idea of influencing 16 year olds?” It’s not realistic to believe we’re going to change the behavior of hardcore haters, she tells us, but we only need to influence a critical mass of people within a community, not the outliers.
Twitter and Facebook aren’t the only environments for inflammatory speech online – anyone who’s participated in online gaming knows that there’s toxic and hostile speech in online environments. Riot Games was concerned about the speech surrounding their popular game League of Legends and cooperated with academic researchers to understand speech in their game universe. The study found that fully half of the inflammatory messages were coming from users we wouldn’t normally consider to be trolls – they came from people who generally behaved like other game players, but were having a bad day and lashed out in ways that were inflammatory. They also discovered that very small changes in the platform – changes in language used to prompt players, apparently minor changes like font and text color – could improve behavior substantially.
Facebook’s “compassion research” project works on similar ideas, trying to get people to use Facebook in more pro-social ways. When you try to flag content on Facebook as offensive, Facebook first prompts you to engage with the person who offended you, suggesting language to communicate to the other user: “Could you take this down? It hurts my feelings.” As with Riot Games, they’ve found that small prompts can lead to dramatic behavior changes.
Benesch has been using these insights to consider problems of inflammatory speech in Myanmar (a topic I learned a little about in my visit to the country earlier this month.) In Myanmar, Facebook is the dominant internet platform, not just the dominant social media platform – if you search for information in Myanmar, you’re probably searching Facebook. In this environment, a rising tide of highly inflammatory speech inciting Buddhists against Muslims, particularly against the Rohingya people, is especially concerning. Not only does Facebook in Myanmar lead to echo chambers where no one may be willing to challenge inflammatory speech with counterspeech, but some of the mechanisms that work elsewhere may not work in Myanmar.
In a country that’s suffered under a military dictatorship for half a century, the idea of “reporting” people for their speech can be very frightening. Similarly, being encouraged to engage with someone who posted something offensive, when you have reason to fear that this person or his friends might threaten your life, isn’t a workable intervention. Any lessons from Facebook’s compassion research need to be understood in specific human contexts. Benesch asks how you should respond to offensive speech as a Facebook user in Myanmar: you can like the post, but you can’t dislike it. If you respond in the comments thread, you’re participating in a space where the page owner can eliminate or bury your comment. This points to the challenge of using a private space as a quasi-public space.
We need more research on questions like this, Benesch offers. We need to understand different responses to dangerous speech, from “don’t feed the trolls” to counterspeech, to see what’s effective. We need to understand whether counterspeech that seeks to parody or use humor is more effective than direct confrontation. And we need to understand discourse norms in different communities as what works in one place is unlikely to work in another. Louis Brandeis advised that the remedy for bad speech is more speech. As researchers, we can go further and investigate which speech is a helpful counter to bad speech.
I’ll admit that the topic of Benesch’s research made me uneasy when we first met. I’m enough of a first amendment absolutist that I tend to regard talk of “dangerous speech” as an excuse for government control of speech. I had a great meeting with Benesch just before I went to Myanmar, and was much better prepared for the questions I fielded there than if I hadn’t benefitted from her wisdom. She’s done extensive work understanding what sorts of speech seem to drive people to harm one another, and she’s deeply dedicated to the idea that this speech can be countered more effectively than it could be censored or banned.
The conversation after her talk gave me a sense for just how challenging this work is – it’s tricky to define inflammatory speech, dangerous speech, trolling, etc. What might be a reasonable intervention to counter speech designed to incite people to violence might not be the right intervention to make a game community more inviting. On the other hand, counterspeech may be more important in ensuring that online spaces are open and inviting to women and to people of different races and faiths than they are right now, even if inflammatory speech never descends to the level of provoking violence.
For people interested in learning more about this topic, I’d recommend the links on the Berkman talk page as well as this essay from Cherian George, who was at the same meeting I attended in Myanmar and offered his thoughts on how the country might address its inflammatory speech online. I’m looking forward to learning more from Susan’s work and developing a more nuanced understanding of this complicated topic.
Engin Onder and Zeynep Tufekci visited the Berkman Center today to talk about the rise of citizen reporting in Turkey. Tufekci is a leading scholar of online media and protest, and Onder is one of the founders of 140journos, an exciting citizen media group that’s been central to documenting Turkey’s protests in Gezi Park and across the nation.
Zeynep Tufekci offers an overview of the press situation in Turkey to provide context for Engin’s work with 140journos. There’s no golden age of press freedom in Turkey to look back to, she warns. After the military coup in 1980, the 1980s were a decade marked by military censorship. In the 1990s, Turkish media suffered from censorship around Kurdish issues, but there were media outlets that took journalism seriously within existing constraints.
In the 2000s, the concentration of power by AKP after their second election led to large conglomerates moving into the media business and buying up the press. Energy companies ended up buying leading newspapers, firing columnists and steering the papers’ editorial direction towards the government… and, coincidentally, would win the next major government energy contract. Zeynep describes the situation as “ridiculous”, noting that a multiday clash in the heart of the nation’s biggest city was broadcast by CNN International, while CNN Turk broadcast a documentary on penguins. Talking to a Turkish journalist about the situation, the journalist explained a layered system of censorship: “First, I censor myself. Then my editor censors me, taking my already soft story and making it softer. And if that’s still not soft enough, the government may call a newspaper or TV station and demand that coverage change.” Should an outlet not comply, they face massive tax bills, which mysteriously disappear when the media becomes more compliant.
While the press is heavily constrained, Zeynep tells us, the internet is largely open. Websites have been blocked, but it has been very easy to get around censorship using proxies. The blocking of YouTube, she tells us, wasn’t a serious obstacle to viewing content, as even the prime minister admitted he used proxies to access it. Instead, it was a tax strategy, trying to get Google to come to Turkey and pay taxes. That’s changing, however, and the new censorship regime promised is significantly more serious, including deep packet inspection.
Zeynep tells us of the Roboski Massacre, a bombing in the village of Uludere, in Kurdish areas where informal smuggling is part of the local economy. The village was bombed by military jets, killing 34 people. It was unclear whether this was a mistake by the military, or a conscious attack on the Kurdish population.
Every newsroom in the country knew about the story and all waited to hear whether they could publish about it. A Turkish journalist, Serdar Akinan, decided to fly to the area and took a minibus to the village, encountering the massive funeral procession. He took an Instagram photo and shared it on Twitter… which broke the media blackout and led everyone to start publishing news of the bombing. Akinan lost his job for this reporting and now works for an independent news organization.
The story of 140journos starts there, Zeynep tells us. Engin Onder introduces himself as a non-journalist from Istanbul, a former passive news consumer before media and news broke down. “We felt so sad about this issue, and thought we can do some stuff.” Onder runs a group of creative professionals called Institute of Public Minds, a group that operates creatively in physical and digital public spaces.
In early 2012, in the wake of the Roboski Massacre, Onder and his colleagues felt compelled to start building their own media systems to address the weaknesses of the professional media. Roboski wasn’t the only trigger – a set of pro-secularism protests in 2007 and a union protest in Ankara in 2009 also received no media coverage.
Akinan’s coverage of the Roboski massacre was the inspiration for Engin and his friends Cem and Safa. All three were heavy Twitter users, and they realized that Twitter and online services might be sufficient infrastructure to report the news, as it was all Akinan needed to break this critical story. They brainstormed names, and settled on 140journos, honoring Twitter’s character limit and using slang to poke fun at the professional status of journalists.
Cem had been kicked out of his house because his politics so sharply diverged from his father’s. His father read and watched only media from one conglomerate, while Cem began reading underground and alternative newspapers – for Cem, 140journos is about “hacking his father”, creating media that could sway his parents. Safa is a conservative and religious guy, who helps counterbalance the team. Engin tells us that he had only attended one rally before starting the project.
Before the Gezi protests, 140journos reported on key court cases using nothing more than a 3G mobile phone. At some point in a key trial, the judge demanded that journalists with press cards leave – the 140journos remained and continued tweeting from their phones. That led to discovery of the network by mainstream journalists (who probably resented 140journos for being able to remain in the courtroom.)
140journos made a point of visiting a wide range of public protests, including conservative protests against fornication. They believed it was important to ensure different groups understood each other and saw the diversity of protest movements.
Media coverage of 140journos had been pretty condescending, focusing on the youth of the participants, not on the quality of their reporting. Zeynep, on the other hand, took their work seriously, declaring “This is not ‘citizen journalism’ – this is ‘journalistic citizenship’.”
Once the Gezi Park protests broke out, 140journos found themselves at the heart of a massive movement in Istanbul. Part of the mantra of the Gezi movement was, “the media is dead – be the media”. This helps explain why, during a moment the police were spraying tear gas in Taksim Square, a protester was holding up an iPad and taking photos. Gezi brought a culture of documentation to Turkish protest movements.
The tools of the trade, Engin tells us, include Facebook, Twitter, Soundcloud, Vine, and Instagram, as well as tools that help mine social media platforms. Tineye, Topsy, and Google Image Search helped them verify images, and traffic cameras were also helpful. Google Maps allowed the team to identify where documentation took place, as did Yandex Panorama (similar to Google Streetview, but with coverage of Turkey). When they heard the names of people involved with the protests, they sought them out via Facebook, then scheduled in-person or phone interviews. Internally, the team coordinated using WhatsApp.
During the protests, 140journos were tweeting hundreds of times a day. They noted different media usage patterns in different parts of the country. Istanbulis use a wide range of media types. Ankarans favor livestreaming. In Izmir, there was less content produced and more complaints about what the media wasn’t covering.
When the culture of protest documentation became common, the role for 140journos changed into a practice of curating and verifying, not frontline reporting. They decided they couldn’t participate in the protests, and never physically appeared in the park so they could cover the protests with a level of detachment and neutrality. They may have sympathized with the protesters, but their role was as journalists, not activists.
To explain the working method, Engin gives us an example from Rize, a conservative town that’s the hometown of the Prime Minister. A crowd, allegedly armed with knives, gathered in front of the office of a secularist group. Seeking to verify what was going on, they searched online, found a blurry photo of the protesters outside the office and started reading signs on the street. They began calling shops on the street and interviewing witnesses of the standoff. Ironically, one of the businesses nearby was a TV station which, unsurprisingly, was not reporting on the situation. Eventually, they also found a nearby traffic camera, and used a combination of the interviews and the street camera to confirm the story and report on it.
After the Gezi Park protests, Engin argues that the content of citizen journalism has been legitimized, the quality of citizen journalism content has been refined and the value of credibility has been strengthened throughout their network. There’s now a network of citizen journalists aside from 140journos, and 140journos often uses these networks to vet their work. 140journos builds their reporting on lists of citizens they’ve verified live in different Turkish cities – when an event takes place, they lean on those local sources.
In a remarkable twist, Veli Encü, a survivor of the Roboski Massacre, has become a correspondent. When warplanes fly over Uludere, he immediately reports to the network so that people can watch and ensure another massacre doesn’t take place. Cem’s father, who used to isolate himself in conservative media, has now become an activist and a much broader reader. And 140journos is now producing a radio show driven by citizen media, broadcasting once a week, and projecting their work onto the sides of public buildings to attract attention and open dialog with a broad range of participants.
We move into a Q&A, which I open by asking whether the rise of citizen journalism has shamed Turkish journalists into changing their behavior. Engin is uncertain. He notes that the CEO of CNN Turk underestimates citizen journalism, likely seeing it as providing misinformation and poisoning public discourse. But media workers are starting to work as pirates, with 10 or more professional journalists anonymously contributing stories they otherwise couldn’t get published. Zeynep suggests that there has been a significant change post-Gezi, with more actual news carried live. 140journos was a catalyst, she argues, but so were marches where people stood outside TV stations, waved money and begged reporters to do their jobs. There’s another cultural shift, both note. Citizens are willing to put themselves at personal risk to capture images from the frontline of protests.
A Berkman fellow asks whether there are any Turkish tools being used to produce this media. For better or worse, Engin explains, the tools used are those of social media, and almost all are hosted in the US, but available for no cost online. Furthermore, the journalism the team is doing is wholly non-commercial – they support themselves through other jobs and engage in their reporting as part of their civic engagement.
In the next few weeks, 140journos is planning to release two new tools. One will use elements of gamification to help increase the practice of verifying and factchecking reporting. The other will provide background detail on locations throughout Turkey on a data-enhanced map, which can be used as a way to provide context and background information on stories the network releases.
Another question asks whether there are any plans to monetize content. Engin is insistent that the priority is building better content, not working on sustainability. Another questioner asks whether coming internet censorship will make it difficult for 140journos to share content. Engin explains that the group has so many friends in the Pirate Party that they won’t have trouble finding VPNs, or helping their readers find VPNs. At the same time, he notes that it’s unclear how these admittedly draconian laws will actually be implemented. Engin notes that his group is not Anonymous (or anonymous) – they strongly believe they are doing nothing illegal, merely reporting the news.
Another question asks whether the Turkish government will begin mining online data to identify protesters. Zeynep explains that this isn’t necessary – every phone in Turkey is registered to an individual’s national ID, and the government has the identity of everyone who has appeared at protests. While there have been occasional arrests of people who tweeted to incite violence, there have not been widespread roundups of people involved with these demonstrations. Engin notes that the government probably cannot shut down the internet in Turkey without collapsing the government entirely.
Zeynep closes the conversation by noting her amazement when she discovered that 140journos was four college students, working in their free time. She draws an analogy to the groups that coordinated logistics during the Tahrir protests, who used social media to build a logistics team, inspired by a local cupcake shop that used Twitter in that fashion. Zeynep suggests that we’re seeing a technological shift that makes certain kinds of mobilization significantly easier than it ever had been before.
danah boyd is a Principal Researcher at Microsoft Research, a Research Assistant Professor in Media, Culture, and Communication at New York University, a fellow at the Berkman Center, and director of the Data & Society Research Institute. danah has been working on the issues associated with “It’s Complicated” for many years. 10 years ago, Ethan and danah were two of the youngest people at a conference. danah told him, “I only have one secret to get through these events. I tell them what their children are doing.” Telling people what their children are doing online is incredibly valuable, either because we’re parents who care about our children, or because we care about the future of the Internet. danah has been relentless over the last decade in trying to make it clear that simple snap answers about the Internet (good, bad, dangerous, amazing) are utterly and totally inadequate. What we need to do is to take a long, careful look at the context that underlies people’s behaviors online. We’re in a moment where the easiest thing to do is to say “it’s simple.” danah has put forward a book that says “it’s complicated.”
Today is the official publication of the book and danah tells us that she wanted to spend the day with friends. (Her day began, mediawise, with a celebratory story on NPR, “Online, Researcher Says, Teens Do What They’ve Always Done”.) She’s been affiliated with the Berkman Center, in one context or another, for fifteen years. Rather than lecture about the book, she wants to provide some context on her thinking, then take questions.
danah explains that she was part of the first generation to grow up online, and that the internet was her “saving grace”. Her brother tied up the phone lines with strange modem squeals, and he showed her that the internet was made of people. Once she’d made that discovery, the phone line wasn’t safe after her mother went to bed. The first $700 phone bill ended that, but introduced danah to the wider world of phone phreaking and misbehavior to ensure she had access to online spaces.
She went to school in computer science to explore the space, but didn’t really find her direction until she came to grad school and was able to study social media. She began a blog in 1998, and has been participating in and working on social media since then. Working with Judith Donath in 2002, she was invited to join Friendster a year before the network became prominent and widely used (see this paper on Friendster).
The early adopters of Friendster were geeks, freaks and queers, danah tells us, and those groups are the early adopters of most new technology platforms. As someone who identifies with all those groups, danah tells us that she had a front-row seat for Friendster’s successes and missteps, and was often able to interrogate the platform’s founder about his decisions. She moved to studying MySpace, and benefitted from the shift of youth to that platform, allowing her to watch the rise and fall of two major social media platforms (see danah’s research on Friendster and MySpace and this paper on Why Youth (Heart) Social Network Sites)
danah tells us that MySpace was based on ColdFusion, a now-antiquated web programming language, and that the quirks of the site led her to novel research methods. User IDs were assigned sequentially, and she was able to sample users by choosing a random subset of IDs. But as her research developed, it became harder to randomly approach youth online, so danah shifted her research methods to working offline, traveling around the United States to meet the young users of these platforms. (The major problem with interviewing 166 teenagers was dietary – it involved a lot of cafeteria lunches and a lot of McDonalds.)
Her research on teens informed her doctoral dissertation, and once she’d completed it, she felt a need to discuss the same issues with a broader audience. The book is organized around myths associated with youth and online media: the idea that youth are digital natives, that online spaces are heavily sexualized, and that online spaces are dangerous to youth.
Her overall takeaway from this research: we have spent thirty years restricting the ability of youth to get together face to face in the physical world. These technologies give youth access to public life once again and let them make meaning of the world around them. Youth want to gather and socialize with their friends and become part of public life. Many youth would rather get together in real life, but turn to online spaces because those are the only spaces where young people can interact with one another in public life.
“There’s so much learning, so much opportunity through being part of public life”, says danah. We need to accept the idea that these online spaces are the key public spaces for young people.
Dorothy Zinberg asks about cycles – the decline of Friendster and MySpace – is Facebook now declining? And how do we expect these youth to change over the next decade?
danah notes that ten years ago, email was something people were excited about – “You’ve got mail” was a popular ringtone. Now, we open email with apprehension and worry. And that’s how teens are now approaching Facebook. Teens are not running from it, but it’s no longer the “passion play” – instead, it’s a place to connect with adults in your life. Who wants to spend their time hanging out with adults? The idea of a single platform to rule them all will look like a historical anomaly. It’s more natural to see fragmentation, a wealth of platforms that people use for different reasons and in different contexts. There are messaging, photo and videosharing services, all of which have emerged as new spaces for youth. We’re also seeing the emergence of interest-driven spaces like Tumblr or Twitter, which make it possible to geek out on fashion or music. There are also media for different communities – people obsessed with media are fascinated with Secret, which looks more like ChatRoulette in terms of speed. Young people, depending on their interests and passions, are moving across different services with texting as the single common denominator. danah notes that texting behavior in the US is anomalous, as we are one of the few countries where you pay to send and receive texts – there’s nothing more socially awkward than sending someone a message and making them pay for it.
As far as where youth usage is going: it’s moving to mobile. Mobile is an intimacy device. In response to discussions over safety, computers are now used in shared spaces, like the living room. The mobile device is a way of maintaining privacy. But the world of apps is a very different world than the world of websites. It’s surprising that we don’t yet have powerful geographically-linked apps – it may be that since youth are restricted to a world of home and school, geolocation doesn’t yet have a youth audience, and people love to experiment on young users. She notes that the old is new again, pointing to the rise of the aniGIF.
David LaRochelle wonders whether the issue of Facebook’s collapsed context is a technical problem – could something like Google Plus solve those technical problems? danah explains that the key feature of new platforms is that Mom doesn’t know about them yet – once Mom knows you have an account, she can watch over your shoulder or demand you friend her. She asks us to think back to high school: not everyone in your class is friends with one another. When you plan a party, you don’t want to invite everyone. That same drama plays out online – you can move to a different platform as a way of connecting with a subset of friends.
Judith Donath asks what we’ve learned longitudinally from studying social media over the course of years. What happens when a generation that grew up on one set of applications is now 23? danah explains that the book is really about high school. She’s tracked some of these teens through college or the military and into the real world. Something that becomes clear is that certain behaviors are tightly associated with a life stage. The constraints of high school dynamics seem to force people to work through status, peer relationships and early sexual relationships, all of which play into online media environments, and then, in turn, influence those school dynamics. Once people are no longer constrained by school dynamics, you see a more mature set of dynamics: more dating, more efforts to appear cool, and lots of discussion about employment. The use of social media also changes sharply for 20-somethings who want to go into fields like social media marketing or government, where career concerns end up reshaping their online behavior.
A visiting scholar from Valencia asks about gender differences in teen behavior online, especially around experimentation. danah notes that it’s challenging to differentiate between gendered behavior online and offline: online behavior mirrors the offline. Status dynamics come into focus for girls earlier than for boys, while boys have more gameplay relationships (pranking, punking) with their peers, both offline and online. One of her book chapters is on “drama”, a predominantly female behavior online and off. What’s more challenging in studying gender online is watching gendered pressures, especially around sexuality, play out online. Young girls see a Miley Cyrus video and feel pressure to dress and behave certain ways. Young boys feel social pressure to talk to girls in certain ways. Online environments make very clear how powerful these pressures are.
Tim asks about policy and practice responses to youth behavior online. danah explains that she never expected to engage in policy through this research. She takes us back to a lawsuit mounted by 49 states attorneys general against MySpace, accusing the platform of enabling sexual predation. One of the outcomes of the suit was appointing an Internet Safety Task Force, consisting of danah, John Palfrey, and Dena Sacco, to help MySpace regulate behavior. The attorneys general expected a tension between the three, but the three worked closely together to consider actual data around contact, conduct and content online. Their research found far less evidence for dangerous behavior online than the attorneys general had expected to find, and came to the counterintuitive finding that the laws designed to prevent bullying had often had negative effects. danah hopes that one thing this book can do is help prevent ridiculous, counterproductive laws from being written.
danah also explains that it’s been very hard to work with practitioners, like teachers. In the early days of social media, teachers often came into these spaces and explored how to interact with students. It’s now become an article of faith that teachers should not engage with students in these spaces, and that’s a shame, as it’s important to have non-custodial guides online. Don’t friend a student, but if a student reaches out to you, reciprocate. “Don’t flip out” when students misbehave, but make clear that you’re present in the space. She notes that Jane Jacobs explained the importance of eyes on the street in urban spaces – we might think of the same dynamic happening online.
Kate Darling notes that Sherry Turkle speculates that online communication in place of face-to-face communication is dangerous and detrimental. danah explains that she loves Sherry as a person, but strongly disagrees with her as a researcher. Sherry starts conversations by noting how uncomfortable teens are interacting with adults – when, asks danah, have teens ever been comfortable interacting with adults?! Teens are comfortable socializing with each other face to face, but retreat to devices around adults. Teens want to spend more time face to face with friends and generally are prevented from doing so. “Every aspect of sociality is a learning process and you strengthen different muscles through different interactions.” Teens may be more sophisticated in interacting online than in interacting face to face simply through where they have the most practice. But it’s absurd to suggest that teens are somehow stunted by online interaction.
“The political activist in me got entertained by the idea that a generation learned to use proxies to escape restrictions put in place by adults.” When we talk about teenagers, we’re usually dealing with our own anxieties, danah says. Lawmakers became obsessed with teen sexting before Anthony Weiner came on the scene and reminded everyone that lawmakers can misbehave as well.
Deb Chachra refers to a Sumerian clay tablet in which a father complains about his slacker son. How do we overcome the “kids these days” narrative that shapes so much of the discussion around kids and social media? danah notes that the downside of Deb’s example is that we appear to be predestined to repeat these behaviors. The key is to get adults to listen to young people. Young people are telling their stories – the positive and the negative – to an unprecedented degree. Instead of complaining, there’s an amazing opportunity to listen to youth writ large. danah hopes the book will spark conversations about how we listen, rather than answering specific questions. That said, she worries that protectionism of young people leads to young adults who are not well socialized to deal with the choices of college or adult life.
A questioner asks about kids moving to different platforms to escape their parents. She notes that there’s a spectrum of risk from paranoia to actual risk. If kids are escaping to these narrow, parent-free spaces, who are the people on the streets who can provide eyes on online behavior? The providers of these applications are not teenage kids and may not have the best interests of teens in mind. danah suggests that we think about what adults are in the lives of kids beyond their parents. We want kids to have multiple adults – the cool aunt, the teacher they like – in their lives, and they are likely to invite these adults into online spaces. This is why age segregation is a dangerous direction for online platforms – we want adults and youth interacting in the same spaces. But danah doesn’t believe this will necessarily happen automatically – we may need to consciously create eyes on the streets, as community activists do by putting college students on the streets to work with at-risk youth. We need people to be involved in online spaces in a way that they are available, but non-judgmental.
danah notes her involvement with Crisis Text Line, an NGO she helped start, that uses texting to connect teens to crisis hotlines. The worst thing we can do, she suggests, is put these decisions in the hands of engineers. We need to look at the people who understand these social systems and build on their best practices.
Rob Faris notes that part of being a kid is surviving your own mistakes and being able to hit the reset button. How does that work in online spaces? Could platforms and online spaces improve on this score? danah notes that one of the challenges online is that things go on your permanent record – she notes that her teenage Usenet posts are still online. We don’t know the longitudinal answer, danah notes – she’s part of a cohort that really did grow up online, but it’s not clear how that information may affect life going forward. People assumed that bullying would be worse online, but it’s actually turned out that having a record of bullying is helping people find support. Documenting self-harm seems to lead youth to interventions that happen more quickly, and perhaps that acceleration is a good thing. Perhaps we are able to acknowledge the past through some sort of online transparency, putting information online before someone else does. She notes that we’ve moved into a culture of forgiveness for US presidents – from Clinton’s “I didn’t inhale” to Obama’s “I did drugs, but I was a kid” – suggesting that we may simply be making it easier to escape your past. The question is whether this will be true for underprivileged youth in the same way that it is for the most privileged.
A questioner asks about youth’s relationships with free services. danah notes that this generation has very little access to financial capital. Babysitting and newspaper routes no longer produce revenue for kids, and kids now compete with fifty-year-olds for fast food jobs. Without capital, there’s enormous pressure for kids in poorer families to get a phone and a pay-as-you-go data plan. As a result, she saw a lot of kids engaged in illegal activities to obtain devices. Once kids are online, it’s all about free. They don’t particularly like ads, but they don’t see an alternative. The response is a form of gameplaying: can you send content to your friends that makes them get absurd ads? Young people understand the ecosystem, but their relationship is one of hacking and playing. Their goal is socializing with their friends, and they understand that free services make that possible.
Tim Mallay notes that he recently revisited danah’s discussions of gentrification in MySpace. In 2006-7, danah explains, she saw a split between youth moving to MySpace and Facebook. Facebook appeared safe and high-status, while MySpace seemed dangerous, poor and used by people of color. danah wrote an essay discussing this dynamic – an essay she now regrets – and woke up to a media storm that resulted from her observations. Teens often told danah that she was right, though insufficiently nuanced. These race and class dynamics are still critical to understanding social media, danah tells us, but there’s no longer as stark a division between sites. Instead, it plays out in different behavior on the different platforms. Because social media plays out around the race and class networks of your social circle, it’s impossible to understand online behavior without considering these issues. danah guesses that we’ll see this again once we’re fragmenting between different services like messaging apps – adoption of the different platforms tends to be based on race and class. This matters, because in 2006-7 colleges were recruiting online – we need to make sure that we don’t reproduce privilege online by favoring some platforms over others.
A question from a Berkman staffer begins by noting that he coaches high school athletes, and he’s observed that they are less broadly skilled than they were years ago. He believes this is because students only engage in physical behavior in structured ways. Is it possible that we may finally be reaching a point where we will be able to tell youth that it’s okay to go outside again? danah notes that, especially within privileged environments, it’s hard to get a network of parents to change behavior. It’s a collective action problem – if you allow your children to be “free range” kids, other parents will force their children to shun your child. Because it doesn’t work to go parent to parent, danah feels it’s important to bring these messages of the importance of giving youth space to roam online and offline to media and other public fora. Oddly, she’s more successful making this case to urban families than to suburban ones, if only because public transportation makes it possible for children to roam.
danah will be speaking tonight (February 25) at the Harvard Bookstore. Come hear her talk about “It’s Complicated” and bring your own questions.
Kate Darling (@grok_) offers a talk to the Berkman Center that’s so popular that it needs to be moved from Berkman to Wasserstein Hall, where almost a hundred people come for lunch and her ideas on robot ethics. Kate is a legal scholar completing her PhD during a Berkman fellowship (and a residency in my lab at the Media Lab), but tells us that these ideas are pretty disconnected from the doctoral dissertation she’s about to defend on copyright. She’s often asked why she’s chosen to work on these issues – the simple answer is “Nobody else is”. There’s a small handful of “experts” working on robots and ethics, and she feels an obligation to step up to the plate and become genuinely knowledgeable about these issues.
Robots are moving into transportation, education, care for the elderly and medicine, beyond manufacturing where they have been for years. She is concerned that our law may not yet have a space for the issues raised by the spread of robots, and hopes that we can participate in the construction of a space of robotics law, following on the healthy and creative space of cyberlaw.
She begins with a general overview of robot ethics. One key area is safety and liability – who is responsible for dysfunction and damage in these complex systems, where there’s a long chain from coder to the execution of the system? It sounds fanciful, but people are now trying to figure out how to program ethics into these systems, particularly around autonomous weapons like drones.
Privacy is an area that creates visceral responses in the robotics space – Kate suggests that talking about robots and privacy may be a way to open some of the discussions about the hard issues raised by NSA surveillance. But Kate’s current focus is on social robots, and specifically on the tendency to project human qualities on robots. She references Sherry Turkle’s observation that people bond with objects in a surprisingly strong way. There are perhaps three reasons for this: physicality (we bond more strongly with the real world than with the screen), perceived autonomous action (we see the Roomba moving around on its own, and we tend to name it and feel bad when it gets stuck in the curtains), and anthropomorphism (robots designed to mimic expressions we associate with states of mind and feelings).
Humans bond with robots in surprising ways – soldiers honor robots with medals, demand that robots be repaired instead of being replaced, and demand funerals when they are destroyed. She tells us about a mine-defusing robot that looked like a stick insect. It lost one of six legs each time it exploded a mine. The colonel in charge of the exercise called it off on the grounds that a robot reduced to two or three legs was “inhumane”.
Kate shows her Pleo dinosaur, named for Yochai Benkler. The robot was the inspiration for an experiment she ran at a workshop with legal scholars, where she encouraged participants to bond with these robots, then to destroy one. Participants were horrified, and it took the threat of destroying all the robots to get the group to destroy one of the six. She observes that we respond to social cues from lifelike machines, even if we know they are not real.
Kate encourages workshop participants to kill a robot. Murderer.
So why does this matter? People are going to keep creating these sorts of robots, if only because toy companies like to make money. And if we have a deep tendency to bond with these robots, we may need to discuss the idea of instituting protections for social robots. We protect animals, Kate explains. We argue that it’s because they feel pain and have rights. But it’s also because we bond with them and we see an attack on an animal as an attack on the people who are bonded with and value that animal.
Kate notes that we have complicated social rules for how we treat animals. We eat cows, but not horses, because they’re horses. But Europeans (though not the British) are happy to eat horses. Perhaps the uncertainty about rights for robots suggests a similar cultural challenge: are there cultures that care for robots and cultures that don’t? This may change, Kate argues, as we have more lifelike robots in our lives. Parts of society – children, the elderly – may have difficulty distinguishing between live and lifelike. In cases where people have bonded with lifelike robots, are we comfortable with people abusing these robots? Is abusing a robot someone cares about, and may not be able to distinguish from a living creature, a form of abuse if it hurts the human emotionally?
She notes that Kant offered a reason to be concerned about animal abuse: “We can judge the heart of a man by his treatment of animals for he who is cruel to animals becomes hard also in his dealings with men.” Some states look at reports of animal abuse and conduct investigations of child abuse when there’s been a report of animal abuse in a household because they worry that the issues are correlated. Is robot abuse something we should consider as evidence of more serious underlying social or psychological issues?
Kate closes by suggesting that we need more experimental work on how human/robot bonding takes place. She suggests that this work is almost necessarily interdisciplinary, bringing together legal scholars, ethicists and roboticists. And she hopes that Cambridge, which brings these fields together in physical space, could be the place where these conversations take place.
Jessa Lingel of MSR asks whether an argument for protecting robots might extend to labor protections for robots. “I’m not sure I buy your arguments, but if so, perhaps we should also unionize robots?” Kate argues that we should grant rights according to needs and that there’s no evidence that robots mind working long hours. Jessa suggests that the argument for labor rights might parallel the Kantian argument – if we want people to treat laborers well, maybe we need to treat our laboring robots well.
There’s a long thread on intellectual property and robots. One question asks whether we can demand open-source robots as a way of ensuring local rather than centralized control. Another asks about the implications of self-driving cars and the ability to review algorithms for responsibility in the case of an accident. I ask a pointed question: if the Pentagon begins advertising ethical drones that check to see whether there’s a child nearby before bombing a suspected terrorist, will we be able to review the ethics code? Kate notes that a lot of her answers to these questions are, “Yes, that’s a good question – someone should be working on this!”
Andy Sellars of Digital Media Law Project asks Kate to confront her roboexceptionalism. He admits that he can’t make the leap from the Pleo to his dog, and can’t see any technology on the horizon that would really blur that line for him. Her Pleo experiment could be replicated with stuffed animals – would we worry as much about people torturing stuffed animals? Kate cites Sherry Turkle, who has found evidence that children do distinguish between robots and stuffed animals. More personally, she tells a story about a woman who told her, “I wouldn’t have any problem torturing a robot – does that make me a bad person?” Kate’s answer, for better or for worse, is yes.
Tim Davies of the Berkman Center offers the idea that Kate’s argument for robot ethics is virtue ethics: ethics is the character we have as people. Law generally operates in the space of consequentialist ethics: it’s illegal because of the consequences of behavior, not its reflection on your character. He wonders whether we can move from the language of anthropomorphism around robots and talk about simulation. There are legal cases where simulation of harm is something we consider to be problematic, for instance, simulated images of child abuse.
Boris Anthony of Nokia and Ivan Sigal of Global Voices (okay, let’s be honest – they’re both from Global Voices) both ask about cultural conceptions of robots through science fiction – Boris references Japanese anime and suggests that Japanese notions of privacy may be very different from American notions; Ivan references Philip K. Dick. Kate notes that, in scifi, lots of questions focus on the inherent qualities of robots. “Almost Human”, a near-future show that posits robots that have near-human emotions, is interesting, but not very practical – we’re not going to have those robots any time soon. Issues of projection are going to happen far sooner. In the story that becomes Blade Runner, the hero falls in love with a robot who can’t love him back, and he loves her despite that reality – that’s a narrative that had to be blurred out in the Hollywood version because it’s a very complex question for a mainstream movie.
Chris Peterson opens his remarks by noting that he spent most of his teenage years blowing up Furbies in the woods. “Was I a sociopath, a teenager in New Hampshire, or are the two indistinguishable?” Kate, whose Center for Civic Media portrait features her holding a flayed Furby shell, absolves Chris: “Furbies are fucking annoying.” Chris’s actual question focuses on the historical example of European courts putting inanimate objects on trial, citing a case where a Brazilian colonial court put a termite colony on trial for destroying a church (and the judge awarded wood to the termites who had been wronged in the construction.) Should emergent, autonomous actors that have potentials not intended by designers have legal responsibilities? “Should the high frequency trading algorithm that causes harm be put to death? Do we distinguish between authors and their systems in the legal system?” Kate suggests that we may have a social contract that allows the vengeance of destroying a robot that we think has wronged people, but notes that we also try to protect very young people from legal consequences.
Bruce Schneier is one of the world’s leading cryptographers and theorists of security. Jonathan Zittrain is a celebrated law professor, theorist of digital technology and wonderfully performative lecturer. The two share a stage at Harvard Law School’s Langdell Hall. JZ introduces Bruce as the inventor of the phrase “security theater”, author of a leading textbook on cryptography and subject of a wonderful internet meme.
The last time the two met on stage, they were arguing different sides of an issue – threats of cyberwar are grossly exaggerated – in an Oxford-style debate. Schneier was baffled that, after the debate, his side lost. He found it hard to believe that more people thought that cyberwar was a real threat than an exaggeration, and realized that there is a definitional problem that makes discussing cyberwar challenging.
Schneier continues, “It used to be, in the real world, you judged the weaponry. If you saw a tank driving at you, you knew it was a real war because only a government could buy a tank.” In cyberwar, everyone uses the same tools and tactics – DDoS, exploits. It’s hard to tell if attackers are governments, criminals or individuals. You could call almost anyone to defend you – the police, the government, the lawyers. You never know who you’re fighting against, which makes it extremely hard to know what to defend. “And that’s why I lost”, Schneier explains – if you use a very narrow definition of cyberwar, as Schneier did, cyberwar threats are almost always exaggerated.
Zittrain explains that we’re not debating tonight, but notes that Schneier appears already to be conceding some ground in using the word “weapon” to explore digital security issues. Schneier’s new book is not yet named, but Zittrain suggests it might be called “Be afraid, be very afraid,” as it focuses on asymmetric threats, where reasonably technically savvy people may not be able to defend themselves.
Schneier explains that we, as humans, accept a certain amount of bad action in society. We accept some bad behavior, like crime, in exchange for some flexibility in terms of law enforcement. If we worked for a zero murder rate, we’d have too many false arrests, too much intrusive security – we accept some harm in exchange for some freedom. But Bruce explains that in the digital world, it’s possible for bad actors to do asymmetric amounts of harm – one person can cause a whole lot of damage. As the amount of damage a bad actor can create increases, our tolerance for bad actors decreases. This, Bruce explains, is the weapon of mass destruction debate – if a terrorist can access a truly deadly bioweapon, perhaps we change our laws to radically ratchet up enforcement.
JZ offers a summary: we can face doom from terrorism or doom from a police state. Bruce riffs on this: if we reach a point where a single bad actor can destroy society – and Bruce believes this may be possible – what are the chances society can get past that moment? “We tend to run a pretty wide-tail bell curve around our species.”
Schneier considers the idea that attackers often have a first-mover advantage. While the police do a study of the potentials of the motorcar, the bank robbers are using them as getaway vehicles. There may be a temporal gap when the bad actors can outpace the cops, and we might imagine that gap being profoundly destructive at some point in the near future.
JZ wonders whether we’re attributing too much power to bad actors, implicitly believing they are as powerful as governments. But governments have the ability to bring massive multiplier effects into play. Bruce concedes that this is true in policing – radios have been the most powerful tool for policing, bringing more police into situations where the bad guys have the upper hand.
Bruce explains that he’s usually an optimist, so it’s odd to have this deeply pessimistic essay out in the world. JZ notes that there are other topics to consider: digital feudalism, the topic of Bruce’s last book, in which corporate actors have profound power over our digital lives, a subject JZ is also deeply interested in.
Expanding on the idea of digital feudalism, Bruce explains that if you pledge your allegiance to an internet giant like Apple, your life is easy, and they pledge to protect you. Many of us pledge allegiance to Facebook, Amazon, Google. These platforms control our data and our devices – Amazon controls what can be in your Kindle, and if they don’t like your copy of 1984, they can remove it. When these feudal lords fight, we all suffer – Google Maps disappears from the iPad. Feudalism ended as nation-states rose and the former peasants began to demand rights.
JZ suggests some of the objections libertarians usually offer to this set of concerns. Isn’t there a Chicken Little quality to this? Not being able to get Google Maps on your iPad seems like a “glass half empty” view given how much technological progress we’ve recently experienced. Bruce offers his fear that sites like Google will likely be able to identify gun owners soon, based on search term history. Are we entering an age where the government doesn’t need to watch you because corporations are already watching so closely? What happens if the IRS can decide who to audit based on checking what they think you should make in a year and what credit agencies know you’ve made? We need to think this through before this becomes a reality.
JZ leads the audience through a set of hand-raising exercises: who’s on Facebook, who’s queasy about Facebook’s data policies, and who would pay $5 a month for a Facebook that doesn’t store your behavioral data? Bruce explains that the question is the wrong one; it should be “Who would pay $5 a month for a secure Facebook where all your friends are over on the insecure one – if you’re not on Facebook, you don’t hear about parties, you don’t see your friends, you don’t get laid.”
Why would Schneier believe governments would regulate this space in a helpful way, JZ asks? Schneier quotes Martin Luther King, Jr. – the arc of history is long but bends towards justice. It will take a long time for governments to figure out how to act justly in this space, perhaps a generation or two. Schneier argues that we need some form of regulation to protect against these feudal barons. As JZ translates, you believe there needs to be a regulatory function that corrects market failures, like the failure to create a non-intrusive social network… but you don’t think our current screwed-up government can write these laws. So what do we do now?
Schneier has no easy answer, noting that it’s hard to trust a government that breaks its own laws, surveilling its own population without warrant or even clear reason. But he quotes a recent Glenn Greenwald piece on marriage equality, which notes that the struggle for marriage equality seemed impossible until about three months ago, and now seems almost inevitable. In other words, don’t lose hope.
JZ notes that Greenwald is one of the people who’s been identified as an ally/conspirator to Wikileaks, and one of the targets of a possible “dirty tricks” campaign by HBGary, a “be afraid, be very afraid” security firm that got p0wned by Anonymous. Schneier is on record as being excited about leaking – JZ wonders how he feels about Anonymous.
Schneier notes how remarkable it is that a group of individuals started making threats against NATO. JZ finds it hard to believe that Schneier would take those threats seriously, noting that Anon has had civil wars where one group will apologize that their servers have been compromised and should be ignored as they’re being hacked by another faction – how can we take threats from a group like that seriously? Schneier notes that a non-state, decentralized actor is something we need to take very seriously.
The conversation shifts to civil disobedience in the internet age. JZ wonders whether Schneier believes that DDoS can be a form of protest, like a sit in or a picket line. Schneier explains that you used to be able to tell by the weaponry – if you were sitting in, it was a protest. But there’s DDoS extortion, there’s DDoS for damage, for protest, and because school’s out and we’re bored. Anonymous, he argues, was engaged in civil disobedience and intentions matter.
JZ notes that Anonymous, in their very name, wants civil disobedience without the threat of jail. But, to be fair, he notes that you don’t get sentenced to 40 years in jail for sitting at a lunch counter. Schneier notes that we tend to misclassify cyber protest cases so badly, he’d want to protest anonymously too. But he suggests that intentions are at the heart of understanding these actions. It makes little sense, he argues, that we prosecute murder and attempted murder with different penalties – if the intention was to kill, does it matter that you are a poor shot?
A questioner in the audience asks about user education: is the answer to security problems for users to learn a security skillset in full? Zittrain notes that some are starting to suggest internet driver’s licenses before letting users online. Schneier argues that user education is a cop-out. Security is interconnected – in a very real way, “my security is a function of my mother remembering to turn the firewall back on”. These security holes open because we design crap security. We can’t pop up incomprehensible warnings that people will click through. We need systems that are robust enough to deal with uneducated users.
Another questioner asks what metaphors we should use to understand internet security – War? Public health? Schneier argues against the war metaphor, because in wars we sacrifice anything in exchange to win. Police might be a better metaphor, as we put checks on their power and seek a balance between freedom and control of crime. Biological metaphors might be even stronger – we are starting to see thinking about computer viruses influencing what we know about biological viruses. Zittrain suggests that an appropriate metaphor is mutual aid: we need to look for ways we can help each other out under attack, which might mean building mobile phones that are two-way radios which can route traffic independent of phone towers. Schneier notes that internet as infrastructure is another helpful metaphor – a vital service like power or water we try to keep accessible and always flowing.
A questioner wonders whether Schneier’s dissatisfaction with the “cyberwar” metaphor comes from the idea that groups like Anonymous are roughly organized groups, not states. Schneier notes that individuals are capable of great damage – the assassination of a Texas prosecutor, possibly by the Aryan Brotherhood – but we treat these acts as crime. Wars, on the other hand, are nation versus nation. We responded to 9/11 by invading a country – it’s not what the FBI would have done if they were responding to it. Metaphors matter.
I had the pleasure of sitting with Willow Brugh, who did a lovely Prezi visualization of the talk – take a look!