Kate Darling (@grok_) offers a talk to the Berkman Center that’s so popular it has to be moved from Berkman to Wasserstein Hall, where almost a hundred people come for lunch and her ideas on robot ethics. Kate is a legal scholar completing her PhD during a Berkman fellowship (and a residency in my lab at the Media Lab), but she tells us that these ideas are pretty disconnected from the doctoral dissertation she’s about to defend, which is on copyright. She’s often asked why she’s chosen to work on these issues – the simple answer is “Nobody else is.” There’s only a small handful of “experts” working on robots and ethics, and she feels an obligation to step up to the plate and become genuinely knowledgeable about them.
Robots are moving into transportation, education, care for the elderly and medicine, beyond manufacturing where they have been for years. She is concerned that our law may not yet have a space for the issues raised by the spread of robots, and hopes that we can participate in the construction of a space of robotics law, following on the healthy and creative space of cyberlaw.
She begins with a general overview of robot ethics. One key area is safety and liability – who is responsible for dysfunction and damage in these complex systems, where there’s a long chain from the coder to the system’s execution? It sounds fanciful, but people are now trying to figure out how to program ethics into these systems, particularly around autonomous weapons like drones.
Privacy is an area that creates visceral responses in the robotics space – Kate suggests that talking about robots and privacy may be a way to open some of the discussions about the hard issues raised by NSA surveillance. But Kate’s current focus is on social robots, and specifically on the tendency to project human qualities onto robots. She references Sherry Turkle’s observation that people bond with objects in a surprisingly strong way. There are perhaps three reasons for this: physicality (we bond more strongly with the real world than with the screen), perceived autonomous action (we see the Roomba moving around on its own, and we tend to name it and feel bad when it gets stuck in the curtains), and anthropomorphism (robots designed to mimic expressions we associate with states of mind and feelings).
Humans bond with robots in surprising ways – soldiers honor robots with medals, demand that robots be repaired instead of being replaced, and demand funerals when they are destroyed. She tells us about a mine-defusing robot that looked like a stick insect. It lost one of six legs each time it exploded a mine. The colonel in charge of the exercise called it off on the grounds that a robot reduced to two or three legs was “inhumane”.
Kate shows her Pleo dinosaur, named for Yochai Benkler. The robot was the inspiration for an experiment she ran at a workshop with legal scholars, where she encouraged participants to bond with these robots, then to destroy one. Participants were horrified, and it took the threat that all the robots would be destroyed to get the group to sacrifice one of the six. She observes that we respond to social cues from lifelike machines, even if we know they are not real.
Kate encourages workshop participants to kill a robot. Murderer.
So why does this matter? People are going to keep creating these sorts of robots, if only because toy companies like to make money. And if we have a deep tendency to bond with these robots, we may need to discuss the idea of instituting protections for social robots. We protect animals, Kate explains. We argue that it’s because they feel pain and have rights. But it’s also because we bond with them and we see an attack on an animal as an attack on the people who are bonded with and value that animal.
Kate notes that we have complicated social rules for how we treat animals. We eat cows, but not horses, because they’re horses. But Europeans (though not the British) are happy to eat horses. Perhaps the uncertainty about rights for robots suggests a similar cultural challenge: are there cultures that care for robots and cultures that don’t? This may change, Kate argues, as we have more lifelike robots in our lives. Parts of society – children, the elderly – may have difficulty distinguishing between live and lifelike. In cases where people have bonded with lifelike robots, are we comfortable with people abusing these robots? Is abusing a robot someone cares about, and may not be able to distinguish from a living creature, a form of abuse if it hurts the human emotionally?
She notes that Kant offered a reason to be concerned about animal abuse: “We can judge the heart of a man by his treatment of animals for he who is cruel to animals becomes hard also in his dealings with men.” Some states look at reports of animal abuse and conduct investigations of child abuse when there’s been a report of animal abuse in a household because they worry that the issues are correlated. Is robot abuse something we should consider as evidence of more serious underlying social or psychological issues?
Kate closes by suggesting that we need more experimental work on how human/robot bonding takes place. She suggests that this work is almost necessarily interdisciplinary, bringing together legal scholars, ethicists and roboticists. And she hopes that Cambridge, which brings these fields together in physical space, could be a place where these conversations happen.
Jessa Lingel of MSR asks whether an argument for protecting robots might extend to labor protections for robots. “I’m not sure I buy your arguments, but if so, perhaps we should also unionize robots?” Kate argues that we should grant rights according to needs and that there’s no evidence that robots mind working long hours. Jessa suggests that the argument for labor rights might parallel the Kantian argument – if we want people to treat laborers well, maybe we need to treat our laboring robots well.
There’s a long thread on intellectual property and robots. One question asks whether we should demand open-source robots as a way of ensuring local rather than centralized control. Another asks about the implications of self-driving cars and the ability to review algorithms for responsibility in the case of an accident. I ask a pointed question: if the Pentagon begins advertising ethical drones that check whether there’s a child nearby before bombing a suspected terrorist, will we be able to review the ethics code? Kate notes that a lot of her answers to these questions are, “Yes, that’s a good question – someone should be working on this!”
Andy Sellars of Digital Media Law Project asks Kate to confront her roboexceptionalism. He admits that he can’t make the leap from the Pleo to his dog, and can’t see any technology on the horizon that would really blur that line for him. Her Pleo experiment could be replicated with stuffed animals – would we worry as much about people torturing stuffed animals? Kate cites Sherry Turkle, who has found evidence that children do distinguish between robots and stuffed animals. More personally, she tells a story about a woman who told her, “I wouldn’t have any problem torturing a robot – does that make me a bad person?” Kate’s answer, for better or for worse, is yes.
Tim Davies of the Berkman Center offers the idea that Kate’s argument for robot ethics is one of virtue ethics: ethics as the character we have as people. Law generally operates in the space of consequentialist ethics: behavior is illegal because of its consequences, not because of what it reflects about your character. He wonders whether we can move from the language of anthropomorphism around robots to the language of simulation. There are legal cases where simulation of harm is something we consider problematic, for instance, simulated images of child abuse.
Boris Anthony of Nokia and Ivan Sigal of Global Voices (okay, let’s be honest – they’re both from Global Voices) both ask about cultural conceptions of robots through science fiction – Boris references Japanese anime and suggests that Japanese notions of privacy may be very different from American notions; Ivan references Philip K. Dick. Kate notes that, in scifi, lots of questions focus on the inherent qualities of robots. “Almost Human”, a near-future show that posits robots that have near-human emotions, is interesting, but not very practical – we’re not going to have those robots any time soon. Issues of projection are going to happen far sooner. In the story that becomes Blade Runner, the hero falls in love with a robot who can’t love him back, and he loves her despite that reality – that’s a narrative that had to be blurred out in the Hollywood version because it’s a very complex question for a mainstream movie.
Chris Peterson opens his remarks by noting that he spent most of his teenage years blowing up Furbies in the woods. “Was I a sociopath, a teenager in New Hampshire, or are the two indistinguishable?” Kate, whose Center for Civic Media portrait features her holding a flayed Furby shell, absolves Chris: “Furbies are fucking annoying.” Chris’s actual question focuses on the historical example of European courts putting inanimate objects on trial, citing a case where a Brazilian colonial court put a termite colony on trial for destroying a church (and the judge awarded wood to the termites, who had been wronged in the construction). Should emergent, autonomous actors with potentials not intended by their designers have legal responsibilities? “Should the high frequency trading algorithm that causes harm be put to death? Do we distinguish between authors and their systems in the legal system?” Kate suggests that we may have a social contract that allows the vengeance of destroying a robot that we think has wronged people, but notes that we also try to protect very young people from legal consequences.
Bruce Schneier is one of the world’s leading cryptographers and theorists of security. Jonathan Zittrain is a celebrated law professor, theorist of digital technology and wonderfully performative lecturer. The two share a stage at Harvard Law School’s Langdell Hall. JZ introduces Bruce as the inventor of the phrase “security theatre”, author of a leading textbook on cryptography and subject of a wonderful internet meme.
The last time the two met on stage, they were arguing different sides of an issue – threats of cyberwar are grossly exaggerated – in an Oxford-style debate. Schneier was baffled that, after the debate, his side lost. He found it hard to believe that more people thought that cyberwar was a real threat than an exaggeration, and realized that there is a definitional problem that makes discussing cyberwar challenging.
Schneier continues, “It used to be, in the real world, you judged the weaponry. If you saw a tank driving at you, you knew it was a real war, because only a government could buy a tank.” In cyberwar, everyone uses the same tools and tactics – DDoS, exploits. It’s hard to tell if attackers are governments, criminals or individuals. You could call on almost anyone to defend you – the police, the government, the lawyers. You never know who you’re fighting against, which makes it extremely hard to know what to defend. “And that’s why I lost”, Schneier explains – if you use a very narrow definition of cyberwar, as Schneier did, cyberwar threats are almost always exaggerated.
Zittrain explains that we’re not debating tonight, but notes that Schneier appears already to be conceding some ground in using the word “weapon” to explore digital security issues. Schneier’s new book is not yet named, but Zittrain suggests it might be called “Be afraid, be very afraid,” as it focuses on asymmetric threats, where reasonably technically savvy people may not be able to defend themselves.
Schneier explains that we, as humans, accept a certain amount of bad action in society. We accept some bad behavior, like crime, in exchange for some flexibility in terms of law enforcement. If we worked for a zero murder rate, we’d have too many false arrests, too much intrusive security – we accept some harm in exchange for some freedom. But Bruce explains that in the digital world, it’s possible for bad actors to do asymmetric amounts of harm – one person can cause a whole lot of damage. As the amount of damage a bad actor can create increases, our tolerance for bad actors decreases. This, Bruce explains, is the weapon of mass destruction debate – if a terrorist can access a truly deadly bioweapon, perhaps we change our laws to radically ratchet up enforcement.
JZ offers a summary: we can face doom from terrorism or doom from a police state. Bruce riffs on this: if we reach a point where a single bad actor can destroy society – and Bruce believes this may be possible – what are the chances society can get past that moment? “We tend to run a pretty wide-tail bell curve around our species.”
Schneier considers the idea that attackers often have a first-mover advantage. While the police do a study of the potentials of the motorcar, the bank robbers are using them as getaway vehicles. There may be a temporal gap when the bad actors can outpace the cops, and we might imagine that gap being profoundly destructive at some point in the near future.
JZ wonders whether we’re attributing too much power to bad actors, implicitly believing they are as powerful as governments. But governments have the ability to bring massive multiplier effects into play. Bruce concedes that this is true in policing – radios have been the most powerful tool for policing, bringing more police into situations where the bad guys have the upper hand.
Bruce explains that he’s usually an optimist, so it’s odd to have this deeply pessimistic essay out in the world. JZ notes that there are other topics to consider: digital feudalism, the topic of Bruce’s last book, in which corporate actors have profound power over our digital lives, a subject JZ is also deeply interested in.
Expanding on the idea of digital feudalism, Bruce explains that if you pledge your allegiance to an internet giant like Apple, your life is easy, and they pledge to protect you. Many of us pledge allegiance to Facebook, Amazon, Google. These platforms control our data and our devices – Amazon controls what can be on your Kindle, and if they don’t like your copy of 1984, they can remove it. When these feudal lords fight, we all suffer – Google Maps disappears from the iPad. Feudalism ended as nation-states rose and the former peasants began to demand rights.
JZ suggests some of the objections libertarians usually offer to this set of concerns. Isn’t there a Chicken Little quality to this? Not being able to get Google Maps on your iPad seems like a “glass half empty” view given how much technological progress we’ve recently experienced. Bruce offers his fear that sites like Google will likely be able to identify gun owners soon, based on search term history. Are we entering an age where the government doesn’t need to watch you because corporations are already watching so closely? What happens if the IRS can decide who to audit by comparing what they think you should make in a year with what the credit agencies know you’ve made? We need to think this through before it becomes a reality.
JZ leads the audience through a set of hand-raising exercises: who’s on Facebook, who’s queasy about Facebook’s data policies, and who would pay $5 a month for a Facebook that doesn’t store your behavioral data? Bruce explains that the question is the wrong one; it should be “Who would pay $5 a month for a secure Facebook where all your friends are over on the insecure one – if you’re not on Facebook, you don’t hear about parties, you don’t see your friends, you don’t get laid.”
Why would Schneier believe governments would regulate this space in a helpful way, JZ asks? Schneier quotes Martin Luther King, Jr. – the arc of history is long, but it bends towards justice. It will take a long time for governments to figure out how to act justly in this space, perhaps a generation or two, but Schneier argues that we need some form of regulation to protect against these feudal barons. As JZ translates: you believe there needs to be a regulatory function that corrects market failures, like the failure to create a non-intrusive social network… but you don’t think our current screwed-up government can write these laws. So what do we do now?
Schneier has no easy answer, noting that it’s hard to trust a government that breaks its own laws, surveilling its own population without warrant or even clear reason. But he quotes a recent Glenn Greenwald piece on marriage equality, which notes that the struggle for marriage equality seemed impossible until about three months ago, and now seems almost inevitable. In other words, don’t lose hope.
JZ notes that Greenwald is one of the people who’s been identified as an ally/conspirator of Wikileaks, and one of the targets of a possible “dirty tricks” campaign by HBGary, a “be afraid, be very afraid” security firm that got p0wned by Anonymous. Schneier is on record as being excited about leaking – JZ wonders how he feels about Anonymous.
Schneier notes how remarkable it is that a group of individuals started making threats against NATO. JZ finds it hard to believe that Schneier would take those threats seriously, noting that Anon has had civil wars where one group will apologize that their servers have been compromised and should be ignored as they’re being hacked by another faction – how can we take threats from a group like that seriously? Schneier notes that a non-state, decentralized actor is something we need to take very seriously.
The conversation shifts to civil disobedience in the internet age. JZ wonders whether Schneier believes that DDoS can be a form of protest, like a sit in or a picket line. Schneier explains that you used to be able to tell by the weaponry – if you were sitting in, it was a protest. But there’s DDoS extortion, there’s DDoS for damage, for protest, and because school’s out and we’re bored. Anonymous, he argues, was engaged in civil disobedience and intentions matter.
JZ notes that Anonymous, in their very name, wants civil disobedience without the threat of jail. But, to be fair, he notes that you don’t get sentenced to 40 years in jail for sitting at a lunch counter. Schneier notes that we tend to misclassify cyber protest cases so badly, he’d want to protest anonymously too. But he suggests that intentions are at the heart of understanding these actions. It makes little sense, he argues, that we prosecute murder and attempted murder with different penalties – if the intention was to kill, does it matter that you are a poor shot?
A questioner in the audience asks about user education: is the answer to security problems for users to learn a security skillset in full? Zittrain notes that some are starting to suggest internet driver’s licenses before letting users online. Schneier argues that user education is a cop-out. Security is interconnected – in a very real way, “my security is a function of my mother remembering to turn the firewall back on”. These security holes open because we design crap security. We can’t pop up incomprehensible warnings that people will click through. We need systems that are robust enough to deal with uneducated users.
Another questioner asks what metaphors we should use to understand internet security – war? Public health? Schneier argues against the war metaphor, because in wars we sacrifice anything to win. Police might be a better metaphor, as we put checks on their power and seek a balance between freedom and the control of crime. Biological metaphors might be even stronger – we are starting to see thinking about computer viruses influencing what we know about biological viruses. Zittrain suggests that an appropriate metaphor is mutual aid: we need to look for ways we can help each other out under attack, which might mean building mobile phones that work as two-way radios, able to route traffic independent of phone towers. Schneier notes that internet as infrastructure is another helpful metaphor – a vital service, like power or water, that we try to keep accessible and always flowing.
A questioner wonders whether Schneier’s dissatisfaction with the “cyberwar” metaphor comes from the idea that groups like Anonymous are loosely organized groups, not states. Schneier notes that individuals are capable of great damage – the assassination of a Texas prosecutor, possibly by the Aryan Brotherhood – but we treat these acts as crimes. Wars, on the other hand, are nation versus nation. We responded to 9/11 by invading a country – that’s not what the FBI would have done if they had been responding to it. Metaphors matter.
I had the pleasure of sitting with Willow Brugh, who did a lovely Prezi visualization of the talk – take a look!
Jenna Burrell, assistant professor at the School of Information at UC Berkeley, is speaking today at the Berkman Center on her research on internet usage in Ghana, the subject of her (excellent) book Invisible Users: Youth in the Internet Cafes of Urban Ghana. Burrell is an ethnographer and sociologist, and her examination of Ghanaian internet cafes is one of the best portraits of contemporary internet use in the developing world.
Jenna doing fieldwork in Ghana
Her talk today covers some of the work she began in 2004 and published last year, but expands in some new directions, including questions about network security and preserving access at the margins of the global internet. Burrell’s understanding of Ghana has been built up through six years of fieldwork, both on how non-elite Ghanaians use the internet, and on how Ghana’s internet has literally been built from recycled and repurposed computer equipment. She notes that ethnographers are famous for their microfocus. When she published her book, a Facebook friend joked, “How odd, I just finished my book on youth in the internet cafes of suburban Ghana!” Burrell is now interested in some of the broader questions raised by specific cases like the dynamics of Ghana’s cybercafes.
Burrell notes that early conversations about the internet often featured the idea that in online spaces, we transcend our physical limits and are able to talk to people anywhere in the world. Our race and gender might become irrelevant or invisible. She suggests that just at the point where real cross-cultural connection was starting to unfold online, discourse about a borderless internet became unfashionable. We might benefit from returning to some of these ideas of borderlessness and encounter in places where these encounters are really taking place.
Ghana’s internet cafes are an excellent space to explore how this connection works in practice, as much of what takes place in these cafes is centered on international contact. Ghana’s “non-elite” net youth culture – i.e., the young people accessing the internet via cybercafes, not the digerati who are accessing the net through computers in their homes – centers around the idea of the “pen pal”, an analog concept adapted for a digital age. Many Ghanaian students have interacted with pen pals via paper letters, and their encounters in online space often focused on finding a digital pen pal. Most participating in this culture were English-literate, had at least a high school education and had probably stopped going to school when they ran out of funds. They sought out pen pals for a variety of reasons: as friends, as potential romantic partners, as patrons or sponsors, business partners, or as philanthropists who might fund their future education or emigration.
Much of Burrell’s work has focused on talking to cybercafe users about their stories and motivations. Understanding the gaps in how they perceive the people they are talking with on Yahoo chat or other tools helps illuminate the challenge of cultural encounter. One group of cybercafe youth were collectors. They had applied for British Airways Executive Club membership – the airline’s frequent flyer program – and called themselves “The Executive Club”, reveling in the membership cards the airline had sent. They collected religious CDs and bibles from the people they encountered online. Another Ghanaian participant in Christian chat rooms on Yahoo! complained that his conversation partners didn’t understand his needs and motivations – he was looking for contacts and potential business partners and figured that Christians would be trustworthy people to work with, but was frustrated that they only wanted to talk about the bible. A third person she observed explained, “I take pen pals just for the exchange of items and actually I don’t take my size. I take sugar mommies and sugar daddies…” In other words, he was looking specifically for conversations that led to people giving gifts.
This sounds like a path from conversation into internet scamming, but Burrell warns us not to jump to conclusions. Gift-giving is very common in Ghanaian culture, and while gifts are small, they are important and usually reciprocal. Some of her Ghanaian informants couldn’t understand why asking for a gift chased their conversation partners away. Fauzia, who had been chatting with a man on Yahoo!, asked him to send her a mobile phone. Not only did he stop talking to her, he performed a complicated “dance of avoidance”, logging off when he saw her log on. Another informant, Kwaku, was talking with a Polish woman about seeking a travel visa and couldn’t understand why she wouldn’t let him stay in her home in Poland. Again, the cultural discontinuity is important – if you traveled to see a friend in their village, you would expect that they would share their home with you and provide a place for you to sleep.
Burrell suggests that there are basic misunderstandings between Ghanaian and North American/European culture around gender and communication norms, the moral economy of gifting and notions of obligation and hospitality. In addition, these cultural discontinuities are complicated by material asymmetries, simplistic perceptions of western wealth and African poverty, and the fact that Ghanaians are often paying for net connectivity by the minute, leading to rushed and high pressure encounters.
When cross-cultural encounters go badly, people seek to block further contact. Networks like Facebook make it very easy to block an individual from contacting you. But Burrell sees the internet moving from simple blocking and banning to “encoded exclusion”, the automatic exclusion of entire countries from being able to access certain servers and services. Dating websites, in particular, have taken to blocking and banning Ghanaians and Nigerians entirely, because they use the websites in ways that the site’s creators hadn’t expected or intended.
Working from Ghana for almost a decade, Burrell has found that it’s often difficult to engage in basic online tasks from that country because sites and services exclude based on geolocation. Based on her experiences and that of her informants, she posits two types of exclusion: failure to include, and purposeful exclusion.
Ecommerce is a space where failure to include is pretty common. Ecommerce is a credit-card based world, while many African economies, including Ghana’s, are largely cash based. Even for Ghanaians who have the money to buy online services, there’s often no easy way to make an online payment. This becomes a rationalization for credit card fraud: Ghanaians who want to participate on match.com, which has a modest membership fee, rationalize using a stolen credit card as a way of gaining access to a space that’s otherwise closed. There’s also an unfair stigma attached to cash-based transactions, she posits. Some media coverage of Umar Farouk Abdulmutallab, the Nigerian underwear bomber, focused on the fact that he’d purchased his air ticket in Ghana, paying cash. US authorities suggested that paying cash was evidence of bad intent, and some suggested waiting periods and extra scrutiny for cash payments. Burrell suggests that this is simply how Ghana’s economy works at present, and that using cash payments as a signal for possible terrorist behavior is a form of failure to include.
Purposeful exclusion also comes into play in ecommerce. Burrell discovered that trying to purchase a product on Amazon from Ghana triggered a set of “forced detours” that made purchasing impossible. Once Amazon detected her login from Ghana, the site immediately reset her password and began sending her phishing warnings. Paypal uses similar techniques – when she tried to sign up for a sewing class in Oakland (to make something out of the beautiful batik she was buying in Ghana), PayPal told her that they didn’t serve customers in Ghana or Nigeria, and started a set of security checks that led to phone verification to her US phone, which didn’t work in Ghana. These extended loops of checks are a huge frustration to the Ghanaians who have the means and tools to participate in these economies. As Ghanaian-born blogger Koranteng noted in an excellent blog post, “If we take ecommerce as one component of modern global citizenship then we are illegal aliens of sorts, and our participation is marginal at best.”
Other blocks are more explicit. Plentyoffish.com, a popular, no-fee dating site, briefly ran a warning that stated that they block traffic from Africa, Romania, Turkey, India, Russia “like every other major site”. The warning was removed, but the site is still inaccessible from Ghana.
Search for “IP block Ghana” or “IP block Nigeria” and you’ll find posts on webmaster fora asking for advice on how to exclude whole nations from the internet. She offers three examples:
From Webmaster World: “I am so fed up with these darn African fraudsters, is it possible to block african traffic by IP”
From a Unix security discussion group: “Maybe we could just disconnect those countries from the Internet until they get their scam artists under control”
From a Linux admin tips site: “I admin an [ecommerce] website and a lot of bogus traffic comes from countries that do not offer much in commercial value.”
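Part of why this kind of blanket exclusion proliferates is that it’s technically trivial: a webmaster matches each visitor’s IP address against a country’s allocated address ranges and drops anything inside them. A minimal sketch of the technique, for illustration only – the CIDR ranges below are made up, not real national allocations; actual deployments pull ranges from a GeoIP database:

```python
import ipaddress

# Hypothetical example ranges. Real country blocklists come from a GeoIP
# database and can contain hundreds of prefixes per country.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("41.66.192.0/18"),   # illustrative "blocked country" range
    ipaddress.ip_network("105.112.0.0/12"),   # another illustrative range
]

def is_blocked(client_ip: str) -> bool:
    """Return True if the client's IP falls inside any blocked range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("41.66.200.10"))   # inside the first range -> True
print(is_blocked("93.184.216.34"))  # outside both ranges -> False
```

The bluntness is exactly the “encoded exclusion” Burrell describes: a single /12 covers over a million addresses, so every legitimate user behind it is shut out along with the fraudsters.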
Legitimate frustration over fraud leads to overbroad attempts to crack down on this fraud. Burrell’s research involved working with a British woman who lost $100,000 to scams in Ghana – the woman came to Ghana to seek justice and Burrell attended court hearings with her. She suggests that while there’s likely corruption within the Ghana police service, the judges and lawyers she met were genuinely worried about scamming and looking for ways to crack down on the activity. But the perception remains that Ghana isn’t doing enough to protect the rest of the world from its least ethical internet users. This, in turn, has consequences for Ghana’s many legitimate users.
She leaves the group with a series of questions:
- How do we consider inclusiveness as one of the principles to strive for in network security best practices?
- How do we investigate and make visible the consequences of network security practices at the margins of the internet?
- When is country-level IP address blocking appropriate?
These questions lead to a lively discussion around the Berkman table. Oliver Goodenough wonders whether the practices Burrell is describing parallel redlining, the illegal practice of denying certain services or overcharging for them in neighborhoods with high concentrations of citizens of color. But another participant wonders whether we’re being unfair and suggests that using concepts like “censorship” to discuss online exclusion is unfairly characterizing what might simply be wise business practice. “Should a company be compelled to do business in a country where there’s no legal infrastructure to adequately protect it?” Jerome Hergueux argues that global trade follows trust, and that the desire to exclude these countries may be seen as a vote that there’s no trust in how they do business. Burrell notes that there are patterns of media coverage that contribute to why we don’t trust Ghanaians, and that those perceptions might not be accurate.
I’m deeply interested in the topics Burrell brings up in this talk. I’ve experienced the purposeful exclusion Burrell talks about, both in trying to do business from west Africa, and in my travels back and forth – I routinely bring goods to Ghana and Nigeria that friends in those countries have ordered and sent to my office, because they can’t get them delivered to their homes. It’s very strange when people you’ve met only over Twitter send you iPads so you can bring them to Nigeria… but it is, as Hergueux points out, an interesting commentary on who we trust and who we don’t.
I worry about another form of exclusion that’s mostly theoretical at this point, but possible: what if spaces that are acting as digital public spheres become closed to developing world users? That’s an idea put forward in a New York Times article by Brad Stone and Miguel Helft. Examining Facebook’s efforts to build sites “optimized” for the developing world, they wonder whether companies, desperate to become profitable, will stop serving, or badly underserve, users in countries where there’s little online advertising, like Nigeria and Ghana.
Talking with Burrell after her talk, I wondered whether there’s a hierarchy of needs at work: should we worry more about Facebook banning Nigerian users (no evidence that they will, to be clear) than about Amazon or OkCupid doing so? Are we willing to argue for a global right to online speech, but no global right to online dating? Burrell argued that accessing OkCupid might be more significant in terms of life transformation for a Ghanaian user than accessing Facebook, and suggested that any sort of tiering of access was challenging to think through.
It’s interesting to consider: the Internet Freedom agenda advocated by the US State Department focuses on countries that would block access to the internet to prevent certain types of political speech. But what if the real threat to global internet freedom starts with US companies that don’t see a profit in letting Ghanaian or Nigerian users onto their sites? Anyone want to bet on whether a Kerry State Department will be willing to tell US companies to stop excluding African users?
Friend and fellow Berkmanite Doc Searls is presenting his new book, The Intention Economy, at Harvard this evening. That I’m hosting the event doesn’t stop me from blogging it. Doc’s new book is a manifesto designed to change how we think about vendors, customers, transactions and privacy. It’s also a trenchant critique of the advertising business and retail commerce as we know it. I’ve had great fun watching Doc write it and am excited to see it having an impact out in the world.
Here are my notes on Doc’s talk – I wasn’t able to blog the Q&A due to my lame attempts to moderate.
What is the most embarrassing thing about you? Something you’d only share with a really good friend or a licensed professional? Doc asks us to think about that question while we watch a clip from The Onion News Network, which reports on Facebook as an intelligence agency project, described by undercover agent Mark Zuckerberg as “the single most powerful tool for population control ever invented.”
Privacy is very simple in the material world, but it took us thousands of years to understand how it should work. We created technologies like clothing and houses to give ourselves different degrees of privacy. The internet as we know it, Doc tells us, is 17 years old, dating to the widespread adoption of the graphical web browser. There are no rules of privacy in this new world – privacy simply wasn’t designed into the protocols. The internet is early in its development. “The older I get, the earlier it seems.”
Facebook may know an immense amount about you, Doc tells us, but that’s not the internet – it’s an application. Facebook wasn’t even around 8 years ago – think of it as an experiment. There will be something else that follows on its heels. We should think about these technologies as a social and technical experiment we’re still working through.
Doc tells us we’ve been in a master-slave narrative since 1995. We go to websites for content – milk – and we get something in addition – a cookie. Cookies were invented to maintain state, to allow web servers to track us over time. We, as clients, are dependents, slaves to the servers, and we haven’t broken away from it yet. Thanks to cookies, we’re being followed… and not just by our friends.
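Doc’s milk-and-cookie metaphor maps onto a very simple mechanism. Here’s a minimal sketch of how a server-set cookie maintains state across requests, using only Python’s standard library – the cookie name and the dictionary-shaped request headers are illustrative, not drawn from any real tracking system:

```python
# A toy illustration of cookies "maintaining state": the server mints
# an identifier on a first visit, the browser replays it afterward,
# and the server can now link one visit to the next.
import uuid
from http.cookies import SimpleCookie

def handle_request(request_headers):
    """Return (visitor_id, response_headers) for one HTTP request."""
    cookie = SimpleCookie(request_headers.get("Cookie", ""))
    if "visitor_id" in cookie:
        # Returning visitor: the browser sent back the identifier we
        # set earlier, so this visit links to the previous ones.
        return cookie["visitor_id"].value, {}
    # New visitor: mint an identifier and ask the browser to store it.
    visitor_id = uuid.uuid4().hex
    out = SimpleCookie()
    out["visitor_id"] = visitor_id
    return visitor_id, {"Set-Cookie": out["visitor_id"].OutputString()}

# First visit: no cookie, so the server sets one.
vid, headers = handle_request({})
# Second visit: the browser replays the cookie, and the server
# recognizes the same visitor.
vid2, _ = handle_request({"Cookie": f"visitor_id={vid}"})
assert vid == vid2
```

This is the whole of Doc’s point in miniature: the server sets the state, the client dutifully carries it back, and third-party trackers simply run this same loop across many sites at once.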
He points us to a series called What They Know put together by the Wall Street Journal. Of the top sites on the web, the only one not tracking users is Wikipedia. One site, dictionary.com, sets 234 tracking files on your browser so various companies can understand your online behavior. A browser tool called Ghostery helps you see what’s been set on your system – there’s an amazing list of companies tracking you if you’re an average web user. How these companies use this data matters – he references Rebecca MacKinnon’s new book, Consent of the Networked, which points out that privacy setting changes on a site like Facebook have serious implications for dissidents in a country like Iran.
One of the companies Doc features is Rapleaf, an ad targeting company that collects a great deal of information about users. When Doc requested his Rapleaf file, he discovered that most of what the company thought they knew about him was wrong – he’s married, not single; he didn’t complete grad school; his residence is MA, not CA. These companies claim intimate knowledge of you, and what they have is inaccurate and incomplete.
Doc compares the current model of tracking to toddlers who can’t put on their own clothes. Even the Do Not Track compromise buys into the notion that the servers set cookies on you – you can refuse, but they’re still in charge of dressing you.
Companies like to tell us that you’re in charge – IBM’s website announces your visit as “the Chief Executive Customer”. This is window dressing. “I promise you that there’s not a single customer at their Smarter Commerce Global Summit.” These companies are following you around like a pack of dogs you can’t see. These dogs spend a lot of money – $1.5 trillion a year trying to sell you things. That isn’t going away… but the hundreds of millions being spent on analytics is a bubble. They claim that, through their big data, they know you thoroughly… but the truth is, they know very little.
He references Eli Pariser’s book “The Filter Bubble”, which suggests that internet marketing is based on “a bad theory of you.” Doc suggests that our experience with marketing is literally creepy – these theories are stuck in the uncanny valley. Facebook’s attempts to market to us are downright creepy. An ad, “Boyfriend Wanted – Seniors Meet”, seems based on the misperception that he’s single and the reality that he’s old. It’s not what he wants – it’s what the data thinks he wants.
Doc references corporate loyalty cards as a virus that spreads between species. The loyalty card came into play around 1995, and now it’s spreading into the online space. It’s accompanied by agreements we never made, the impossibly long contracts businesses force us to sign before using our iPhones or online services. Friedrich Kessler calls these “contracts of adhesion”, contracts where one party isn’t free to negotiate the terms. Freedom of contract, the ability to negotiate our terms and come to agreement with others, is a fundamental freedom in a democratic society. But in a broadcast age, we find the rise of mass marketing, and mass services. Individual contracts were no longer possible – instead, we had contracts that one side built and the other was required to accept. It’s a bad idea that came into the online world with little questioning.
So what can we do about this? These problems were evident in 1999, when Doc and colleagues wrote The Cluetrain Manifesto. He points to a Chris Locke quote: “We are not seats or eyeballs or end users or consumers. We are human beings – and our reach exceeds your grasp. Deal with it.” Tragically, this wasn’t true. While Cluetrain became very popular, this idea never caught on – our reach never exceeded our grasp in online spaces.
In the hopes of realizing some of these ambitions, Doc came to Berkman in 2006 and started working on Project VRM. VRM is a term he didn’t coin – it emerged from the community he brought together, as an alternative to “customer relationship management”, a massive industry. VRM is not yet at the scale of CRM, but it’s starting to have impact.
With VRM, Doc tells us, the customer drives. A car is a good example of a VRM tool – it gives us choice, independence and privacy. It’s a way of relating to the world and to commerce. An infrastructure has grown up around it – parking spaces, drive-thru restaurants. The car could never have been invented by a railroad. Anyone running a server alone – Google, Facebook – is in the railroad business.
VRM allows customers to define their own terms of service, define what loyalty means, control the use of their own data, manage their relationships with vendors, and do all of this themselves, or through “fourth parties”, third parties who work for you, not the vendor. A fourth party is a buyer’s agent who works for you, a lawyer who represents your interests in the face of other institutions.
Fourth parties in the VRM community include TrustFabric, Singly, Azigo, The Customer’s Voice and several other startup firms. Doc focuses on Privowny, a French company that’s part of the VRM movement. Privowny gives you a vault for your data, which you can share with companies… but they can’t see your data, as it’s encrypted. Privowny provides a menu bar to help you manage your relationships with other sites, turning tracking on or off, and relating in new and better ways with the vendors of the world. With your permission and control, you can make yourself available as a qualified lead… but it’s your choice to do so.
What’s critical is the ability to set your own terms of service: “Don’t track me outside your site or service”, “Give me my data in the usable form I specify”, and so on. Your personal data is in the cloud, but you can control how various servers use it, rather than ceding that responsibility to those servers.
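One way to picture user-set terms of service is as machine-readable rules that a vendor’s request gets checked against before any data changes hands. This is a hypothetical sketch – the term names and the vendor request format are invented for illustration, not drawn from any actual VRM tool:

```python
# Hypothetical user-side terms of service, expressed as data the
# user (or a fourth party acting for the user) controls.
my_terms = {
    "track_off_site": False,       # "Don't track me outside your site"
    "data_export_format": "json",  # "Give me my data in the form I specify"
    "share_email": False,
}

def request_allowed(vendor_request, terms):
    """Grant a vendor's request only if it stays within the user's terms."""
    if vendor_request.get("track_off_site") and not terms["track_off_site"]:
        return False
    if vendor_request.get("wants_email") and not terms["share_email"]:
        return False
    return True

# A vendor asking to follow the user across the web is refused...
assert not request_allowed({"track_off_site": True}, my_terms)
# ...while an on-site-only relationship is fine.
assert request_allowed({"track_off_site": False}, my_terms)
```

The inversion is the point: in this sketch the user’s terms are the contract and the vendor is the party that must accept or walk away – the mirror image of the contract of adhesion.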
The global money transfer network SWIFT is encouraging people to think of digital assets as a form of money. Following this thought, you end up with ideas like escrowing your intention to make a certain purchase, and perhaps advertising your willingness to do so in order to find someone who wants to do business with you.
Project VRM is still starting, even six years into the project. It’s still the space of the innovators, not yet the space of early adopters. The end state – the late majority – is the “Intention Economy”, a space where your intentions are clear, and advertisers and vendors don’t have to guess at what you want.
David Weinberger‘s new book “Too Big To Know” (#2B2K – be sure to pick book titles that make good hash tags…) launched last night at Harvard Law School with a talk entitled “Unsettling Knowledge”. If you know David’s work, it’s obvious that the title is a pun. And David’s new book is a wonderfully unsettling piece – it challenges our notion of what knowledge is, and introduces the uncomfortable question of how we navigate this new space.
Knowledge as we know it is coming apart, David tells us. The bastions of knowledge, the physical emblems of knowledge, like encyclopedias, newspapers and libraries are undergoing radical transformation. We know we’re heading into a future that’s deeply different, though we don’t know quite how. The manifestations of knowledge are at risk, and all it took was the touch of a hyperlink.
How did these institutions fall apart so quickly? It’s an impossible question to answer, but he offers one path through the thicket. He starts with a famous quote from Daniel Patrick Moynihan, who tells us “Everyone is entitled to his own opinion, but not his own facts.” This is the promise of knowledge: that if we all got together and had an honest conversation, we could eventually come to an agreement. There is knowledge, and it can bring us together.
We tend to assume that knowledge gives us an accurate picture of the world, built up bit by bit, fact by fact. In acquiring knowledge, we nail down each piece with certainty. And we see knowledge as a product of filtering and winnowing – we move from perception to true perception, from a mob of opinion to true belief. Knowledge is about finding gold within the flux.
We’ve always had to filter, based on the fact that the world is way bigger than what fits in our skulls. There’s too much to know (quoting Ann Blair’s book “Too Much to Know”) and the world is too big to know.
Traditionally, we’ve handled this by breaking off a brain-sized chunk of the world and getting an expert to understand it. Once we’ve got that expert, we can stop asking questions: we simply ask the expert. Experts, and the credentials that create them, are stopping points. They’re points beyond which we don’t need to look any further.
But that’s how knowledge works on paper. Books, for all their magnificence, are a disconnected medium. They are contained within covers, they are shelved apart, they don’t naturally connect to one another. The author’s job is to put everything she knows on a topic between two covers. The arguments move in sequence, from the beginning to the conclusion. And because the book is an essentially limited medium, good writers ruthlessly cast things aside, deciding what to put in the book and what to exclude. Books are born of long-form arguments, moving us forward step by step, brick by brick.
Links are a new form of punctuation. They give you a means of continuing. In the print world, to follow a footnote in a book, you need to get on a bus and go to the library. That’s why we don’t generally follow footnotes. But now we can jump from one book to the next. It’s a magic map – touch a place on the map and you go there.
The internet is an environment that’s all about connection and our knowledge is picking up properties of the medium. Knowledge in this space is characterized by the fact that it’s “too much, messy, unsettled, and unstructured”.
Clay Shirky suggests that there’s no such thing as information overload, only filter failure. This is a very modern response to an older question. Futurist Alvin Toffler warned us about information overload, popularizing the phrase. It’s an extension of the idea of sensory overload, the idea that too much input could overwhelm and paralyze you. This is based on the faulty assumption that brains are information processing machines, and that we can overwhelm and crash them.
This line of thinking led marketers to conclude that choosing between 16 brands would be overwhelming to an American housewife and that fewer choices needed to be offered. But we’re now headed to a point where there’s an exabyte of genomic information available, and that number doesn’t lead us to paralysis, but to fascination. We’ve redefined the term “information overload” through how we use it.
We’re less overwhelmed because we’re learning different ways to filter. When we filtered in the print world, we did so in a way that prevented us from seeing the dregs. We saw only the books that our local library chose to buy, and only the books the publisher chose to print. The manuscripts filtered out of that process were impossible to retrieve through ordinary means.
Now, in a digital age, we filter forward, not filter out. All that information – some of it very low quality – is out there somewhere on the internet. We could curate and try to delete the stuff that’s wrong, hurtful, harmful or hateful. But it’s expensive to exclude information and cheaper to include everything. When you curate, you’re making decisions about what is interesting to your users, and no one can accurately predict what might be useful to a researcher in the future. Filter out all the gossip and crap from new media and you harm the scholar who wants to study celebrity behavior. You couldn’t have predicted the high level of interest in notes from a committee meeting in Wasilla, Alaska in 2008 until Sarah Palin became a public figure.
The web has worked by developing tools that include all content and filter when we retrieve it. As recently as a decade ago, information retrieval experts told us that ordinary users would never use tools this complicated. But now we use them every day, because we have to. And we’re seeing much better tools, like ShelfLife, the tool Harvard’s Library Lab has created to allow users to browse the vast set of information in Harvard’s library systems.
We don’t just have a lot of information – the information is very messy. We like order – David shows a slide of zoological specimens, beetles mounted on pins – and we’re very good at establishing it. We understand where everything fits in a tree of species, based on similarities and differences. To know where a species fits into this tree was to know how the world works – to not know it was to be adrift.
In the physical world, there’s only one way to sort physical manifestations of information. You might want to sort your CDs by artist, while your partner might want them sorted by genre. There’s only one possible way they can be stacked on the shelf, because no two things can be in the same place at the same time. In a digital age, we simply make playlists. We end up with a mess of information, but it’s a rich and fertile mess.
Figuring out where things fit in the natural order of things was an essential piece of being human. Human beings saw ourselves as “the knowers.” But there are multiple orders and multiple ways of categorizing, through tags, playlists and other ways to sort information. Messiness is an essential feature of how we scale meaning. But, David warns, we still tend to think of knowledge in the ways we did when books had to sit in a single place on the shelf, when knowledge had a single, possible, right form, rather than multiple forms.
Knowledge is too big, messy and wildly unsettled, just like the internet. “For every fact on the internet, there is an equal and opposite fact.” David warns that there is nothing we all agree on – you can find someone willing to argue that 2+2 is not 4 (and, indeed, a quick Google search shows this to be true.) We don’t agree about anything, and David warns, we never will. “This doesn’t mean there are no facts – but it does mean that people are going to insist on being wrong.”
What this persistence of disagreement means is that the promise of knowledge Moynihan offers – that we can agree on a set of facts and then argue our opinions – is not going to be fulfilled. As it turns out, we don’t even know the exact wording of Moynihan’s remark – whether he said “everyone is entitled to his own opinion, but not his own facts” or some variant of it.
The good news is that we’re rapidly developing ways of dealing with difference and disagreement. YouTube has a crummy commenting system, as is well documented and well established. David shows us a thread of comments on a recent Batman movie trailer. Somewhere deep in this comment thread is an impassioned argument about circumcision. It would have been great if YouTube supported forking of conversations. Forking is a powerful way to deal with disagreement. It’s very hard to do in the real world without social consequences – if we decide to move away from the dinner party to our own table where we talk about circumcision, it makes people uncomfortable – but it’s very easy to do this on the web.
In the 19th century, it was very challenging to classify the platypus. There was one space in a taxonomy for warm-blooded animals, and another for animals that produce eggs. Scientists thought the platypus must be a hoax, because it didn’t fit within existing categories. Even when presented with a specimen from Tasmania with eggs intact, they fought the platypus “hoax” as something that didn’t work within existing categories.
Now we can solve problems of overly rigid taxonomies by using linked namespaces. We can create a database of names, and a database of taxonomies. We can deal with the platypus and the water mole, and map scientific and colloquial names onto different possible structures. “Pick your name, pick your taxonomy and get on with your life. So what if we disagree? Yay for difference!”
David is actually quite concerned about difference, and just how much difference we can tolerate and still interact and function. He acknowledges that there’s a human tendency towards homophily, flocking together in groups united by race, gender, belief, socioeconomic status, etc. This can lead to a serious challenge to public discourse – echo chambers that can solidify beliefs, making them more extreme and polarized. But David worries that posing issues this way relies on an unquestioned assumption: that conversations are between people who disagree deeply and are looking for solutions and common ground by trying to get to the facts. This analysis misses the social role of conversation. We need so much context and so much agreement to even have a conversation. “To have a good conversation, you need to have 99% similarity and 1% difference.” He suggests that some of the work Yochai Benkler and I have been doing may help us find productive paths towards including difference, but reminds us that the high level of disagreement and the difficulty of finding common ground is likely a core feature of the internet and knowledge in an internet age.
Finally, knowledge in this new paradigm is unstructured. We’re used to the idea that knowledge has a basic structure. We have grown used to long form arguments that take us from A to Z, and we’re particularly fond of arguments that take us from A to Z in an orderly path, where Z is an unexpected place to end up. “This is a magnificent form of thought, but the long form argument is losing its preeminence.”
We might think of Darwin as a leading proponent of the long form argument. And his argument certainly led somewhere unfamiliar. But he wouldn’t have analyzed data for years and released a massive book if he were working today. He would publish online. And even if he didn’t, the conversation about his work would be based online. Whether or not we imagine Darwin tweeting from The Beagle, the web is where the thinking about and reacting to Darwin’s work would take place, and collectively, it would have more value than Darwin’s long form work taken alone. Moving forward, we will not just see these long form works, but the webs that precede and follow them.
Michael Nielsen has recently written about scholarly community reaction to results at CERN that offer evidence for faster-than-light neutrinos. As these results came in, they were posted to arXiv.org, a preprint server. They stirred up a firestorm of interest and reactions. Some of those reactions are brilliant, some are stupid and wrong. But that welter of discussion is where knowledge is – it’s taking place outside of printed peer review journals.
Darwin spent seven years studying and dissecting barnacles before working on The Origin of Species. His two volume work on barnacles includes countless facts, and his hard work to discover and pin them down was an act of nobility. But science doesn’t work quite like that anymore. We work with clouds of data about genetics, astronomy, and other topics. These data clouds are fundamentally different than facts. When data.gov released sets of government information, they didn’t clean or normalize it ahead of time – they released raw data. They concluded that it was better to put the data out there than to constrain themselves to information that was consistent and known, for the simple reason that this constraint would have slowed them down badly. Darwin would not have agreed – he spent seven years on one fact.
There’s value in getting the data out quickly, David argues. It may be the one approach that’s scalable – releasing raw data and letting individuals and groups clean, analyze and share what they find. Peer-reviewed scientific journals don’t scale, but perhaps peer to peer peer review might. We’re seeing growth in the Open Access journal field, particularly in repository spaces where data is released rather than peer reviewed.
One way we can start making sense of these new data sets is through the magic of linked data, a format suggested by Tim Berners-Lee, father of the web. We organize information in triples:
the platypus | lives in | Tasmania
water moles | lay | eggs
When we link triples to a central reference, we can resolve our platypi to water moles and link our triples together. Facts, which used to look like bricks, now look like links.
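The resolution step David describes can be sketched in a few lines. This is an illustrative toy, not a real RDF stack – the triples and the synonym table are invented to mirror the platypus example:

```python
# Facts as (subject, predicate, object) triples, plus a small
# "same as" table that resolves variant names to one canonical
# entity so the triples link together.
triples = [
    ("platypus", "lives in", "Tasmania"),
    ("water mole", "lays", "eggs"),
]

# A central reference: both names point to the same animal.
same_as = {"water mole": "platypus", "platypus": "platypus"}

def resolve(triples, names):
    """Rewrite subjects to canonical names so the triples connect."""
    return [(names.get(s, s), p, o) for s, p, o in triples]

linked = resolve(triples, same_as)
# Both facts now attach to the same subject, "platypus".
facts = {(p, o) for s, p, o in linked if s == "platypus"}
```

Once the names resolve, a fact about the water mole and a fact about the platypus become links in one connected graph – which is the sense in which bricks turn into links.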
David closes by returning to his original question: why were old knowledge systems so fragile? These systems assumed knowledge was bounded, settled, orderly and proceeded step by step. But that’s not what knowledge feels like in the age of the internet. It feels unbounded, overwhelming, unsettled, messy, linked and governed by our interests. And those properties are the properties of what it means to be human in the world.
“Networked knowledge may or may not be truer about the world, but it is truer about knowing… This crazy approach to knowledge feels familiar to us, because it’s how we tend to know.” He closes with an observation that’s both hopeful and unsettling: “What we have in common is a shared world about which we disagree, not a common knowledge we share and can collectively come to.”
I’ve followed David’s work for a long time, and had the pleasure of watching him work through the ideas behind this book – David and I are both part of a group at Berkman that helps colleagues explore book-length projects. While I’m familiar with this line of David’s thought, it was exciting and unsettling to hear him work through these ideas covering the whole arc of the book. I think this may be the most unsettling and radical book David’s put forth. On the one hand, it’s not a surprise that people will disagree on any conceivable fact. But David’s suggestion that we give up on achieving an impossible consensus and proceed with the hard work of getting on with our lives strikes me as challenging and liberating, a very different path than I hear from most activists and advocates. I’m enjoying wrestling with the ideas David puts forth both in this talk and in the paper, and hope lots of readers will take up the challenge as well.