Philosopher Helen Nissenbaum is one of the leading thinkers about the ethical issues we face in digital spaces. Her work on privacy as “contextual integrity” is one of the most powerful tools for understanding why some online actions taken by companies and individuals feel deeply transgressive while others seem normal – we expect online systems to respect norms of privacy appropriate to the context we are interacting in, and we are often surprised and dismayed when those norms are violated. At least as fascinating, to me, is that Nissenbaum doesn’t just write books and articles – she writes code, with a team of longtime collaborators, bringing her strategies for intervention into the world, where others can adopt them, test them out, critique or improve them.
Professor Nissenbaum spoke at MIT last Thursday about a new line of inquiry, the idea of obfuscation as a form of resistance to “data tyranny”. She is working on a book with Finn Brunton on the topic and, to my delight, more software that puts forward obfuscation as a strategy for resistance to surveillance.
Her talk begins by considering PETs – privacy enhancing technologies – building on definitions put forth by Aleecia McDonald. In reacting to “unjust and uncomfortable data collection”, we wish to resist but we do not have the capacity within the systems themselves. We can create privacy enhancing tools as a mode of self-help, and tools that leverage obfuscation fit within the larger frame of PETs, self-help and resistance.
She defines “data tyranny” drawing on work by Michael Walzer, whose work focuses on approaches to ethics in practice: “You are tyrannized to the extent that you can be controlled by the arbitrary decision of others.” Obfuscation, Nissenbaum tells us, fights against this tyranny.
Using her framework of privacy as contextual integrity, from Privacy in Context (2010), she explains that privacy is not complete control of our personal information, nor is it perfect secrecy. Instead, it is “appropriate information flow that is consistent with ideal informational norms.” This contextual understanding is key, she explains, “I don’t believe that a right to privacy is a right to control information about oneself, or that any spread of information is wrong.” What’s wrong is when this flow of information is inconsistent with our expectations given the context. Sharing information about what books we’re searching for with the librarian we asked for help doesn’t mean we’ve consented to share that information with a marketer looking to target advertisements to us – we expect one sort of sharing in that context and are right to feel misused when those norms are bent or broken.
Different privacy enhancing technologies use different strategies. One project Nissenbaum has collaborated on uses cryptography to facilitate resistance. Cryptagram (think Instagram plus crypto) allows you to publish password-protected photos on online social media. The photos appear as a black and white bitmap, unless you have the password (and the Cryptagram Chrome plugin installed). By encrypting the photos (with AES, and rendering the output as a JPEG), you gain finer control over Facebook’s ever-changing privacy settings, and you prevent whoever is hosting your media from attempting to identify faces in photos, or building a more detailed profile of you using information gleaned from the images.
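The shape of that approach can be sketched in a few lines. This is not Cryptagram’s actual code – the real tool uses AES, which Python’s standard library doesn’t ship, so this sketch substitutes a SHA-256 counter-mode keystream as a stand-in, with PBKDF2 deriving a key from the password; all function names here are invented for illustration:

```python
import hashlib
import os

def keystream(key, nonce, length):
    """SHA-256 in counter mode: a stdlib stand-in for the AES
    stream cipher the real tool uses."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_image(pixels, password):
    """Turn image bytes into noise only the password holder can undo."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    nonce = os.urandom(12)
    cipher = bytes(a ^ b for a, b in zip(pixels, keystream(key, nonce, len(pixels))))
    return salt, nonce, cipher

def decrypt_image(salt, nonce, cipher, password):
    """XOR with the same keystream recovers the original bytes."""
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return bytes(a ^ b for a, b in zip(cipher, keystream(key, nonce, len(cipher))))
```

The key design move in the real tool is what happens after encryption: the ciphertext is re-rendered as an ordinary JPEG, so the “scrambled” image survives a social network’s upload pipeline like any other photo.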
Other PETs use data obfuscation as their core tool of resistance. Nissenbaum defines obfuscation as “The production, inclusion, addition or communication of misleading, ambiguous, or false data in an effort to evade, distract or confuse data gatherers or diminish the reliability (and value) of data aggregations.” In other words, obfuscation doesn’t make data unreadable, it hides it in a crowd.
Good luck finding Waldo now.
TrackMeNot is a project Nissenbaum began in 2006 in collaboration with Daniel Howe and Vincent Toubiana. It was a reaction to a Department of Justice subpoena that sought search query data from Google as a way of documenting online child pornography, as well as the deanonymization of an AOL data set, suggesting that individuals could be personally identified on the basis of their search queries. “The notion that all searches were being stored and held felt shocking at the time,” Nissenbaum explained. “Perhaps we all take it for granted now.”
Her friends in computer science departments told her “there’s nothing you can do about it”, as Google is unlikely to change its policies about logging search requests. So she and her colleagues developed TrackMeNot, which sends a large number of fake search queries to a search engine along with the valid query, then automatically sorts through the results the engine sends back, presenting only the results for the valid query. The tool works within Firefox, and she reports that roughly 40,000 people use it regularly. You can watch what queries the tool is sending out, or choose your own RSS feed to generate queries from. (From the presentation, it looks like, by default, the tool subscribes to a newspaper, chops articles into n-grams and sends random n-grams to Google or other engines as “cover traffic”.)
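The “cover traffic” mechanism described above can be sketched in a few lines. This is my guess at the idea, not TrackMeNot’s actual code; the function names and parameters are invented:

```python
import random

def cover_queries(article_text, count=5, max_n=3, rng=None):
    """Chop article text into words and emit random n-grams as decoy queries."""
    rng = rng or random.Random()
    words = article_text.split()
    queries = []
    for _ in range(count):
        n = rng.randint(1, max_n)                  # decoy length: 1 to 3 words
        start = rng.randrange(len(words) - n + 1)  # random slice of the article
        queries.append(" ".join(words[start:start + n]))
    return queries

# Interleave the real query with decoys so server logs can't easily tell them apart
decoys = cover_queries("the quick brown fox jumps over the lazy dog", count=4)
traffic = decoys[:2] + ["my real query"] + decoys[2:]
```

The design bet is that n-grams drawn from real prose look statistically like real searches, which is exactly the property the security critics below challenge.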
The tool has prompted many reactions, including objections that TrackMeNot doesn’t work or that it is unethical. Security experts have suggested to her that search engines may be able to sort the wheat from the chaff, filtering out her fake queries. As a philosopher, Nissenbaum finds the questions about the ethics of obfuscation at least as interesting.
Obfuscation, she tells us, is everywhere. It’s simply the strategy of hiding in plain sight. She quotes G.K. Chesterton, from The Innocence of Father Brown: “Where does a wise man hide a leaf? In the forest. But what does he do if there is no forest? He grows a forest to hide it in.”
With Finn Brunton, she has been investigating historical and natural examples of leaf-hiding and forest-growing strategies. Some are easy to understand: if you don’t want your purchases tracked by your local supermarket, you can swap loyalty cards with a friend, obfuscating both your profiles. Others are more technically complicated. During the Second World War, bomber pilots began releasing chaff – black paper backed with aluminum foil – before dropping their payloads. The reflective paper obscured their signal on radar, showing dozens or hundreds of targets instead of a single plane, making it possible for bombers to evade interception.
Programmers and hardware engineers routinely obfuscate code to make it difficult to replicate their work. (As someone who has managed programmers for two decades, I think unintentional obfuscation is at least as common as intentional…) But some of the best examples of obfuscation are less technical in nature. Consider the Craigslist robber, who robbed a bank in Monroe, Washington in 2008 and got away by fading into a crowd. He’d placed an ad on Craigslist asking road maintenance workers to show up for a job interview dressed in a blue shirt, a yellow vest, safety goggles and a respirator mask. A dozen showed up, and the robber – wearing the same outfit – was able to get away.
Nature obfuscates as well. The orb spider, who needs her web out in the open to catch prey, must avoid becoming prey herself. She builds a decoy of dirt, grass and other material in the web, exactly her own size, hoping to lure wasps into attacking the diversion instead of her.
Does obfuscation work? Is it ethical? Should it be banned? Examples like the orb spider suggest that it’s a natural strategy for self-preservation. But examples like Uber’s technique of calling Gett and Lyft drivers for rides, standing them up and then calling to recruit them, which Nissenbaum cites as another example of obfuscation, raise uncomfortable questions.
“What does it mean to ask ‘Does it work?'” asks Nissenbaum. “Works for what? There is no universally externalizable criterion we can ask for whether obfuscation works.” Obfuscation could work to buy time, or to provide plausible deniability. It could provide cover, foil profiling, elude surveillance, express a protest or subvert a system. Obfuscation that works for one or more criteria may fail for others.
The EFF, she tells us, has been a sometimes fierce critic of TrackMeNot, perhaps due to their ongoing support for Tor, which Nissenbaum makes clear she admires and supports. She concedes that TrackMeNot is not a tool for hiding your identity as Tor is, and notes that they’ve not yet decided to block third-party cookies with TrackMeNot, a key step to providing identity protection. But TrackMeNot is working under a different threat model – it seeks to obfuscate your search profile, not disguise your identity.
She and Brunton are working on a taxonomy of obfuscation, based on what a technique seeks to accomplish. Radar chaff and the Craigslist robber are examples of obfuscation to buy time, while loyalty card swapping, Tor relays and TrackMeNot seek to provide plausible deniability. Other projects obfuscate to provide cover, to elude surveillance or as a form of protest, as in the apocryphal story of the King of Denmark wearing a yellow Star of David and ordering his subjects to do so as well to obscure the identity of Jewish Danes.
For Nissenbaum, obfuscation is a “weapon of the weak” against data tyranny. She takes the term “weapon of the weak” from anarchist political scholar James C. Scott, who used the term to explain how peasants in Malaysia resist authority when they have insufficient resources to start a rebellion. Obfuscatory measures may be a “weak weapon of the weak”, vulnerable to attack. But these methods need to be considered in circumstantial ways, specific to the problem we are trying to solve.
“You’re an individual interacting with a search engine”, Nissenbaum tells us, “and you feel it’s wrong that your intellectual pursuit, your search engine use, should be profiled by search engines any more than we should be tracked in libraries.” The search engines keep telling you “we’re going to improve the experience for you”. How do we resist? “You can plead with the search engine, or plead with the government. But the one window we have into the search engine is our queries.” We can influence search engines this way “because we are not able to impose our will through other ways.”
This tactic of obfuscation as the weapon of the weak is one she’s bringing into a new space with a new project, Ad Nauseum, developed with Daniel Howe and Mushon Zer-Aviv. The purpose of Ad Nauseum is pretty straightforward: it clicks on all the ads you encounter. It’s built on top of Adblock Plus, but in addition to blocking ads, it registers a click on each one, and also collects the ads for you, so you can see them as a mosaic and better understand what the internet thinks of you.
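The core loop, as described, is simple enough to sketch. This is a toy model of the behavior, not the extension’s actual code (which runs inside the browser’s ad-blocking hooks); all names here are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Ad:
    url: str       # the click-through target
    creative: str  # the image shown in the user's mosaic

@dataclass
class AdNauseumSketch:
    collected: list = field(default_factory=list)

    def on_ad_blocked(self, ad, click):
        """Called whenever the blocker removes an ad from the page."""
        click(ad.url)              # quietly register a click with the ad network
        self.collected.append(ad)  # keep the creative for the user's mosaic
```

The interesting design choice is that the two halves serve different ends: the click pollutes the advertiser’s data, while the mosaic makes the profiling visible to the person being profiled.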
Again, Nissenbaum asks us to consider obfuscation as comprising many strategies aimed at many ends. These strategies differ in terms of the source of the obfuscation, the amount and type of noise in the system, whether targets are selective or general, whether the approach is stealthy or bald-faced, who benefits from the obfuscation, and the resources of the adversary you are trying to hide from. Ad Nauseum is bald-faced, general, personal in source (though it benefits from cooperation with others) and is taking on an adversary that is less powerful than the NSA, but perhaps not much less powerful.
Aside from questions of whether this will work, Nissenbaum asks if this is ethical. Objections include that it’s deceptive, that it wastes resources, damages the system, and enables free riding on the backs of your peers. In an ethical analysis, she reminds us, ends matter, and here the ends are just: eluding profiling and surveillance, preserving ideal information norms. This is different from robbing a bank, destroying a rival, or escaping a predator.
But means matter too. Ethicists ask if means are proportionate. If there is harm that comes from obfuscation, can another method work as well? In this case, that other method can be hard to see. Opting out hasn’t worked, as the Do Not Track effort collapsed. Transparency is a false solution, as companies already flood us with data about how they’re using our data, leading us to accept policies we don’t and can’t read. Should we shape corporate best practice? That’s simply asking the fox to guard the henhouse. And changing laws could take years if it ever succeeds.
In exploring waste, free riding, pollution, damage or subversion, Nissenbaum tells us, you must ask “What are you wasting? Who’s free riding? What are you polluting? Whose costs, whose risks, whose benefits should we consider?” Is polluting the stream of information sent to advertisers somehow worse than polluting my ability to read online without being subjected to surveillance?
Big data shifts risks around, Nissenbaum tells us. As an advertiser, I want to spend my ad money wisely. Tracking users shifts my risk in buying ads. The cost is backend collection of data, which places people at risk: think of recent revelations from Home Depot about stolen credit card information. Databases that are collected for the public good, for reasons like preventing terrorism, may expose individuals to even greater risk. We need a conversation about whether there are greater goods to protect than just keeping ourselves free of terrorism.
We can understand weapons of the weak, Nissenbaum tells us, by understanding threat models. We need to study the science, engineering and statistical capabilities of these businesses. In the process we discover “enabling execution vectors”, ways we can attack these systems through hackable standards, open protocols, and open access to systems. And we need to ensure that our ability to use these weapons of the weak is not quashed by enforceable terms of service that simply prevent their use. Without access to the inner machinations of these systems, Nissenbaum argues, these weapons may be all we have.
An exceedingly lively conversation followed this talk. I was the moderator of that conversation, and so I have no good record of what transpired, but I’ll use this space – where I usually discuss Q&A – to share some of my own questions for Professor Nissenbaum.
One question begins by asking “What’s the theory of change for the project?” If the goal is to collapse the ad industry as we know it, I am skeptical of the project’s success at scale. Clicking ads is an extremely unusual behavior for human websurfers – clickthrough on banner ads is a tiny fraction of one percent for most users. Clicking on lots of ads is, however, frequent behavior for a clickfraud bot, part of a strategy in which a person hosts ads on their site, then unleashes a program to click on those ads, earning the host a micropayment for each ad clicked. Essentially, it defrauds advertisers to reward a content provider. Clickfraud bots are really common, and most ad networks are pretty good at not paying for fraudulent clicks. This leads me to conclude that much of what Ad Nauseum does will be filtered out by ad networks and not counted as clicks.
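To see why that filtering is plausible, consider a toy version of the kind of check an ad network might run. This is a crude click-through-rate anomaly filter of my own invention, not any network’s actual fraud detection; real systems are far more sophisticated:

```python
def suspicious(clicks, impressions, ctr_threshold=0.05):
    """Flag a user whose click-through rate far exceeds the human
    norm of a fraction of one percent."""
    return impressions > 0 and clicks / impressions > ctr_threshold

def billable_clicks(users):
    """Count only clicks from users who don't look like clickfraud bots."""
    return sum(u["clicks"] for u in users
               if not suspicious(u["clicks"], u["impressions"]))
```

A user who clicks every ad they see sits far outside the human distribution, so even this naive filter would discard them, which is exactly the fate I’d expect for Ad Nauseum traffic.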
This is a good outcome, Nissenbaum argues – you’ve disguised your human behavior as bot behavior and encouraged ad networks to remove you from their system. But it’s worth thinking about the costs. If I am a content provider, attracting human users who look like bots has two costs. One, I get no revenue from them, as ad providers filter out their clicks. Two, I may well lose advertisers as they decide to move from my bot-riddled site to sites that have a higher proportion of “human” readers. She’s shifted the cost from the reader to the content host rather than crashing the system. (Offering this scenario to Nissenbaum, using myself as an example of a content provider, and positing a Global Voices that was supported by advertising, I spun a case of our poor nonprofit going under thanks to her tool. In response, she quipped “Serves you right for working with those tracking-centric companies.” I’m looking forward to a more nuanced answer if she agrees with the premise of my critique.)
If the theory of change for the project is sparking a discussion about the ethics of advertising systems – a topic I am passionate about – rather than crashing the ad economy as we know it, I’m far more sympathetic. To me, Ad Nauseum does a great job of provoking conversations about the bargain of attention – and surveillance – for free content and services. I just don’t see it as an especially effective weapon for bringing those systems to their knees.
My other question centers on the idea that this technique is a “weapon of the weak”. To put it bluntly, a tenured professor at a prestigious university is nowhere near as disenfranchised as the peasants Scott is writing about. This isn’t a criticism that Nissenbaum is disrespecting the oppression of those worse off than she is, or a complaint about making a false comparison between two very different types of oppression. Instead, it’s a question about the current state of the political process in the United States. When a learned, powerful and enfranchised person feels like she’s powerless to change the regulation of technology, something is deeply messed up. Why does the regulation of technology turn the otherwise powerful into the weak? (Or is this perception of weakness the symbol of a broader alienation from politics and other forms of civic engagement?)
I’ve been advocating a theory of civic efficacy that considers at least four possible levers of change: seeking change through laws, through markets, through shaping social norms or through code. Using this framework, we could consider passing laws to ensure that the FTC protects user privacy on online systems. Or we could try to start companies that promise not to use user data (DuckDuckGo, Pinboard) and try to ensure they outcompete their competitors. We could try to shape corporate norms, seeking acknowledgement that “gentlemen do not read each other’s mail”.
If Nissenbaum’s solution were likely to crash advertising as we know it, it might be superior to any of these other theories of change. If it is, instead, a creative protest that obscures an individual’s data and makes a statement at the cost of damaging online publishers, it raises the question of whether these means are justifiable or others bear closer consideration.
I tend to share Nissenbaum’s sense that advocating for regulation in this space is likely to be futile. I have more hopes for the market-based and norms-based theories – I support Rebecca MacKinnon’s Ranking Digital Rights project because it seeks to make visible companies that protect digital rights and allows users to reward their good behavior.
But I raise the issue of weapons of the weak because I suspect Nissenbaum is right – I have a hard time imagining a successful campaign to defend online privacy against advertisers. If she’s right that many of us hate and resent the surveillance that accompanies an ad-supported internet, what’s so wrong in our political system that we feel powerless to change these policies through conventional channels?
My alma mater, Williams College, begins the academic year with a convocation, a ceremony for seniors, faculty and a small number of alumni who are being honored with the college’s Bicentennial medal, an award for “distinguished achievement in any field of endeavor”. I was honored to be one of those medal recipients this year, and the college asked me to address the students. My remarks follow below. (Should you want to see me deliver the address, here’s the YouTube video.)
If the job of a commencement address is to offer students thoughts on how to exit college and enter the world, a convocation address can urge students to make the best of their remaining time in college. I wanted to try to connect my time at Williams more than twenty years ago to my professional life, to talk about the college’s new library and connect the year’s academic theme – The Book, Unbound – to my own work.
I’m posting the speech here at the request of a few faculty and students who were kind enough to ask. It may not make a ton of sense to my regular readers, who don’t know about the rivalry between Williams and Amherst, two fine colleges in Western Massachusetts that have lots of similarities and a long-standing rivalry. I’m not above playing to the hometown crowd, so most of the laugh lines here are digs at Amherst and Lord Jeffrey Amherst, British Army commander and all-around nasty piece of work.
I’m honored and thrilled to be with you today for the Williams 2014 Convocation, for the dedication of the extraordinary and beautiful new Sawyer Library, and for this year’s conversation about “The Book, Unbound”. Given the circumstances, I’ve been thinking back to one of my favorite Williams origin stories. You know it, I’m sure: how in September 1821, Zephaniah Swift Moore, the second President of Williams College, skulked out of town in the dead of night, leaving the wilderness of western Massachusetts to build Amherst in the tamer lands of the Pioneer Valley, taking with him not only 15 students but key volumes from the Williams College library.
It’s a fantastic story, just the sort of thing to justify our centuries-old rivalry with our neighbors to the east. Unfortunately, it’s not true. Yes, President Moore left, and yes, fifteen students left with him. And there’s a lack of clarity about where the 700 books that constituted Amherst’s original library came from. But there’s no evidence that Amherst’s library was seeded with purloined volumes, only records of votes by student societies not to move libraries along with the students who left the college.
It’s possible the story began as an excuse for the poor quality of the Williams library in the 1800s. In 1821, the library wouldn’t have been that hard to steal – at that point, Williams had two buildings, two professors, two tutors and 1400 volumes, not a huge expansion from the school’s original library of 360 volumes in a bookcase in West College. And students complained bitterly of the quality of those books, most of which were dusty theological texts – if you wanted to read non-religious literature at Williams for much of the 1800s, you would do better to turn to student-run literary and scientific societies.
It’s likely that the legend is much more recent, probably forming in the early 1960s. In an essay about the legend, Dustin Griffin points out that early histories of the college discuss President Moore’s departure and rivalries with Amherst, but not the story of the books, and that the story wasn’t one his contemporaries knew in the early 1960s. But by the mid 1960s, there’s record of John Chandler, then dean of the faculty, visiting Amherst’s new library in 1965 and cracking a joke about coming to take our books back. When I was here in the early 1990s, the theft of Williams’s library was presented as fact, a simple explanation for the inherent moral superiority of our institution over our rival… which was helpful, as many of us had applied to Amherst as well and needed solid grounding for our contempt.
So we have a myth that’s fairly recent, but which has some powerful explanatory properties and enduring power. It’s worth picking at the myth and asking what it says about us as a culture that this is one of our origin stories, an explanation for our place in the world.
When I’ve heard the story, the theft of the books is always presented as the final straw: Yes, Amherst took our president and took our students. But can you believe they had the nerve to take our books! Bibliolarceny is somehow a more serious crime than other forms of theft – not, perhaps, as serious as proposing the extermination of native Americans with smallpox blankets, but worthy of special consideration nevertheless.
Books have a special, symbolic meaning in our culture. The burning of books – whether by the Mongols when they sacked Baghdad in the 13th century, the burning of “degenerate texts” by the Nazis in the 1930s or American extremists burning the Quran today – isn’t just about the destruction of an object. It’s the symbolic destruction of a people, a culture and a way of thinking. Whether we’re banning books from library shelves, burning them or stealing them, we’re talking about shrinking the universe of cultural possibilities, limiting the number of different ways we can look at the world through the eyes of the authors.
I think that’s why this story has special significance in the context of a college. Even before 1965, when this story gained its currency, college was a place to expand your worldview. The process of packing up, leaving your hometown and going to live with a new set of people is constructed not just to give you access to a different set of teachers, but to a different and broader set of friends and influences. In 1965, when this myth took root, colleges themselves were shifting. In 1964, the Civil Rights Act mandated access to public schools for African Americans and for women. Williams became coeducational in 1970. When this myth arose, we were right on the cusp of the college experience changing: from one which exposed students to a world of mostly white men, to one which served as a bridge to a much wider, multicultural international world. Against that backdrop of the widening world, we have a story about part of a college’s community giving up, finding the challenge of building a community out in the wilds to be too difficult, and shrinking the horizons of those who stayed by taking their books.
One of the reasons I wanted to think about this story is that I wonder whether it has as much currency now as it did in the 60s, or even in the 90s. We’re at a very different moment in our relationship with books, our relationship with information, than we were even twenty years ago. The story of the stolen books is a story from the days of scarce information. Now most of us feel like we’re inundated with information, possibly drowning in it. How do we think about losing part of our library when we have an apparent infinity of information online?
I published a book last summer, Rewire, that looked at the question of how having access to an abundance of global information is changing what we know about the world. I had been a cyberutopian, someone who believed that the internet was going to make the world a smaller, more connected and more understanding place. This seemed pretty obvious to me – it used to be really hard to get news from Sub Saharan Africa or Central Asia – now you can read a Nigerian newspaper online or make a Skype call to Kazakhstan.
But a strange thing has happened as we’ve gotten access to more information from around the world – most of us are choosing to encounter less of it. We have to make thousands of decisions a day about whether we read a story about Ebola, a tweet from Ferguson, or a Facebook update from a high school classmate. In aggregate, most of us are getting much less international news than we did in an era of the daily newspaper and three television stations.
When we’re faced with a wealth of choices, we tend to opt for the familiar, for what we already know to be important. It’s a basic human tendency to pay more attention to members of “our tribe” than people we’ve never met and don’t have a reason to care about. This was a fine coping strategy for a world of disconnected villages, the world almost everyone lived in 500 years ago, but it’s deeply maladaptive for the connected world we live in today. We may not know anyone in Liberia, but it’s a pretty short plane flight from Monrovia to New York – problems that were distant have a way of becoming our problems very quickly.
Much of my work at MIT looks at how we maintain a broad view of the world when we’re faced with an avalanche of information. It’s directly parallel to the challenge librarians face today: the problem is no longer expanding from 360 volumes to 1400 – it’s engineering serendipity. It’s making the library – or, in my case, the internet – both a place where you can take a deep dive into a subject you care about, and also a place where you can discover something unexpected and life changing.
One of the things I’ve learned in my research is that it’s much easier to pay attention to people than to places. If there’s someone you care about who’s from Haiti, if you’ve had the chance to travel there and meet people from Haiti, you’ll watch the news differently. You’ll have a connection to that place, a context for a story you hear. The events will be more real to you because Haiti is more real to you through the people you know there.
For ten years, I’ve been helping run a website called Global Voices, which uses citizen media – blogposts, YouTube videos, tweets – to bring readers news from around the globe. The reason we use citizen media is that it gives you a connection to ordinary people writing online as well as to the events they’re describing. For our readers and our community, the Arab Spring wasn’t just the story about a political upheaval – it was the story of our friends who were in the streets, in and out of prison and then in and out of the new governments.
I started working on Global Voices because I wanted to read more news about sub-Saharan Africa in the newspaper. That’s because I spent five years in Ghana helping Ghanaians build internet service providers and other technology businesses. And I started doing that because I spent a year in Ghana on a Fulbright grant studying xylophone music. What got me interested in that was Sandra Burton, who I believe should be considered a national treasure as well as one of our college’s greatest heroes. Along with Gary Sojkowski and the late Ernest Brown, Sandra founded Kusika, the African dance ensemble, which was the center of my community when I was at Williams. The strange and wonderful path my life has taken, from starting an early web company to building internet businesses in Africa to working with media activists and journalists around the world, to teaching at MIT leads directly back to the dance studio and to the computer labs, to the professors and students who were passionate about a world wide enough to include both Africa and the internet.
The next time you visit Sawyer Library, I’d ask you to think about the ways in which it’s carefully curated, designed to make it possible to get lost productively, to discover something unexpected but wonderful. Possibly the only thing at Williams more carefully curated is the class you are a part of. We’ve got an almost infinite capacity to put information on shelves physically or virtually, but the opportunity to be in this place, with these people, for four years is decidedly finite. I’m grateful for the effort that went into giving me a universe of a couple thousand people who challenged me and invited me to discover new ways of looking at the world.
This is something the college does very consciously, for the simple reason that who we know is going to help determine who we are. I don’t mean this in the narrow sense that, if the person sitting next to you founds the next Facebook, maybe you’ll get some stock options. I mean it in a much broader sense: that who you know, who you care about tends to determine how you view the world, what you pay attention to, and ultimately will shape your path through the world.
Like the library, like the internet, the class of 2015 is too big to know. But just as the challenge of a really great library is not simply to explore what you already know and care about, the challenge here is the same: to expand your picture of the world by expanding who you know and who and what you care about.
Here’s what Zephaniah Swift Moore took from Williams when he left for Amherst: he took 15 students, 20% of the student body. We can think of those mythical stolen books as shrinking the universe of what we could learn from those volumes. But we should think of losing those students in the same way, as losing the opportunity to see the world through a different set of eyes.
We’re always going to have to make choices about who we know, what we read, what we care about. We never get to read every book, even when there are only 360 on the shelves, and we’re never going to know the people around us as well as they deserve to be known. But we can make decisions to choose a wider world. In ways I never expected, Williams launched me into a world that’s wider than I had imagined. I am eternally grateful for this and I hope the same for you.
Writing this address was a great chance to read up on the early history of Williams and its library. Here are some of the sources I benefited from:
Dustin Griffin (Williams ’65) wrote a terrific essay, “The Theft of the Williams Library”, which I drew from heavily. I’m especially grateful to Griffin for the term “bibliolarceny”.
Steve Satullo (Williams ’69) has been researching the history of Williams libraries as the college has built and moved Sawyer library. His essays have been very helpful for understanding the early days of the Williams library and its shortcomings.
In understanding the state of Williams College and the reasons Zephaniah Swift Moore and others believed it was important to move Williams from the wilderness toward the more settled Pioneer Valley, the 1895 “A History of Amherst College” by William S. Tyler is very helpful.
I spoke this morning at the tenth incarnation of BIF, Business Innovation Factory, an annual conference in Providence, RI that focuses on storytelling. It’s got a lot less product promotion and self-celebration than many conferences on innovation, and more personal stories, which is why I always enjoy attending. So I thought I’d share a personal story that I’m still digesting and the questions it raises for me. If it reads a little differently from most of my writing, the context helps explain why.
About a month ago, I wrote an article about a simple idea. I asked whether anyone really believed that advertising should be the main way we supported content and services on the internet. Given how poorly banner advertising on the web works, given that nobody likes banner ads, and given that the current system puts users under surveillance, which in turn seems to inure us to government surveillance, I wondered whether there might be a better way.
Initially, the piece did what I hoped for – it sparked a lively conversation about business models for the web, particularly about business models for news. My friend Jeff Jarvis, whose faith in Google inspires envy in leaders of the Catholic Church, reassured me that once advertising got a little better, I’d like the ads I was reading online, really! I heard pitches for ethical advertising, for different approaches to subscriptions and to micropayments. I’d gotten something off my chest, had sparked a good conversation and was learning from the responses. As a blogger and a media scholar, life was good.
But something unexpected happened halfway through my day in the sun. For a brief and uncomfortable time, I became the most hated man on the internet. Here’s how that happened.
In writing about advertising, I wanted to talk about mistakes we’d made collectively, as an industry, not just beat up on Mark Zuckerberg or any other individual. So I took responsibility for my own small contribution to making the world an ugly, ad-filled place. I admitted, somewhat sheepishly, that almost twenty years ago, I invented the pop-up ad.
My intentions were good. (Please stop throwing things at the stage.) I was part of the team that started Tripod.com, an early webpage hosting company. My popup put an ad in a separate window from a user’s webpage, which was a way of distancing an advertiser from user generated content, reassuring advertisers that they wouldn’t actually be on the same page as a paranoid essay about government conspiracies or a collection of nude images. I thought it was a pretty good solution. I’m really, really sorry.
My editor at the Atlantic, Adrienne LaFrance, is a lot smarter about her readers than I am. She knew that this admission, which I hid as a parenthetical comment, would be far more interesting to most readers than a 3000 word essay on advertising and a culture of surveillance. She did a brief interview with me about the process of creating the ad and wrote a 300 word story titled “The First Pop Up Ad”.
And then things got weird.
Yes, that’s Jimmy Fallon and Conan O’Brien making fun of me. And yes, that’s one of the strangest things that’s ever happened to me.
Adrienne’s Atlantic piece, lightly rewritten, appeared in about forty news outlets over the next 24 hours. I got loads of interview requests and I did one of them before realizing that this was a very bad idea, and that even normally staid news outlets like the BBC would be far more interested in my “confession” than in the broader argument about advertising and surveillance. And then the emails and the tweets started to come in, first cursing me out in English, and then in Turkish, Portuguese, Chinese and Croatian as the story spread globally.
Andy Warhol predicted that, in the future, we would all be famous for 15 minutes. There’s another possibility: on the internet, we will all be intensely loathed for about 15 seconds.
Let me just pause for a moment and mention that my no good, very bad day on the internet is approximately equivalent to what many of my friends refer to as “Friday”. Which is to say, if you are an opinionated woman who writes online, you likely face ugly, misogynistic bullshit on a daily basis significantly more acute than the tongue-in-cheek death threats I received once this story took off. I don’t in any way mean to compare the mild abuse I faced at a moment of extreme visibility to the sorts of routine, everyday harassment many smart women face for merely expressing themselves online.
That said, I do now understand why someone would choose to go offline rather than wallow in hostility. I ended up taking a week entirely offline while things blew over, until only one in five emails was a death threat.
Of all the tweets hoping for my swift and painful demise, I was particularly struck by one written by a user in southeast Asia, whose tweet read, more or less: “@ethanz I do not accept your apology. This is your Frankenstein monster. You made it, you should kill it.”
I really liked this tweet. There’s something refreshingly simple about the idea that someone – one guy – could be responsible for things in the world that are badly broken. And once we find that guy, we can either string him up or demand that he fix it. It’s the inverse of the great man theory of history, where we declare Edison the great man who commercialized electricity and brought about the modern world, cutting out Tesla, Westinghouse and hundreds of others. This is the rat bastard theory of history, where if only that one SOB hadn’t put pop up ads on user homepages, the world would be a slightly pleasanter place.
The rat bastard theory is helpful because it gives you a single, concrete individual to hate. Back in my darkest, saddest days, when I used to use Windows, I really enjoyed hating Bill Gates, despite the fact that my anger was clearly misplaced. It’s clear that there were all sorts of design decisions that made Windows agonizing to use in the late 1990s, not an evil plan from Bill Gates, who has turned out to be a pretty terrific guy. But it’s a lot easier to see the rat bastard than to see the whole problem.
The problem with the internet in the late 1990s is that we wanted it to be available to everybody in the world. That’s partially because it seemed like the fair thing to do, and partially because we genuinely loved the web and wanted to evangelize for it globally. But most people didn’t know why the web was cool, why they’d want to build a homepage or send status updates to friends, so they weren’t about to pay us to use it. And so we ended up with the only business model we could think of – broadcast advertising. You’ll get our services for free, and we’ll demand some of your attention to sell to advertisers in exchange. Basically, we didn’t know how to pay for the internet for everyone, so we decided to make the internet work the way broadcast television worked.
We had a failure of imagination. And the millions of smart young programmers and businesspeople spending their lives trying to get us to click on ads are also failing to imagine something better. We’re all starting from the same assumptions: everything on the internet is free, we pay with our attention, our attention is worth more if advertisers know more about who we are and what we do, and we start businesses with money from venture capitalists who need those businesses to grow explosively if they’re going to make money.
But there are people imagining something very different. Maciej Ceglowski, whose brilliant talk got me thinking about the broken state of the web, has built a bookmarking service called Pinboard that’s really cheap – about $5 – but not free. He did it because he imagined something different: not a site to sell to another company, but a service he wanted the world to have, one that’s been profitable since the day he launched it. WhatsApp is a wildly popular messaging service that charges users $0.99 a year, and had over 400 million users before Facebook bought it, an amazing example that people are willing to pay for tools they need and use. I just learned about a social network called Connect Fireside that’s designed for the mobile phone and optimized for sharing photos with your family and close friends, not with the whole world. It’s not cheap – it’s $20 a month right now – and who knows if it will survive, but it’s exciting to see someone build something based on a different set of assumptions.
Once you get rid of those assumptions – everything is free, the user is the product being sold to advertisers, and the goal is to be a venture-backed billion dollar business – lots of things become possible. My friends in Kenya got sick of losing internet connectivity every time the power went off, so they went onto Kickstarter and funded BRCK, which is a portable internet router for the developing world that relies on the mobile phone network. The people who paid for it were people who wanted and needed the device, so were willing to pay for it to be built – crowdfunding, as a model, asks that you give up the assumption that people have to instantly receive the things they buy. Turns out that sometimes we’re willing to pay for something in the hope that it might come to pass, someday.
Turns out we’re also willing to pay for things because we love them and we want them to exist. I’ve become a massive fan of a deeply strange podcast called Welcome to Night Vale, which might be described as a cross between A Prairie Home Companion and the Twilight Zone. The podcast is free, but listeners are encouraged to donate $5 a month, and many do. Many more come to live shows and buy tickets and merchandise, which has allowed the creators to expand the podcast, bring in new actors and musicians, launch a 16-city tour of Europe and write a book about the Night Vale universe. Putting something beautiful and strange out into the world and hoping people will love it is not the most reliable business model, but it sometimes turns out to be viable.
Remember my friend on Twitter who wanted me to kill Frankenstein’s monster? He got part of it wrong in assuming that I’m the rat bastard solely responsible for the situation we’re in today. But he got part of it right. He’s right that people like me can and should be trying to fix things.
The things that are broken about the internet today – and there are a lot of them – are the product of design decisions that fallible, mortal human beings have made over the past twenty years. Twenty years is not a long time. I have t-shirts older than the world wide web. The web wasn’t built by enlightened geniuses whose trains of thought we could never comprehend – it was built by idiots like me, doing the best we could at the time. It’s possible to look at every technical and design decision that’s led to where we are today and make different choices than the people who made those bad choices.
There’s a lot of things I don’t like about how the web works today. I don’t like that our attention is constantly for sale. I don’t like that public sharing is the default. I don’t like that the web never forgets. I think the always on, inescapable nature of the web is proving really exhausting for us as human beings. And I’m scared that the new public spaces, the places where we come together to debate the future, are owned and controlled by a few massive companies that have enormous potential power over what we’re able to discuss online.
But the mistake would be to assume that these shortcomings are inevitable, that they are simply natural consequences of how people interact online. The mistake is to stop questioning the assumptions about how the world works, to stop imagining ways things could be different and better.
I should probably mention that this is not a talk about internet business models.
Actually, it’s a talk about civics.
If you think the web is broken – and I do – you should take a look at American democracy. Here’s another case where fallible humans made design decisions that seemed like a good idea at the time and have since had some clearly disastrous consequences. Sure, having professional representatives deliberate together and govern a nation is a pretty cool hack when the dominant governance technology of the time is feudalism. One person, one vote elections – once we finally got around to fully implementing them? Cool idea. Freedom of speech for individuals as well as organizations? Seems pretty important for the rest of the system to work.
Put these things together and we’ve somehow ended up with a system where the Democratic Congressional Campaign Committee recommends that new congresspeople spend 4-5 hours a day raising money, and only 3-4 hours meeting with constituents or working to craft legislation. It’s a system that constituents hate – it’s part of the reason Congress has a single-digit approval rating – and that Congresspeople hate, too, which is why some incumbents are choosing to leave office. But it’s not hard to figure out how it happened, to trace the decisions that brought us to a deeply undesirable place. We can invoke the rat bastard theory and blame Chief Justice John Roberts for the Citizens United decision, but we’d be falling into the same fallacy – this is a failure of imagination, not the failure of any one person (or even five Supreme Court justices).
It’s possible to imagine something very different. Larry Lessig is working very hard to bring a new model to life, where every voter gets $50 of government funds to give to politicians who can use the money to campaign. To imagine that model, you have to give up a bunch of assumptions: that you have a right to spend money to influence politics; that governments should try to do less, not more. And you have to be ready to cope with all the unintended consequences of the new system as well, of people selling their government funds to politicians through brokers, of social media becoming an endless campaign battle ground.
We need a ton more of this, people questioning assumptions that representation makes more sense than direct democracy, that decisions at the federal level are more important than those locally, that money is convertible into power. We need lots of Lessigs taking on these sorts of imaginative experiments, because most of them are going to fail.
Here are two really big assumptions I want to question:
– that participating in representative democracy is the core act of what it means to be a good citizen
– and that everyone is going to participate in our democracy in more or less the same way
You know what it means to be a good citizen – you’re supposed to read the newspaper and keep up on events locally and globally, and vote every two years and maybe call your congressman if something really pisses you off. And you probably know that this isn’t going to make very much of a difference. And that your congressperson is heading into a paralyzed institution that’s rarely able to pass legislation.
So let’s question the assumption that fixing Congress – or even more ambitiously, fixing politics – is the most important part of fixing citizenship. In the 50s and 60s, people figured out that, when you’re prevented from voting by law, public protests – marches, sit-ins, boycotts – are a critical part of citizenship. As Congress started passing civil rights legislation, activists learned that lawsuits were a critical tool to ensure that laws were applied to everyone and enforced fairly. We rarely think of suing the government as a way to be a good citizen, but it’s been critical in building the rights-based society we have today.
There are at least three ways I think people can be civically active even if they’re frustrated with paralysis in politics. Make media. When people saw tanks in the streets of Ferguson, Missouri to counter peaceful protests, they made media and started a conversation about the militarization of America’s police forces. When they saw a newspaper picture of Michael Brown looking threatening, they started tweeting #iftheygunnedmedown, asking whether the media were being fair in their choice of picture to portray the victim of a police shooting. Media is how we understand the world, and we can shape our public conversation by building civic media.
Make new things. I’m deeply frustrated that my government is surveilling my communications and those of millions of others worldwide, and I’m angry that so little legislative action has been taken in the wake of Ed Snowden’s revelations. But I’m really happy that the developers of tools like Tor, Mailpile, RedPhone and Mailvelope are working to make it easy to encrypt email and phone calls, and harder for anyone to surveil our communications. Make businesses. I’m concerned about climate change and I want to see a carbon tax, but in the meantime, I’m excited to see Tesla making electric cars that are sexy and, for people as broke as me, excited that I drove here in a diesel car that gets almost fifty miles to the gallon. Building businesses that do well while doing good is one of the most powerful ways we can engage in civics today.
Here’s what’s tricky about expanding civics to include making media, making code and making money: it’s not a level playing field. Voting is something everyone can do – right now, making code or making media is easier for some people than others. We’re going to need to think hard about how we prepare the next generation of citizens for a world where their power comes not just from voting but from making, and where someone’s civics unfolds in the marketplace while someone else’s unfolds in Congress.
If you want to overcome failures of imagination – accepting that the web or our politics are inescapably broken – you’ve got to try something new. You’re probably going to fail. And when you do, I recommend that you try again. But first I recommend that you apologize. It feels really good. I’m sorry, and thank you for listening.
Michael Brown was fatally shot by police officer Darren Wilson on Saturday August 9, 2014. After his body lay in the hot street for four hours, Ferguson residents took to the streets to protest his killing. Brown’s death, the ensuing protest and the militarized police response opened a debate about race, justice and policing in America that continues today.
I heard about Michael Brown’s death for the first time on the evening of the 9th, through Twitter. Sarah Kendzior, a St. Louis-based journalist, was retweeting accounts of the protest, and other Twitter friends who write about racial justice issues, including Sherrilyn Ifill, the director of NAACP’s Legal Defense Fund, were discussing the implications of Brown’s killing and the community and police response. I wrote a quick blog post, noting how little mainstream media response there had been at that point, and how much of the coverage had focused on the anger of the crowd, which one outlet termed “a mob”. At that early point, there was not a dedicated Wikipedia article focused on Brown’s death, nor had major national newspapers, like the New York Times, written about Brown. For many people outside of the reach of local St. Louis media, Twitter was the medium that introduced them to the city of Ferguson.
The Pew Research Center confirms this picture. In a thorough analysis of early media coverage of the events in Ferguson, Paul Hitlin and Nancy Vogt of Pew note that cable news didn’t report the story until Monday the 11th, two days after the shooting. Cable news attention peaked on Thursday, August 14, after Ferguson police arrested two reporters and President Obama interrupted his vacation to address the situation. Twitter attention peaked at the same time, with 3.6 million Ferguson-related tweets on the 14th, but it’s worth noting that Twitter showed a steady and growing interest in the topic from Saturday the 9th onward.
This pattern of attention to Michael Brown’s killing contrasts sharply with early attention to the killing of Trayvon Martin by neighborhood watch volunteer George Zimmerman on February 26, 2012. My students Erhardt Graeff, Matt Stempeck and I studied media coverage of the Martin killing using our Media Cloud tools and found that the key factor in expanding the conversation about Martin’s death beyond Central Florida was a careful PR campaign orchestrated by Benjamin Crump, an attorney retained by Martin’s parents. These PR efforts led to national television coverage of the story, which in turn led to a surge of interest on Twitter and in other participatory media.
In the case of Michael Brown, over 140,000 Twitter posts mentioned Ferguson on August 9th, the day of the shooting. Twitter led other media in 2014, while broadcast media led in 2012. When Crump announced he would be representing the family of Michael Brown on August 11, two days after the shooting, there was no need for a PR campaign, as the story of Brown’s death was already gaining traction in the media.
Pew’s analysis of Michael Brown versus Trayvon Martin shows that attention to events in Ferguson built more quickly than attention to events in Sanford, FL. There are several reasons for this. The police response to protests added a dimension to the Ferguson story that was not present in discussions of Trayvon Martin – many of the tweets on August 14 focused on the arrest of working journalists and the use of military equipment by Ferguson police. (When Missouri Highway Patrol Captain Ron Johnson took over control of law enforcement response in Ferguson on August 14, ordering officers to remove riot gear, attention on Twitter dropped sharply, suggesting some of the attention was on policing tactics.)
Twitter’s reach has grown since 2012, and the power of “black twitter”, which Soraya Nadia McDonald describes as both “cultural force” and social network, to challenge racist narratives has grown. Outrage on Twitter over a book deal for a juror who acquitted George Zimmerman led to the deal being withdrawn. Noting the power of outrage channeled through a hashtag, some African American Twitter users began posting contrasting photos of themselves under the tag #iftheygunnedmedown, suggesting that media were emphasizing the narrative of Michael Brown as “thug” by using a photo in which Brown is displaying a peace sign (which many have read as a gang sign), rather than other photos in which Brown looks young and entirely unthreatening. The hashtag allowed Twitter users who had nothing to share about the situation on the ground to participate in the dialog about Ferguson by joining a conversation about larger issues of structural racism in American media.
While events in Ferguson received widespread attention on Twitter, some observers saw very different behavior on their Facebook feeds.
Twitter vs. Facebook: my tweetstream is almost wall-to-wall with news from Ferguson. Only two mentions of it in my Facebook news feed.
— Mark_Hamilton (@gmarkham) August 14, 2014
John McDermott, writing in Digiday, offered this elegant formulation: “Facebook is for Ice Buckets, Twitter is for Ferguson”. Using data from SimpleReach, McDermott suggests that there were roughly eight times as many stories about events in Ferguson posted to Facebook as stories about the ALS Ice Bucket challenge, but that the stories received roughly the same exposure on Facebook (approximately 3.5 million Facebook “referrals” – appearances on a Facebook user’s screen – for each story between August 7-20th.) In the same time period, Crimson Hexagon calculated that there were roughly 1.5 times as many tweets about Ferguson as about the Ice Bucket Challenge.
The near-equal attention to Ferguson and the Ice Bucket Challenge that SimpleReach’s numbers suggest hardly seems like the substance of a controversy about algorithmic censorship, but it is worth noting that vastly more stories about Ferguson were posted to Facebook than stories about the Ice Bucket Challenge. The average story about the Ice Bucket Challenge was much more heavily promoted by Facebook’s algorithms (a factor of 8x) than the average story about Ferguson. Sociologist Zeynep Tufekçi suggests that this disparity may have been more significant early in the Ferguson story, which made the disparity between Facebook and Twitter more dramatic.
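The arithmetic behind that 8x figure can be checked with a quick back-of-the-envelope sketch. The numbers here are rounded approximations of the SimpleReach figures as reported, and the story counts are normalized to a ratio, since the absolute counts aren’t given in the coverage:

```python
# Back-of-the-envelope check of the per-story promotion gap:
# roughly equal total Facebook referrals for both topics (~3.5 million each,
# Aug 7-20, 2014), but ~8x as many Ferguson stories posted as Ice Bucket stories.

total_referrals = 3_500_000      # approximate, same for both topics
ice_bucket_stories = 1.0         # normalized: one "unit" of Ice Bucket stories
ferguson_stories = 8.0           # ~8x as many Ferguson stories posted

# Referrals each individual story received, on average
per_story_ice_bucket = total_referrals / ice_bucket_stories
per_story_ferguson = total_referrals / ferguson_stories

# How much more heavily the average Ice Bucket story was promoted
promotion_gap = per_story_ice_bucket / per_story_ferguson
print(promotion_gap)             # 8.0
```

Because total exposure was roughly equal while the story pools differed by a factor of eight, the per-story promotion gap comes out to that same factor of eight, which is the disparity described above.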
Tufekci goes on to observe that while Twitter (at present) is a “neutral” provider, simply delivering you tweets from the accounts you’ve chosen to follow, Facebook algorithmically “curates” your feed, offering you a selection of content that it believes you will find most interesting. She points out that this is a serious danger to democratic discourse. Facebook could censor our information flows and it would be difficult for us to determine that such censorship was taking place. The algorithms Facebook uses are opaque, which means conversations about how the algorithm works sound a bit like speculation about divine will. Tufekci suggests that Facebook may have altered their algorithm overnight to give Ferguson stories more prominence, which may be true but is impossible to verify. Others have speculated that Facebook tunes its algorithm to favor “happy news” and discourage serious news. Again, this is possible – and could even reflect lessons learned from Facebook’s mood manipulation experiments – but unverifiable.
Here are some of the things we do know about Facebook, which may explain why news appears differently on different social media platforms.
– It’s hard to search Facebook. When news breaks, Twitter users enter search terms and hashtags and receive an updated stream of tweets on the topic in question. On August 9th, I was able to follow reports from Ferguson and quickly choose Twitter users to follow who appeared to be at or near the scene. On Facebook, searching for “Ferguson” gives you the names of many people named Ferguson who Facebook thinks you might know. (You can, conveniently, choose to “Like” the city of Ferguson, which is probably pretty far from what most people searching for Ferguson want to do.)
– Non-reciprocal following makes it easier to diversify who you follow on Twitter. If I see that St. Louis-based rapper Tef Poe is reporting on Twitter regarding Ferguson, I can follow him, and I will see any public tweets he makes in my timeline. On Facebook, I need the approval of someone to follow their personal feed – they need to confirm our friendship before I can read what they’re saying.
It’s typical on Twitter for people to start following accounts of people they don’t know personally, especially when those people are eyewitnesses to or well-informed commenters on an event. I followed a set of users in Egypt while the Tahrir protests were taking place and stopped following many afterwards – I’ve done the same with Ferguson. This is much more difficult to do on Facebook – the platform is designed to connect you with friends, not sources, and to maintain those friendships over time.
– People have a relatively small number of Facebook friends – the median figure is 200 – and many white American Facebook users likely have few or no African-American Facebook friends. This isn’t a phenomenon specific to Facebook – it’s a broader reflection of American demographics and patterns of homophily, the tendency of “birds of a feather” to flock together. Robert Jones at the Public Religion Research Institute used data from the 2013 American Values Survey to estimate the racial composition of people’s social networks. He projects that the average white American has only one black friend, and that 75% of white Americans have entirely white social networks. (There’s an interesting debate about PRRI’s methodology, with Eugene Volokh arguing that it’s hard to estimate weak social ties by asking people about people they confide in.)
We can build on these numbers to speculate about what might be going on with Facebook and stories about Ferguson. Most people who follow Facebook believe that the company’s algorithm favors stories likely to be shared by a lot of people in your social network – Facebook favors viral cascades over surfacing novel content. If few users in your circle of friends are sharing stories about Ferguson – which is a distinct possibility if you are white and most of your friends are white – Facebook’s algorithm may see this as a story unlikely to “go viral” and tamp it down, rather than amplifying it, as it has with the ALS stories. On the other hand, if many of your friends are sharing the Ferguson story – likely if you have many African-American friends or if you follow racial justice issues – your corner of the Facebook network could be part of a viral cascade.
A new paper from Pew and Rutgers suggests another reason why Ferguson might have had difficulty spreading on Facebook. Keith Hampton and the other authors of the study found that people are acutely attuned to what conversations are happening on social networks and are unlikely to bring up topics perceived to be controversial. The users the authors surveyed were half as likely to bring up controversial topics on Facebook or Twitter as they were in face-to-face conversations. The exception was when they believed their friends on social networks agreed with their positions, in which case they were more likely to bring up the topic on social networks. In the case of Ferguson, this suggests that a Facebook user, unsure of whether her friends share her opinion that the police overreacted in Ferguson, might be more reluctant to post a Ferguson story on Facebook than she would be to bring up the situation in conversation. If there were a cascade of stories on Ferguson, she would have a social signal that the topic was acceptable and would be more likely to participate. On Twitter, where it’s not uncommon to follow dozens of people you don’t know well, it’s easier to pick up those social signals than on Facebook, where you are more likely to know an ethnically homogeneous set of friends.
I’m far from immune to these echo chamber effects. One of my students surprised me with a link to a USA Today article reporting that more money had been raised to support Darren Wilson, the police officer accused of killing Brown, than had been raised by Brown’s family. Few of my Facebook or Twitter friends were raising money for Officer Wilson, and so I was surprised to discover that a successful campaign was being run on his behalf. My ignorance of the campaign to support Wilson suggests that I have my own diversity issues in social media – if I want a more nuanced understanding of Americans’ reactions to Ferguson, I need to increase the ideological diversity of the people I follow online.
Special bonus: Jon Stewart takes the Ferguson challenge, which turns out to be utterly unlike the ALS Ice Bucket challenge.
I wrote a book review, of sorts, last week about Walter Isaacson’s book on Steve Jobs and my concern that biographies, as a genre, celebrate a “great man” theory of history. While I remain convinced that we need more biographies of teams, of successful collaborations (an idea that Nathan Matias furthers in his post today on acknowledgement and gratitude), I do have a dark secret to admit: I periodically dream about becoming a biographer.
This isn’t because I believe in the biography as a form. It’s because there are people I find so fascinating, I’d enjoy spending a couple of years thinking about how they became who they are or were, and how their personal stories give us a picture of what was possible at different moments in time. I asked a room full of students and colleagues who they’d most like to read a biography of, and the responses were a fascinating picture of my friends as individuals and as part of a group trying to invent the field of civic media.
When the question came around to me, I told the room that I wanted to read the biography of Afrika Bambaataa, one of a few men who can reasonably claim the title “Godfather of Hip Hop”. What I didn’t admit is that I’ve periodically considered dropping my academic pursuits and researching this fascinating figure.
We’re getting to the moment in history where thoughtful popular books are being written about hiphop’s early years and innovators – Jeff Chang’s Can’t Stop Won’t Stop is extensively researched and thoughtfully written, and Ed Piskor’s Hip Hop Family Tree has a visual style that recalls the early 1980s better than any text could.
Ed Piskor talks about his Hip Hop Family Tree project
Throughout volume one of Piskor’s beautiful history, Bambaataa recurs as an iconic figure, a future-looking visionary looming over an interchangeable crowd of short-lived MCs and DJs. Bambaataa was a leader of the Black Spades gang in the Bronx before deciding to dedicate his formidable charisma and organizing skills towards building the Universal Zulu Nation, a group that was part hip hop music and dance crew and part consciousness-raising Afrocentric cosmopolitan social club. Raised in the Bronx River Projects by his activist mother, he traveled to Nigeria, Equatorial Guinea and the Ivory Coast after winning an essay contest run by the New York City housing authority – a trip that led Bambaataa to adopt the identity of an African chieftain and to lead his crew of former gangsters into a new artistic life of “peace, love and having fun”.
Throughout the early years of hip hop, Bam was a step ahead of his rivals. Other DJs would look over his shoulder to determine which eclectic selections Bam was using as beats – adopting a trick from DJ Kool Herc, Bam would soak the labels off his records and replace them with labels from unrelated albums, leading rivals to purchase legendarily bad albums in the hopes of replicating his sound. (It’s hard to know whether tales of Bambaataa rocking a party with two copies of the Pink Panther theme are authentic musicology or an unintentional consequence of this tactic.) While other DJs’ sets had MCs asking the audience their zodiac signs (early hip hop was a direct descendant of disco), Bam was playing Malcolm X speeches over his beats. (I like to think of Keith LeBlanc’s No Sell Out, sometimes cited as the first recording featuring digital samples, as a Bambaataa tribute.) When everyone else followed Bambaataa into the crates, crafting their tracks around James Brown and P-Funk, Bam had moved on to sampling Kraftwerk, building “Planet Rock” and inventing the entire genre of Electro.
Planet Rock, 1982
At some point, hip hop stopped following Bambaataa. After about 1986, sampling ruled hip hop, blossoming until it was killed by the Bridgeport Music decision. Electro has influenced every generation of dance music since the early 80s, but you can instantly place any track with rapping and chilly synths as coming from the lost sonic territory of 1982-1985. More tragically, after Bam led gang members out of the streets and into the dance club, Ice-T, BDP and NWA led hip hop out of the clubs and back into the gang life.
“Surgery”, (1984) World Class Wreckin Cru, featuring Dr. Dre. Yes, THAT Dr. Dre. Look it up.
Somewhere there’s a parallel reality in which Afrika Bambaataa is the best known name in hip hop and Dr. Dre is a little-known electro DJ. It’s an alternate dimension where Bambaataa added laser fusion propulsion to P-Funk’s Starship and flew music into orbit around Jupiter rather than having it crash in South Central. In that parallel universe, the Universal Zulu Nation got Angela Davis elected president in 1988 and Bambaataa DJ’d the year-long party to celebrate the intergalactic peace accord of 1999, in which all interpersonal conflicts were put aside towards the shared goals of “peace, unity, love and having fun”.
Instead, Bambaataa has remained an honored and (insufficiently) celebrated hip hop pioneer, better remembered for one unforgettable track than for his epic social hack in the Bronx or his subsequent activism (including Hip Hop Against Apartheid and Artists United Against Apartheid). Fortunately, the man is starting to get the respect he deserves, from an unusual corner: academe.
In 2012, Cornell University gave Bambaataa a three-year visiting scholar post. Bambaataa responded by donating his legendary record collection to Cornell’s Hip Hop Collection. This has presented an interesting curatorial challenge – the collection contains 40,000 albums, many of them with notes, flyers, press releases or other materials attached, all of which need to be scanned or digitized for posterity. For the past year, archivists have been cataloging the collection, sometimes in public, in Gavin Brown’s gallery in Greenwich Village.
From a slideshow of the Bambaataa collection on Okayplayer
The public archiving project has attracted a raft of contemporary DJs desperate to spin the Godfather’s discs. Joakim Bouaziz was one of the lucky DJs to be invited to the gallery, and he recorded part of his set spinning his favorites from the collection. No need to kick yourself for missing the gallery show – Cut Chemist and DJ Shadow are touring the US and Canada this fall, spinning the records live as part of their work building a Bambaataa tribute mix.
As for the biography? Bambaataa has been promising an autobiography since the mid-1990s. Let’s hope the revival of interest in his records leads to some helpful pressure on the man to put aside pressing Zulu Nation business for a few weeks and explain to us all What Would Bambaataa Do.
While I’m waiting for a Bambaataa autobiography, my guess is that a book that answers the questions I have would need to be a biography of social movements at least as much as the story of a single individual. It’s not a coincidence that hip hop grew up in the Bronx at a moment when New York City’s physical infrastructure was crumbling and the Bronx had become synonymous with danger and decay. (Fort Apache, The Bronx came out in 1981, two years after Rapper’s Delight.) The physical and conceptual isolation of the Bronx from the rest of the city and the world allowed a culture to evolve in comparative isolation, which means that a history of Bambaataa needs to be a history of urban planning, of urban poverty and systemic racism, of the US’s housing projects. It would be a history of street gangs in New York as well as a history of Afrocentric philosophy and resistance. It would reach back to The Last Poets and ahead to Native Tongues, and explore the rise of P-Funk’s Mothership and Sun Ra to understand “the Afro-Alien diaspora”. It’s more book than I am capable of writing, but damn, I hope someone takes it on.
For a taste of what those Bronx parties sounded like in 1982, here’s a collection of live recordings of early Bambaataa sets.