I spoke this morning at the tenth incarnation of BIF, Business Innovation Factory, an annual conference in Providence, RI that focuses on storytelling. It’s got a lot less product promotion and self-celebration than many conferences on innovation, and more personal stories, which is why I always enjoy attending. So I thought I’d share a personal story that I’m still digesting and the questions it raises for me. If it reads a little differently from most of my writing, the context helps explain why.
About a month ago, I wrote an article about a simple idea. I asked whether anyone really believed that advertising should be the main way we supported content and services on the internet. Given how poorly banner advertising on the web works, given that nobody likes banner ads, and given that the current system puts users under surveillance, which in turn seems to inure us to government surveillance, I wondered whether there might be a better way.
Initially, the piece did what I hoped for – it sparked a lively conversation about business models for the web, particularly about business models for news. My friend Jeff Jarvis, whose faith in Google inspires envy in leaders of the Catholic Church, reassured me that once advertising got a little better, I’d like the ads I was reading online, really! I heard pitches for ethical advertising, for different approaches to subscriptions and to micropayments. I’d gotten something off my chest, had sparked a good conversation and was learning from the responses. As a blogger and a media scholar, life was good.
But something unexpected happened halfway through my day in the sun. For a brief and uncomfortable time, I became the most hated man on the internet. Here’s how that happened.
In writing about advertising, I wanted to talk about mistakes we’d made collectively, as an industry, not just beat up on Mark Zuckerberg or any other individual. So I took responsibility for my own small contribution to making the world an ugly, ad-filled place. I admitted, somewhat sheepishly, that almost twenty years ago, I invented the pop-up ad.
My intentions were good. (Please stop throwing things at the stage.) I was part of the team that started Tripod.com, an early webpage hosting company. My popup put an ad in a separate window from a user’s webpage, which was a way of distancing an advertiser from user generated content, reassuring advertisers that they wouldn’t actually be on the same page as a paranoid essay about government conspiracies or a collection of nude images. I thought it was a pretty good solution. I’m really, really sorry.
My editor at the Atlantic, Adrienne LaFrance, is a lot smarter about her readers than I am. She knew that this admission, which I hid as a parenthetical comment, would be far more interesting to most readers than a 3000 word essay on advertising and a culture of surveillance. She did a brief interview with me about the process of creating the ad and wrote a 300 word story titled “The First Pop Up Ad.”
And then things got weird.
Yes, that’s Jimmy Fallon and Conan O’Brien making fun of me. And yes, that’s one of the strangest things that’s ever happened to me.
Adrienne’s Atlantic piece, lightly rewritten, appeared in about forty news outlets over the next 24 hours. I got loads of interview requests and I did one of them before realizing that this was a very bad idea, and that even normally staid news outlets like the BBC would be far more interested in my “confession” than in the broader argument about advertising and surveillance. And then the emails and the tweets started to come in, first cursing me out in English, and then in Turkish, Portuguese, Chinese and Croatian as the story spread globally.
Andy Warhol predicted that, in the future, we would all be famous for 15 minutes. There’s another possibility: on the internet, we will all be intensely loathed for about 15 seconds.
Let me just pause for a moment and mention that my no good, very bad day on the internet is approximately equivalent to what many of my friends refer to as “Friday”. Which is to say, if you are an opinionated woman who writes online, you likely face ugly, misogynistic bullshit on a daily basis significantly more acute than the tongue in cheek death threats I received once this story took off. I don’t in any way mean to compare the mild abuse I faced at a moment of extreme visibility to the sorts of routine, everyday harassment many smart women face for merely expressing themselves online.
That said, I do now understand why someone would choose to go offline rather than wallow in hostility. I ended up taking a week entirely offline while things blew over, until only one in five emails was a death threat.
Of all the tweets hoping for my swift and painful demise, I was particularly struck by one written by a user in southeast Asia, whose tweet read, more or less: “@ethanz I do not accept your apology. This is your Frankenstein monster. You made it, you should kill it.”
I really liked this tweet. There’s something refreshingly simple about the idea that someone – one guy – could be responsible for things in the world that are badly broken. And once we find that guy, we can either string him up or demand that he fix it. It’s the inverse of the great man theory of history, where we declare Edison the great man who commercialized electricity and brought about the modern world, cutting out Tesla, Westinghouse and hundreds of others. This is the rat bastard theory of history, where if only that one SOB hadn’t put pop up ads on user homepages, the world would be a slightly pleasanter place.
The rat bastard theory is helpful because it gives you a single, concrete individual to hate. Back in my darkest, saddest days, when I used to use Windows, I really enjoyed hating Bill Gates, despite the fact that my anger was clearly misplaced. It’s clear that there were all sorts of design decisions that made Windows agonizing to use in the late 1990s, not an evil plan from Bill Gates, who has turned out to be a pretty terrific guy. But it’s a lot easier to see the rat bastard than to see the whole problem.
The problem with the internet in the late 1990s is that we wanted it to be available to everybody in the world. That’s partially because it seemed like the fair thing to do, and partially because we genuinely loved the web and wanted to evangelize for it globally. But most people didn’t know why the web was cool, why they’d want to build a homepage or send status updates to friends, so they weren’t about to pay us to use it. And so we ended up with the only business model we could think of – broadcast advertising. You’ll get our services for free, and we’ll demand some of your attention to sell to advertisers in exchange. Basically, we didn’t know how to pay for the internet for everyone, so we decided to make the internet work the way broadcast television worked.
We had a failure of imagination. And the millions of smart young programmers and businesspeople spending their lives trying to get us to click on ads are also failing to imagine something better. We’re all starting from the same assumptions: everything on the internet is free; we pay with our attention; our attention is worth more if advertisers know more about who we are and what we do; and we start businesses with money from venture capitalists who need those businesses to grow explosively if they’re going to make money.
But there are people imagining something very different. Maciej Ceglowski, whose brilliant talk got me thinking about the broken state of the web, has built a bookmarking service called Pinboard that’s really cheap – about $5 – but not free. He did it because he imagined something different: not a site to sell to another company, but a service he wanted the world to have, one that’s been profitable since the day he started it. WhatsApp is a wildly popular messaging service that charges users $0.99 a year, and had over 400 million users before Facebook bought it, an amazing example that people are willing to pay for tools they need and use. I just learned about a social network called Connect Fireside that’s designed for the mobile phone and optimized for sharing photos with your family and close friends, not with the whole world. It’s not cheap – it’s $20 a month right now – and who knows if it will survive, but it’s exciting to see someone build something based on a different set of assumptions.
Once you get rid of those assumptions – everything is free, the user is the product being sold to advertisers, and the goal is to be a venture-backed billion dollar business – lots of things become possible. My friends in Kenya got sick of losing internet connectivity every time the power went off, so they went onto Kickstarter and funded BRCK, a portable internet router for the developing world that relies on the mobile phone network. The people who paid for it were people who wanted and needed the device, so they were willing to pay for it to be built – crowdfunding, as a model, asks that you give up the assumption that people have to instantly receive the things they buy. Turns out that sometimes we’re willing to pay for something in the hope that it might come to pass, someday.
Turns out we’re also willing to pay for things because we love them and we want them to exist. I’ve become a massive fan of a deeply strange podcast called Welcome to Night Vale, which might be described as a cross between A Prairie Home Companion and the Twilight Zone. The podcast is free, but listeners are encouraged to donate $5 a month, and many do. Many more come to live shows, buying tickets and merchandise, which has allowed the creators to expand the podcast by bringing in new actors and musicians, launch a 16-city tour of Europe and write a book about the Night Vale universe. Putting something beautiful and strange out into the world and hoping people will love it is not the most reliable business model, but it sometimes turns out to be viable.
Remember my friend on Twitter who wanted me to kill Frankenstein’s monster? He got part of it wrong in assuming that I’m the rat bastard solely responsible for the situation we’re in today. But he got part of it right. He’s right that people like me can and should be trying to fix things.
The things that are broken about the internet today – and there are a lot of them – are the product of design decisions that fallible, mortal human beings have made over the past twenty years. Twenty years is not a long time. I have t-shirts older than the world wide web. The web wasn’t built by enlightened geniuses whose trains of thought we could never comprehend – it was built by idiots like me, doing the best we could at the time. It’s possible to look at every technical and design decision that’s led to where we are today and make different choices than the people who made those bad choices.
There’s a lot of things I don’t like about how the web works today. I don’t like that our attention is constantly for sale. I don’t like that public sharing is the default. I don’t like that the web never forgets. I think the always on, inescapable nature of the web is proving really exhausting for us as human beings. And I’m scared that the new public spaces, the places where we come together to debate the future, are owned and controlled by a few massive companies that have enormous potential power over what we’re able to discuss online.
But the mistake would be to assume that these shortcomings are inevitable, that they are simply natural consequences of how people interact online. The mistake is to stop questioning the assumptions about how the world works, to stop imagining ways things could be different and better.
I should probably mention that this is not a talk about internet business models.
Actually, it’s a talk about civics.
If you think the web is broken – and I do – you should take a look at American democracy. Here’s another case where fallible humans made design decisions that seemed like a good idea at the time and have now had some clearly disastrous consequences. Sure, having professional representatives deliberate together and govern a nation is a pretty cool hack when the dominant governance technology of the time is feudalism. One person, one vote elections – once we finally got around to fully implementing them? Cool idea. Freedom of speech for individuals as well as organizations? Seems pretty important for the rest of the system to work.
Put these things together and we’ve somehow ended up with a system where the Democratic Congressional Campaign Committee recommends that new congresspeople spend 4-5 hours a day raising money, and only 3-4 hours meeting with constituents or working to craft legislation. It’s a system that constituents hate – it’s part of the reason Congress has a single-digit approval rating – and Congresspeople hate, too, which is why some incumbents are choosing to leave office. But it’s not hard to figure out how it happened, to trace the decisions that brought us to a deeply undesirable place. We can invoke the rat bastard theory and blame Chief Justice John Roberts for the Citizens United decision, but we’re following the same fallacy – this is a failure of imagination, not the failure of any one person (or even five Supreme Court justices).
It’s possible to imagine something very different. Larry Lessig is working very hard to bring a new model to life, where every voter gets $50 of government funds to give to politicians who can use the money to campaign. To imagine that model, you have to give up a bunch of assumptions: that you have a right to spend money to influence politics; that governments should try to do less, not more. And you have to be ready to cope with all the unintended consequences of the new system as well, of people selling their government funds to politicians through brokers, of social media becoming an endless campaign battle ground.
We need a ton more of this, people questioning assumptions that representation makes more sense than direct democracy, that decisions at the federal level are more important than those locally, that money is convertible into power. We need lots of Lessigs taking on these sorts of imaginative experiments, because most of them are going to fail.
Here are two really big assumptions I want to question:
– that participating in representative democracy is the core act of what it means to be a good citizen
– and that everyone is going to participate in our democracy in more or less the same way
You know what it means to be a good citizen – you’re supposed to read the newspaper and keep up on events locally and globally, and vote every two years and maybe call your congressman if something really pisses you off. And you probably know that this isn’t going to make very much of a difference. And that your congressperson is heading into a paralyzed institution that’s rarely able to pass legislation.
So let’s question the assumption that fixing Congress – or even more ambitiously, fixing politics – is the most important part of fixing citizenship. In the 50s and 60s, people figured out that, when you’re prevented from voting by law, public protests – marches, sit ins, boycotts – are a critical part of citizenship. As Congress started passing civil rights legislation, activists learned that lawsuits were a critical tool to ensure that laws applied to everyone and were enforced fairly. We rarely think of suing the government as a way to be a good citizen, but it’s been critical in building the rights-based society we have today.
There are at least three ways I think people can be civically active even if they’re frustrated with paralysis in politics. Make media. When people saw tanks in the streets of Ferguson, Missouri to counter peaceful protests, they made media and started a conversation about the militarization of America’s police forces. When they saw a newspaper picture of Michael Brown looking threatening, they started tweeting #iftheygunnedmedown, asking whether the media were being fair in their choice of picture to portray the victim of a police shooting. Media is how we understand the world, and we can shape our public conversation by building civic media.
Make new things. I’m deeply frustrated that my government is surveilling my communications and those of millions of others worldwide, and I’m angry that so little legislative action has been taken in the wake of Ed Snowden’s revelations. But I’m really happy that software projects like Tor, Mailpile, Redphone and Mailvelope are working to make it easy to encrypt email and phone calls, and to make it harder for anyone to surveil our communications. Make businesses. I’m concerned about climate change and I want to see a carbon tax, but in the meantime, I’m excited to see Tesla making electric cars that are sexy, and, for people as broke as me, excited that I drove here in a diesel car that gets almost fifty miles to the gallon. Building businesses that do well while doing good is one of the most powerful ways we can engage in civics today.
Here’s what’s tricky about expanding civics to include making media, making code and making money: it’s not a level playing field. Voting is something everyone can do – right now, making code or making media is easier for some people than others. We’re going to need to think hard about how we prepare the next generation of citizens for a world where their power comes not just from voting but from making, and where someone’s civics unfolds in the marketplace while someone else’s unfolds in Congress.
If you want to overcome failures of imagination – accepting that the web or our politics are inescapably broken – you’ve got to try something new. You’re probably going to fail. And when you do, I recommend that you try again. But first I recommend that you apologize. It feels really good. I’m sorry, and thank you for listening.
Michael Brown was fatally shot by police officer Darren Wilson on Saturday August 9, 2014. After his body lay in the hot street for four hours, Ferguson residents took to the streets to protest his killing. Brown’s death, the ensuing protest and the militarized police response opened a debate about race, justice and policing in America that continues today.
I heard about Michael Brown’s death for the first time on the evening of the 9th, through Twitter. Sarah Kendzior, a St. Louis-based journalist, was retweeting accounts of the protest, and other Twitter friends who write about racial justice issues, including Sherrilyn Ifill, the director of NAACP’s Legal Defense Fund, were discussing the implications of Brown’s killing and the community and police response. I wrote a quick blog post, noting how little mainstream media response there had been at that point, and how much of the coverage had focused on the anger of the crowd, which one outlet termed “a mob”. At that early point, there was not a dedicated Wikipedia article focused on Brown’s death, nor had major national newspapers, like the New York Times, written about Brown. For many people outside of the reach of local St. Louis media, Twitter was the medium that introduced people to the city of Ferguson.
The Pew Research Center confirms this picture. In a thorough analysis of early media coverage of the events in Ferguson, Paul Hitlin and Nancy Vogt of Pew note that cable news didn’t report the story until Monday the 11th, two days after the shooting. Cable news attention peaked on Thursday, August 14, after Ferguson police arrested two reporters and President Obama interrupted his vacation to report on the situation. Twitter attention peaked at the same time, with 3.6 million Ferguson-related tweets on the 14th, but it’s worth noting that Twitter showed a steady and growing interest in the topic from Saturday the 9th onward.
This pattern of attention to Michael Brown’s killing contrasts sharply with early attention to the killing of Trayvon Martin by neighborhood watch volunteer George Zimmerman on February 26, 2012. My students Erhardt Graeff, Matt Stempeck and I studied media coverage of the Martin killing using our Media Cloud tools and found that the key factor in expanding the conversation about Martin’s death beyond Central Florida was a careful PR campaign orchestrated by Benjamin Crump, an attorney retained by Martin’s parents. These PR efforts led to national television coverage of the story, which in turn led to a surge of interest on Twitter and in other participatory media.
In the case of Michael Brown, over 140,000 Twitter posts mentioned Ferguson on August 9th, the day of the shooting. Twitter led other media in 2014, while broadcast media led in 2012. When Crump announced he would be representing the family of Michael Brown on August 11, two days after the shooting, there was no need for a PR campaign, as the story of Brown’s death was already gaining traction in the media.
Pew’s analysis of Michael Brown versus Trayvon Martin shows that attention to events in Ferguson built more quickly than attention to events in Sanford, FL. There are several reasons for this. The police response to protests added a dimension to the Ferguson story that was not present in discussions of Trayvon Martin – many of the tweets on August 14 focused on the arrest of working journalists and the use of military equipment by Ferguson police. (When Missouri Highway Patrol Captain Ron Johnson took over control of law enforcement response in Ferguson on August 14, ordering officers to remove riot gear, attention on Twitter dropped sharply, suggesting some of the attention was on policing tactics.)
Twitter’s reach has grown since 2012, and the power of “black twitter”, which Soraya Nadia McDonald describes as both “cultural force” and social network, to challenge racist narratives has grown. Outrage on Twitter over a book deal for a juror who acquitted George Zimmerman led to the deal being withdrawn. Noting the power of outrage channeled through a hashtag, some African American Twitter users began posting contrasting photos of themselves under the tag #iftheygunnedmedown, suggesting that media were emphasizing the narrative of Michael Brown as “thug” by using a photo in which Brown is displaying a peace sign (which many have read as a gang sign), rather than other photos in which Brown looks young and entirely unthreatening. The hashtag allowed Twitter users with nothing to share about the situation on the ground to participate in the dialog about Ferguson by joining a conversation about larger issues of structural racism in American media.
While events in Ferguson received widespread attention on Twitter, some observers saw very different behavior on their Facebook feeds.
Twitter vs. Facebook: my tweetstream is almost wall-to-wall with news from Ferguson. Only two mentions of it in my Facebook news feed.
— Mark_Hamilton (@gmarkham) August 14, 2014
John McDermott, writing in Digiday, offered this elegant formulation: “Facebook is for Ice Buckets, Twitter is for Ferguson”. Using data from SimpleReach, McDermott suggests that there were roughly eight times as many stories about events in Ferguson posted to Facebook as stories about the ALS Ice Bucket Challenge, but that the stories received roughly the same exposure on Facebook (approximately 3.5 million Facebook “referrals” – appearances on a Facebook user’s screen – for each story between August 7-20th.) In the same time period, Crimson Hexagon calculated that there were roughly 1.5 times as many tweets about Ferguson as about the Ice Bucket Challenge.
The near-equal attention to Ferguson and the Ice Bucket Challenge that SimpleReach’s numbers suggest hardly seems like the substance of a controversy about algorithmic censorship, but it is worth noting that vastly more stories about Ferguson were posted to Facebook than stories about the Ice Bucket Challenge. The average story about the Ice Bucket Challenge was much more heavily promoted by Facebook’s algorithms (a factor of 8x) than the average story about Ferguson. Sociologist Zeynep Tufekçi suggests that this disparity may have been more significant early in the Ferguson story, which made the disparity between Facebook and Twitter more dramatic.
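To make the arithmetic behind that “factor of 8x” concrete, here’s a minimal back-of-envelope sketch. The absolute story counts below are hypothetical placeholders; only the 8:1 story ratio and the roughly equal ~3.5 million referral totals come from the SimpleReach figures cited above.

```python
# Hypothetical story counts preserving the reported 8:1 ratio
ferguson_stories = 8_000
ice_bucket_stories = 1_000

# SimpleReach reported roughly equal referral totals for each topic
ferguson_referrals = 3_500_000
ice_bucket_referrals = 3_500_000

# Average exposure per individual story
per_story_ferguson = ferguson_referrals / ferguson_stories      # 437.5 referrals/story
per_story_ice_bucket = ice_bucket_referrals / ice_bucket_stories  # 3,500 referrals/story

# How much more heavily the average Ice Bucket story was surfaced
promotion_ratio = per_story_ice_bucket / per_story_ferguson
print(promotion_ratio)  # 8.0
```

In other words, equal totals spread over eight times as many stories means the average Ferguson story was surfaced one eighth as often, whatever the absolute counts were.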
Tufekci goes on to observe that while Twitter (at present) is a “neutral” provider, simply delivering you tweets from the accounts you’ve chosen to follow, Facebook algorithmically “curates” your feed, offering you a selection of content that it believes you will find most interesting. She points out that this is a serious danger to democratic discourse. Facebook could censor our information flows and it would be difficult for us to determine that such censorship was taking place. The algorithms Facebook uses are opaque, which means conversations about how the algorithm works sound a bit like speculation about divine will. Tufekci suggests that Facebook may have altered their algorithm overnight to give Ferguson stories more prominence, which may be true but is impossible to verify. Others have speculated that Facebook tunes its algorithm to favor “happy news” and discourage serious news. Again, this is possible – and could even reflect lessons learned from Facebook’s mood manipulation experiments – but unverifiable.
Here are some of the things we do know about Facebook, which may explain why news appears differently on different social media platforms.
– It’s hard to search Facebook. When news breaks, Twitter users enter search terms and hashtags and receive an updated stream of tweets on the topic in question. On August 9th, I was able to follow reports from Ferguson and quickly choose Twitter users to follow who appeared to be at or near the scene. On Facebook, searching for “Ferguson” gives you the names of many people named Ferguson who Facebook thinks you might know. (You can, conveniently, choose to “Like” the city of Ferguson, which is probably pretty far from what most people searching for Ferguson want to do.)
– Non-reciprocal following makes it easier to diversify who you follow on Twitter. If I see that St. Louis-based rapper Tef Poe is reporting on Twitter regarding Ferguson, I can follow him, and I will see any public tweets he makes in my timeline. On Facebook, I need the approval of someone to follow their personal feed – they need to confirm our friendship before I can read what they’re saying.
It’s typical on Twitter for people to start following accounts of people they don’t know personally, especially when those people are eyewitnesses to or well-informed commenters on an event. I followed a set of users in Egypt while the Tahrir protests were taking place and stopped following many afterwards – I’ve done the same with Ferguson. This is much more difficult to do on Facebook – the platform is designed to connect you with friends, not sources, and to maintain those friendships over time.
– People have a relatively small number of Facebook friends – the median figure is 200 – and many white American Facebook users likely have few or no African-American Facebook friends. This isn’t a phenomenon specific to Facebook – it’s a broader reflection of American demographics and patterns of homophily, the tendency of “birds of a feather” to flock together. Robert Jones at the Public Religion Research Institute used data from the 2013 American Values Survey to estimate the racial composition of people’s social networks. He projects that the average white American has only one black friend, and that 75% of white Americans have entirely white social networks. (There’s an interesting debate about PRRI’s methodology, with Eugene Volokh arguing that it’s hard to estimate weak social ties by asking people about people they confide in.)
We can build on these numbers to speculate about what might be going on with Facebook and stories about Ferguson. Most people who follow Facebook believe that the company’s algorithm favors stories likely to be shared by a lot of people in your social network – Facebook favors viral cascades over surfacing novel content. If few users in your circle of friends are sharing stories about Ferguson – which is a distinct possibility if you are white and most of your friends are white – Facebook’s algorithm may see this as a story unlikely to “go viral” and tamp it down, rather than amplifying it, as it did with the ALS stories. On the other hand, if many of your friends are sharing the Ferguson story – likely if you have many African-American friends or if you follow racial justice issues – your corner of the Facebook network could be part of a viral cascade.
A new paper from Pew and Rutgers suggests another reason why Ferguson might have had difficulty spreading on Facebook. Keith Hampton and the other authors of the study found that people are acutely attuned to what conversations are happening on social networks and are unlikely to bring up topics perceived to be controversial. The users the authors surveyed were half as likely to bring up controversial topics on Facebook or Twitter as they were in face to face conversations. The exception was when they believed their friends on social networks agreed with their positions, in which case they were more likely to bring up the topic on social networks. In the case of Ferguson, this suggests that a Facebook user, unsure of whether her friends share her opinion that the police overreacted in Ferguson, might be more reluctant to post a Ferguson story on Facebook than she would be to bring up the situation in conversation. If there were a cascade of stories on Ferguson, she would have a social signal that the topic was acceptable and would be more likely to participate. On Twitter, where it’s not uncommon to follow dozens of people you don’t know well, it’s easier to interpret those social signals than on Facebook, where you are more likely to know an ethnically homogeneous set of friends.
I’m far from immune to these echo chamber effects. One of my students surprised me with a link to a USA Today article reporting that more money had been raised to support Darren Wilson, the police officer accused of killing Brown, than had been raised by Brown’s family. Few of my Facebook or Twitter friends were raising money for Officer Wilson, and so I was surprised to discover that a successful campaign was being held on his behalf. My ignorance of the campaign to support Wilson suggests that I have my own diversity issues in social media – if I want a more nuanced understanding of Americans’ reactions to Ferguson, I need to increase the ideological diversity of the people I follow online.
Special bonus: Jon Stewart takes the Ferguson challenge, which turns out to be utterly unlike the ALS Ice Bucket challenge.
I wrote a book review, of sorts, last week about Walter Isaacson’s book on Steve Jobs and my concern that biographies, as a genre, celebrate a “great man” theory of history. While I remain convinced that we need more biographies of teams, of successful collaborations (an idea that Nathan Matias furthers in his post today on acknowledgement and gratitude), I do have a dark secret to admit: I periodically dream about becoming a biographer.
This isn’t because I believe in the biography as a form. It’s because there are people I find so fascinating, I’d enjoy spending a couple of years thinking about how they became who they are or were, and how their personal stories give us a picture of what was possible at different moments in time. I asked a room full of students and colleagues who they’d most like to read a biography of, and the responses were a fascinating picture of my friends as individuals and as part of a group trying to invent the field of civic media.
When the question came around to me, I told the room that I wanted to read the biography of Afrika Bambaataa, one of a few men who can reasonably claim the title “Godfather of Hip Hop”. What I didn’t admit is that I’ve periodically considered dropping my academic pursuits and researching this fascinating figure.
We’re getting to the moment in history where thoughtful popular books are being written about hiphop’s early years and innovators – Jeff Chang’s Can’t Stop Won’t Stop is extensively researched and thoughtfully written, and Ed Piskor’s Hip Hop Family Tree has a visual style that recalls the early 1980s better than any text could.
Ed Piskor talks about his Hip Hop Family Tree project
Throughout volume one of Piskor’s beautiful history, Bambaataa recurs as an iconic figure, looming over an interchangeable crowd of short-lived MCs and DJs, as a future-looking visionary. Bambaataa was a leader of the Black Spades gang in the Bronx before deciding to dedicate his formidable charisma and organizing skills towards building the Universal Zulu Nation, a group that was part hip hop music and dance crew and part consciousness-raising Afrocentric cosmopolitan social club. Raised in the Bronx River Projects by his activist mother, he traveled to Nigeria, Equatorial Guinea and the Ivory Coast after winning an essay contest run by the New York City housing authority, a trip that led Bambaataa to adopt the identity of an African chieftain and lead his crew of former gangsters into a new artistic life of “peace, love and having fun”.
Throughout the early years of hip hop, Bam was a step ahead of his rivals. Other DJs would look over his shoulder to determine which eclectic selections Bam was using as beats – adopting a trick from DJ Kool Herc, Bam would soak the labels off his records and replace them with labels from unrelated albums, leading rivals to purchase legendarily bad albums in the hopes of replicating his sound. (It’s hard to know whether tales of Bambaataa rocking a party with two copies of the Pink Panther theme are authentic musicology or an unintentional consequence of this tactic.) While other DJs’ sets had MCs asking the audience their zodiac signs (early hip hop was a direct descendant of disco), Bam was playing Malcolm X speeches over his beats. (I like to think of Keith LeBlanc’s No Sell Out, sometimes cited as the first recording featuring digital samples, as a Bambaataa tribute.) When everyone else followed Bambaataa into the crates, crafting their tracks around James Brown and P-Funk, Bam had moved on to sampling Kraftwerk, building “Planet Rock” and inventing the entire genre of Electro.
Planet Rock, 1982
At some point, hip hop stopped following Bambaataa. After about 1986, sampling ruled hip hop, blossoming until it was killed by the Bridgeport Music decision. Electro has influenced every generation of dance music since the early 80s, but you can instantly place any track with rapping and chilly synths as coming from the lost sonic territory of 1982-1985. More tragically, after Bam led gang members out of the streets and into the dance club, Ice-T, BDP and NWA led hip hop out of the clubs and back into the gang life.
“Surgery”, (1984) World Class Wreckin Cru, featuring Dr. Dre. Yes, THAT Dr. Dre. Look it up.
Somewhere there’s a parallel reality in which Afrika Bambaataa is the best known name in hip hop and Dr. Dre is a little-known electro DJ. It’s an alternate dimension where Bambaataa added laser fusion propulsion to P-Funk’s Starship and flew music into orbit around Jupiter rather than having it crash in South Central. In that parallel universe, the Universal Zulu Nation got Angela Davis elected president in 1988 and Bambaataa DJ’d the year-long party to celebrate the intergalactic peace accord of 1999, in which all interpersonal conflicts were put aside towards the shared goals of
“peace, unity, love and having fun“.
Instead, Bambaataa has remained an honored and (insufficiently) celebrated hiphop pioneer, remembered more for one unforgettable track than for his epic social hack in the Bronx or his subsequent activism (including Hip Hop Against Apartheid and Artists United Against Apartheid). Fortunately, the man is starting to get the respect he deserves, from an unusual corner: academe.
In 2012, Cornell University gave Bambaataa a three-year visiting scholar post. Bambaataa responded by donating his legendary record collection to Cornell’s Hip Hop Collection. This has presented an interesting curatorial challenge – the collection contains 40,000 albums, many of them with notes, flyers, press releases or other materials attached, all of which need to be scanned or digitized for posterity. For the past year, archivists have been cataloging the collection, sometimes in public, in Gavin Brown’s gallery in Greenwich Village.
From a slideshow of the Bambaataa collection on Okayplayer
The public archiving project has attracted a raft of contemporary DJs desperate to spin the Godfather’s discs. Joakim Bouaziz was one of the lucky DJs invited to the gallery, and he recorded part of his set spinning his favorites from the collection. No need to kick yourself for missing the gallery show – Cut Chemist and DJ Shadow are touring the US and Canada this fall, spinning the records live as part of their work building a Bambaataa tribute mix.
As for the biography? Bambaataa has been promising an autobiography since the mid-1990s. Let’s hope the revival of interest in his records leads to some helpful pressure on the man to put aside pressing Zulu Nation business for a few weeks and explain to us all What Would Bambaataa Do.
While I’m waiting for a Bambaataa autobiography, my guess is that a book that answers the questions I have would need to be a biography of social movements at least as much as the story of a single individual. It’s not a coincidence that hip hop grew up in the Bronx at a moment when New York City’s physical infrastructure was crumbling and the Bronx had become synonymous with danger and decay. (Fort Apache, The Bronx came out in 1981, two years after Rapper’s Delight.) The physical and conceptual isolation of the Bronx from the rest of the city and the world allowed a culture to evolve in comparative isolation, which means that a history of Bambaataa needs to be a history of urban planning, of urban poverty and systemic racism, of the US’s housing projects. It would be a history of street gangs in New York as well as a history of Afrocentric philosophy and resistance. It would reach back to The Last Poets and ahead to Native Tongues, and explore the rise of P-Funk’s Mothership and Sun Ra to understand “the Afro-Alien diaspora”. It’s more book than I am capable of writing, but damn, I hope someone takes it on.
For a taste of what those Bronx parties sounded like in 1982, here’s a collection of live recordings of early Bambaataa sets.
A friend recently posted a video on Facebook, a 1997 news story from ABC’s Nightline about Tripod, the social media company I helped build in Williamstown, MA from 1994-1999. The video sparked a wave of reactions in me: nostalgia for those past days, pride in the accomplishments of the friends I’ve kept up with, regret for losing touch with others, and bafflement that I would choose to wear flannel and overalls to show off our company to the world. (Perhaps my favorite moment in watching the video was discovering that we’d been interviewed by Deborah Amos, NPR’s Middle East reporter, who has subsequently become a respected friend.)
I’m not proud of all of the emotions that I experienced traveling 17 years into the past. Seeing Bo Peabody, our co-founder and CEO, skateboard into the office and declare that we sold “eyeballs” gave me a wash of anger, envy and frustration that characterized much of my time at the company. Bo playing CEO – something he did splendidly – was often intolerable to me when I was in my twenties, and surprisingly uncomfortable for me to watch in my forties.
Like many companies, Tripod was run by a team of executives who worked closely together – Tripod was somewhat pioneering in that our executives were mostly in their mid-twenties, often working our first serious jobs. (I realize that all promising tech companies now recruit VPs from middle school and issue them standard-order Zuckerberg hoodies in kids sizes, but this was still pretty radical in 1997.) Our company succeeded to the extent it did (never profitable, but sold at a good price for our investors, and the service still survives almost twenty years later) because we had a small, close-knit team of smart people with complementary skills. (One of those people now directs product design at Facebook. Another became chief marketing officer for Adap.tv and Rubicon, two pioneers in online advertising.)
I saw the team, its strengths and weaknesses as core to Tripod’s success. But whenever a journalist did a news story, it became the story of Bo, the founder, the solitary entrepreneurial genius who’d built our company.
I hated this. I thought it misrepresented our company, disrespected not only the contributions of the management team but the work done by the 60 smart people who built our products and served our users. Hearing me rant about this one too many times, Kara Berklich, our head of marketing, pulled me aside and explained that the visionary CEO was a necessary social construct. With Bo as the single protagonist of our corporate story, we were far more marketable than a complex story with half a dozen key figures and a cast of thousands. When you’re selling a news story, it’s easier to pitch House than Game of Thrones.
Bo, to his great credit, understood that it was his job to play this role and was good about separating the character and the reality – his reflections on Tripod, Lucky or Smart?, make clear that Bo knew he was lucky enough to assemble a smart team and smart enough to let the team make the important decisions. Having taken on that visible visionary role at nonprofit organizations, I also now understand how often that job sucks, how being the avatar for a vast project forces you to try and manifest qualities that the company has and which you, personally, lack.
I was thinking of this ancient history last week as I worked my way through Walter Isaacson’s biography of Steve Jobs. (In defense of my choice of beach reading – I read several better books on paper that week. But Audible’s selections are a lot more limited, and I wanted something to “read” as I walked on the beach.) Isaacson’s biography, written with Jobs’s cooperation and hundreds of interviews with Jobs, his family, friends and colleagues, is an enjoyable and uncomfortable read. I found it enjoyable because it’s another personal time machine of sorts – reading it, I remember my first time using Apple products, from the venerable Apple II through the laptops and phones I use today. It’s uncomfortable because it becomes increasingly clear that Steve Jobs was an angry, manipulative asshole who slashed and burned his way through the lives of most people he encountered.
Sue Halpern reviewed Isaacson’s book for NYRB and does a better job than I could ever hope to, raising uncomfortable questions about Jobs’s attempts to be both corporate and counterculture, reminding us that Apple’s “Designed in California” is made possible by being “Assembled in China” under often troubling circumstances. My favorite of her observations is that Isaacson manages to canonize Jobs even while revealing his most damning flaws: “…it is possible to write a hagiography even while exposing the worst in a person.” Jobs saw himself as an artist, Isaacson reminds us, and artistic geniuses are often too strange and pure to peacefully coexist with us lesser mortals.
When Jobs chose Isaacson to write his biography, it’s fair to assume he was aware of the author’s previous subjects: Benjamin Franklin, Albert Einstein and Henry Kissinger. The first two are routinely cited as exemplars of genius, and Kissinger may have his own dark claims to genius. It’s not hard to read Jobs’s selection of Isaacson as a way of inserting himself into the Pantheon.
Isaacson is happy to assist. The book was rushed into print when Jobs died, and Isaacson wrote a coda, excerpted in the New York Times, to cover Jobs’s death, funeral and legacy. In the New York Times excerpt, Isaacson makes clear that he saw Jobs as a genius, even if he wasn’t always conventionally smart. It was Jobs’s ingenuity and creativity, his ability to see a brilliant technical idea and turn it into something that consumers wanted that characterized his genius, Isaacson argues. One of the major themes of the book is the intersection of the sciences and the humanities – Jobs saw himself as standing at that crossroads, using his acutely honed sense of taste to predict the technical future and inspire the technicians to invent it.
This unusual form of genius, if that’s what it was, makes Jobs a particularly accessible role model for the tech industry. Many people who work on technology for a living are not Wozniak-level programmers. We flatter ourselves that we can contribute to the industry by helping those more gifted at writing code understand the needs of users, the importance of usability, the applicability of technical breakthroughs to unexpected new markets. Perhaps, like Steve, we can “put a dent in the universe” by connecting someone’s technical innovations with new markets.
People who can bridge between engineers and end users are important, necessary and often hard to find. It’s harder than it might appear to build these bridges in ways that respect and appreciate all those involved in building and marketing new technologies. In finding ways to bridge constructively and respectfully, Jobs is a lousy role model much of the time. The answer to “What Would Steve Jobs Do” is often “bully someone” or “throw a tantrum”. Unfortunately, it’s often easier to emulate Jobs’s less attractive personality traits than it is to replicate his design sensibilities.
Taking a break from Isaacson’s book, I read a thoughtful essay by Joshua Wolf Shenk, a preview of his new book, Powers of Two. Shenk argues that the myth of the solitary genius has dominated much of our thinking about creativity and obscures the fact that many people we know as geniuses worked in pairs or in larger teams. Shenk is particularly interested in creative pairings, pointing out that Einstein worked through the theory of relativity with Michele Besso, that Picasso invented Cubism with Georges Braque and that Dr. Martin Luther King co-led the civil rights movement with Ralph Abernathy and others.
There’s a way to read Isaacson’s biography in support of Shenk’s argument. Jobs was most productive as a serial collaborator, and was often disastrously unsuccessful when he wasn’t challenged by a strong partner or a team he respected. Jobs built Apple Computer on the brilliance of Steve Wozniak’s machines, led Pixar to dominance over Disney’s animation business by hitching his star to filmmaker John Lasseter, and reinvented Apple as a music and phone company by partnering closely with Jony Ive. (Search for any of these men and you’ll find a wealth of articles and books declaring them geniuses.) Jobs was good about crediting these collaborators and, occasionally, teams of collaborators – he saw the Macintosh as a team effort and honored team members at subsequent Apple product launches until his death.
When he didn’t have a strong collaborator or team, Jobs was often lost, as he was when Woz disengaged from Apple after the Apple II, when Jobs founded NeXT, or during the years Jobs dumped tens of millions into Pixar as a technology company, before Lasseter’s films demonstrating Pixar hardware took the company out of obscurity. In retrospect, this is obvious – Jobs didn’t write code or build prototypes. Instead, he shaped and guided the work that others did, making it better. Without a worthy collaborator, Jobs’s deeply impressive skillset was insufficient and often irrelevant.
It doesn’t lessen Jobs to recognize that creative genius comes from collaboration. Letting go of the idea that Shakespeare was a solitary genius writing masterworks in an attic without outside input, and accepting that he was a member of a popular theatre company, incorporating the influences and feedback of other writers and actors into his creations, makes him more fascinating to me, not less. Since we don’t have much access to the historical details of Shakespeare’s life, it’s easier to see these collaborative dynamics in modern biographies. Jobs may be one of the best examples of the collaborative genius idea, as the solitary genius narrative simply makes no sense in considering his history. We can imagine Shakespeare alone in a garret or Einstein puzzling out equations alone at a blackboard, but Jobs alone is just an angry vegan too picky about design to furnish his own mansion.
In writing a biography, it’s natural to lionize the protagonist, if only to explain why she or he merited the author’s attention. Isaacson is better than some in featuring Jobs’s collaborators and influences, but the form ultimately dictates that the book is about a single individual, not pairs and teams of collaborators. The narrative arc is that of Jobs’s life, not the life of the companies he built, the products they created or the industries they influenced.
How do we tell the stories of partnerships and collaborations? Shenk’s book promises to tell the stories of creative pairings, both visible ones like Lennon and McCartney and invisible ones like that of Vladimir and Vera Nabokov. But his essay hints at the intriguing problem of telling stories of more complex collaborations, like the one I experienced at Tripod. How do we tell a story about creativity and collaboration at Wikipedia that doesn’t become a biography of Jimmy Wales? Is there a story about Linux that’s not a portrait of Linus Torvalds, an examination of Free Software that isn’t a character sketch of Richard Stallman? Not only are humans creatures who think in terms of stories, we are social beings, which means there is nothing we are so attuned to as the life stories of successful people.
Nathan Matias, a brilliant poet, literary scholar and software developer (who happens to be my doctoral student) has been working on better systems to acknowledge and credit the dozens of collaborators he’s worked with on his various projects. His personal website features almost a hundred collaborators – clicking on the icon for any of us reveals projects we’ve worked on with Nathan. It’s a first step towards a broader effort at designing acknowledgement on the web, and a key part of Nathan’s research on collaboration that leverages cultural and cognitive diversity. If we want to encourage diverse collaboration (and the end of Rewire makes a case for why we need to do so), we need to figure out how to recognize and celebrate people who work as creative teams, not just those who demand to be celebrated as geniuses.
Steve Jobs changed the world, or at least some highly visible corners of it. The story of his life, his successes and his failures is an important one for anyone who designs products and tools for large audiences. It would be a shame if the message we took from Isaacson’s book were that success comes from arrogance, self-certainty and cruelty. Until someone discovers a better way to write biographies of collaboration, that’s a message many readers will take away.
The Atlantic was kind enough to run a lightly edited version of this post. I’m posting here after their publication so that this remains in my archives.
Pharrell Williams is a happy man, but he’s crying. He’s one of the most in-demand record producers in the world, and had a hand in the two hottest songs of 2013, “Get Lucky” by Daft Punk and “Blurred Lines” by Robin Thicke. While those songs were inescapable on radio and television last summer, Pharrell’s most recent hit, “Happy”, has taken a different path to prominence.
French director Yoann Lemoine and production team We Are From LA worked with Pharrell to create a unique video for “Happy”. The video is 24 hours long, and was shot all across Los Angeles, featuring dozens of celebrity cameos interspersed amongst shot after shot of people dancing happily. It took 11 days to shoot the video, though many of the shots were single takes. The video follows the course of the day in LA, with footage from dawn to dusk and through the night, with Pharrell appearing each hour.
The original “Happy” video
The video quickly spawned thousands of fan remakes, featuring happy workplaces, business schools and college dorms. Faced with a viral hit, Pharrell’s label, Columbia Records/Sony Music, has turned a blind eye to possible copyright violations, and one can easily spend hours on YouTube flipping from one fanvid to the next.
There’s a special subcategory of these videos that I think of as “georemixes”. The georemix builds on the idea that the original “Happy” video is a love letter to Los Angeles, a portrait of the city’s architecture, landscapes, people and spirit, and moves the party to a new location. More than a thousand georemixes of “Happy” exist, and they portray happy people on all six continents.
Pharrell, on Oprah, crying over “Happy”
Pharrell is on Oprah, watching a compilation of these remixes that bring his song around the world, from Detroit to Dakar. In the 30 seconds of the video Oprah shows, we catch glimpses of happy Taiwanese women on a spa day, Icelanders dancing on a glacier and Londoners strutting with Big Ben in the background. Pharrell’s reaction is the one many of us have had to the remixes of his video: he cries for a long time, overwhelmed not only by his success but by the experience of watching a simple idea – film yourself being happy – as it spreads around the world.
“Happy” is not the first video that’s been georemixed. Last summer, I gave a talk at the MIT8 conference focused on remixes of PSY’s Gangnam Style and Baauer’s Harlem Shake. In researching these localized remixes, my students pointed me to Jay-Z and Alicia Keys’s “Empire State of Mind”, remixed in remarkable fashion into “Newport State of Mind”, by comics M-J Delaney, Alex Warren and Terema Wainwright. (The parody was further parodied by Welsh rappers Goldie Looking Chain, who complained that the Newport parodiers lacked local knowledge and cred.)
The original “Empire State of Mind”
“Newport (Ymerodraeth State of Mind)”
The georemix dates back at least as early as 2005, when Lazy Sunday, produced by The Lonely Island and performed by Saturday Night Live’s Chris Parnell and Andy Samberg, was remixed into parodies like Lazy Muncie, showing midwest pride, and Lazy Ramadi, which replaces a search for cupcakes with a confrontation with Iraqi insurgents.
The Lazy Sunday georemix was born out of a mock East Coast/West Coast rap beef, which quickly set the tone for georemix videos. Each response is a retelling of the core story, transposed to a new location, bragging about local landmarks and habits. While the braggadocio in these remixes is pure parody, there’s a sense in which each of these videos makes a claim to share the stage with the original. YouTube’s related videos feature means that there’s a chance that some of the 2 billion viewers of PSY’s Gangnam Style video will encounter Zigi’s “Ghana Style”, a georemix that relocates Seoul to Accra and replaces PSY’s horse dance with Ghanaian Azonto. (And if not through YouTube, viewers may encounter Zigi through the hundreds of listicles that advertise “10 Best Gangnam Style Parodies”.)
Zigi’s Azonto version of Gangnam Style
I think of the georemix as a claim to attention, a way of demanding part of the spotlight that shines on a popular video. It’s a very basic demand: accept that we’re part of this phenomenon, too. In remixing Gangnam Style, Zigi sends the message that Ghana has YouTube, is clued into global cultural trends, has its own distinct sound and dance style to share with the world, and can produce videos as technically proficient as anything coming from other corners of the world. To me, “Ghana Style” reads both as lighthearted celebration of a catchy tune that truly went global, and a political statement about a world where culture can spread from South Korea to Ghana to the US, not just from the US and Europe to the rest of the world.
Of course, the georemix can also be purely political. Ai Weiwei’s Gangnam Style, titled Grass Mud Horse Style, moves the dance into his studio in Beijing and is filmed almost entirely within the walls of that compound, alluding perhaps to the artist’s frequent arrests and detentions. (If the location doesn’t set the theme, his appearance a minute into the video, spinning handcuffs, certainly does.) Other georemixes take on specific issues explicitly. Consider Dig Grave Style, a protest video made by students from China’s Henan province, in which dancers rise from the earth to protest the moving of graves from villages to open land for real estate development.
Dig Grave Style
Remixing a video is a shortcut to creating original content. The script is partially written – the creativity comes from changing the lyrics and the setting. The popularity of the Harlem Shake meme (which was georemixed around the world, and saw political georemixes in Tunis and Cairo) came in part from the extremely low levels of effort required to participate in the phenomenon – simply film people behaving in an ordinary way, then dancing like madmen in strange costumes and you’ve got your localized Harlem Shake.
“Happy” benefits from this low barrier to entry. There are Happy remixes that function as shot-by-shot remakes of the short, official Pharrell video, and there are vastly more that adopt the spirit of the video and transpose it to a local context.
Loïc Fontaine and Julie Fersing deserve much of the credit for the georemixes that made Pharrell cry, though neither has made a video. Fersing, an interior designer in Nantes, began collecting georemixes of “Happy”, searching YouTube to find new material. When she’d located 21 of the videos, she turned to her husband, Fontaine, who’d begun a career in website development nine months earlier. Together, they launched We Are Happy From, a portal that now hosts 1682 videos from 143 countries.
Once the site had attracted about 50 remixes, Fontaine contacted the We Are From LA production team, who gave the project their blessing. While Fontaine had not spoken to Pharrell when I interviewed him a month ago, he felt quite confident that the project was consistent with the artist’s wishes and would survive, pointing out that Sony had not taken action to remove the vast majority of remix and parody videos posted online. Indeed, the success of the song has likely had a great deal to do with the widespread participation online, giving “Happy” an online life and prominence that no amount of radio payola could provide. (Pharrell has embraced the notion of the georemix, urging people around the world to produce their versions of the video as part of a UN-sponsored International Day of Happiness.)
We Are Happy From is simply an index, pointing to videos hosted on YouTube, Daily Motion and other platforms. While the videos have a consistent look, usually opening with a black on yellow title screen (as Pharrell’s video does), Fontaine doesn’t provide any production help or guidelines. Still, the videomakers are clearly conscious of We Are Happy From’s role in promoting “Happy” videos as a global form, as many videos feature a screencap of the We Are Happy From map.
While anyone can submit a video to We Are Happy From, not all videos appear on the map. Fersing is the curator, and she watches all videos before adding them to the map. (As of April, the couple were receiving 20-40 videos a day.) Videos that are overly commercial or connected to political or social causes don’t make the cut. Fontaine explained that some French political parties produced Happy videos as campaign materials – We Are Happy From chose not to feature those videos. An Italian version of Happy with an environmental message was also not included, nor was Porto (un)Happy, which features activists dancing through unfinished construction sites in Porto Alegre, Brazil, along with subtitles that critique government spending on public works projects. (Manaus is unhappy as well.)
I asked Fontaine why he and his wife had chosen to become active curators of the project. It was a practical decision, Fontaine explained: “They say it’s black, someone else says it’s white. How am I to judge?” Rather than evaluating the validity of political claims, he would rather focus on what he sees as the core message of these remixes: “We Are Happy From is purely about the happiness. We just want to show a simple message about being happy about where we live.”
For me, as a student of civic media, the dissident videos excluded from the We Are Happy From map are the most interesting ones. Fontaine has kindly shared the list of rejected videos with me, and I hope to spend some time this summer watching those 500 remixes in the hopes of developing an understanding of how “Happy” can work as a script for advocacy (or how videomakers think it might act as that script). But for Fontaine and his wife, the mark of success wasn’t raising awareness for a cause or an issue – it was documenting the spread of happiness globally. When I interviewed Fontaine, he was celebrating the spread of “Happy” to Antarctica, with a video from French research station Dumont d’Urville.
The 1600 videos on We Are Happy From may not advocate for a political party or a cause, but they are “political”. When the residents of Toliara, Madagascar make their version of “Happy”, they’re making a statement that they’re part of the same media environment, part of the same culture, part of the same world as Pharrell’s LA. This assertion isn’t quite as anodyne as Disney’s “Small World After All” or the “I’d Like to Buy the World a Coke” campaign. Even with Fontaine and Fersing’s curation, we get distinct glimpses of how different it can be to be happy in different corners of the world: Happy in Dammam, Saudi Arabia features wonderfully goofy men, but not a single woman. Beijing is happy, but profoundly crowded and hazy – intentionally or not, the video is a statement about air pollution as well as about a modern, cosmopolitan city.
A few weeks ago, We Are Happy From added a video from Tehran, Iran to the map. If you don’t know where the video is from, it’s unremarkable. A dozen twenty-somethings, men and women, dance on a rooftop, wear silly outfits, and wave their legs while lying on a bed. It’s remarkable only if you know that women in Iran are forbidden to appear in public without their hair covered and that men and women are prohibited from dancing together in public.
Happy in Tehran
Given context, the video is an incredibly brave statement. The young women in the video covered their own hair with wigs, keeping themselves technically in line with local Islamic law, and kept clothing around so they could cover up if seen from neighboring buildings. One of the video’s stars, identified only as Neda, said, “We were really afraid. Whenever somebody looked out of a window or someone passed by, we ducked behind a door to make sure we were not seen.”
The makers of the video, forced to apologize on state television
Neda and her compatriots were right to be afraid. Six people involved with making the video were arrested and forced to appear on state television, testifying that they were tricked and duped into making the video. It is unclear what consequences the filmmakers will suffer beyond public humiliation, and a hashtag, #FreeHappyIranians, is emerging to protest their detention. Pharrell, to his credit, has tried to call attention to the situation:
It's beyond sad these kids were arrested for trying to spread happiness http://t.co/XV1VAAJeYI
— Pharrell Williams (@Pharrell) May 21, 2014
It’s clear from Neda’s interview with Iran Wire that the intention behind the video is precisely Fontaine and Fersing’s intention: “We wanted to tell the world that the Iranian capital is full of lively young people and change the harsh and rough image that the world sees on the news.” They chose a middle-class Tehran home to make the point that ordinary Iranians, not just the elite, were happy, creative, modern and globally engaged. And the video, with subtitles and credits in English, was clearly created for a global audience, designed to be part of the International Day of Happiness, though it was turned in too late for inclusion: “We want to tell the world that Iran is a better place than what they think it is. Despite all the pressures and limitations, young people are joyful and want to make the situation better. They know how to have fun, like the rest of the world.”
Perhaps a video that asserts that you and your friends are part of the wider world is political only if your nation has consciously withdrawn from that world. Perhaps it’s political any time your city, your country and your culture are misunderstood or ignored by the rest of the world. We Are Happy From is cultural politics in the best sense of the word, a good-natured assertion that what brings us together is more important than what divides us. That the Tehran video has led Pharrell to a different type of tears is a reminder of how powerful and threatening this sort of statement can be.