My alma mater, Williams College, begins the academic year with a convocation, a ceremony for seniors, faculty and a small number of alumni who are being honored with the college’s Bicentennial medal, an award for “distinguished achievement in any field of endeavor”. I was honored to be one of those medal recipients this year, and the college asked me to address the students. My remarks follow below. (Should you want to see me deliver the address, here’s the YouTube video.)
If the job of a commencement address is to offer students thoughts on how to exit college and enter the world, a convocation address can urge students to make the best of their remaining time in college. I wanted to try to connect my time at Williams more than twenty years ago to my professional life, to talk about the college’s new library and connect the year’s academic theme – The Book, Unbound – to my own work.
I’m posting the speech here at the request of a few faculty and students who were kind enough to ask. It may not make a ton of sense to my regular readers, who may not know about the rivalry between Williams and Amherst, two fine colleges in Western Massachusetts with lots of similarities and a long history of competition. I’m not above playing to the hometown crowd, so most of the laugh lines here are digs at Amherst and Lord Jeffrey Amherst, British Army commander and all-around nasty piece of work.
I’m honored and thrilled to be with you today for the Williams 2014 Convocation, for the dedication of the extraordinary and beautiful new Sawyer Library, and for this year’s conversation about “The Book, Unbound”. Given the circumstances, I’ve been thinking back to one of my favorite Williams origin stories. You know it, I’m sure: how in September 1821, Zephaniah Swift Moore, the second President of Williams College, skulked out of town in the dead of night, leaving the wilderness of western Massachusetts to build Amherst in the tamer lands of the Pioneer Valley, taking with him not only 15 students but key volumes from the Williams College library.
It’s a fantastic story, just the sort of thing to justify our centuries-old rivalry with our neighbors to the east. Unfortunately, it’s not true. Yes, President Moore left, and yes, fifteen students left with him. And there’s a lack of clarity about where the 700 books that constituted Amherst’s original library came from. But there’s no evidence that Amherst’s library was seeded with purloined volumes, only records of votes by student societies not to move libraries along with the students who left the college.
It’s possible the story began as an excuse for the poor quality of the Williams library in the 1800s. In 1821, the library wouldn’t have been that hard to steal – at that point, Williams had two buildings, two professors, two tutors and 1400 volumes, not a huge expansion from the school’s original library, 360 volumes in a bookcase in West College. And students complained bitterly about the quality of those books, most of which were dusty theological texts – if you wanted to read non-religious literature at Williams for much of the 1800s, you would do better to turn to student-run literary and scientific societies.
It’s likely that the legend is much more recent, probably forming in the early 1960s. In an essay about the legend, Dustin Griffin points out that early histories of the college discuss President Moore’s departure and rivalries with Amherst, but not the story of the books, and that the story of the books wasn’t one his contemporaries knew in the early 1960s. But by the mid-1960s there is evidence of the tale taking hold: John Chandler, then dean of the faculty, cracked a joke about coming to take our books back while visiting Amherst’s new library in 1965. When I was here in the early 1990s, the theft of Williams’s library was presented as fact, a simple explanation for the inherent moral superiority of our institution over our rival… which was helpful, as many of us had applied to Amherst as well and needed solid grounding for our contempt.
So we have a myth that’s fairly recent, but which has powerful explanatory properties and enduring appeal. It’s worth picking at the myth and asking what it says about us as a culture that this is one of our origin stories, an explanation for our place in the world.
When I’ve heard the story, the theft of the books is always presented as the final straw: Yes, Amherst took our president and took our students. But can you believe they had the nerve to take our books! Bibliolarceny is somehow a more serious crime than other forms of theft – not, perhaps, as serious as proposing the extermination of Native Americans with smallpox blankets, but worthy of special consideration nevertheless.
Books have a special, symbolic meaning in our culture. The burning of books – whether by the Mongols when they sacked Baghdad in the 13th century, the burning of “degenerate texts” by the Nazis in the 1930s or American extremists burning the Quran today – isn’t just about the destruction of an object. It’s the symbolic destruction of a people, a culture and a way of thinking. Whether we’re banning books from library shelves, burning them or stealing them, we’re talking about shrinking the universe of cultural possibilities, limiting the number of different ways we can look at the world through the eyes of the authors.
I think that’s why this story has special significance in the context of a college. Even before 1965, when this story gained its currency, college was a place to expand your worldview. The process of packing up, leaving your hometown and going to live with a new set of people is constructed not just to give you access to a different set of teachers, but to a different and broader set of friends and influences. In 1965, when this myth took root, colleges themselves were shifting. In 1964, the Civil Rights Act mandated access to public schools for African Americans and for women. Williams became coeducational in 1970. When this myth arose, we were right on the cusp of the college experience changing: from one which exposed students to a world of mostly white men, to one which served as a bridge to a much wider, multicultural international world. Against that backdrop of the widening world, we have a story about part of a college’s community giving up, finding the challenge of building a community out in the wilds to be too difficult, and shrinking the horizons of those who stayed by taking their books.
One of the reasons I wanted to think about this story is that I wonder whether it has as much currency now as it did in the 60s, or even in the 90s. We’re at a very different moment in our relationship with books, our relationship with information, than we were even twenty years ago. The story of the stolen books is a story from the days of scarce information. Now most of us feel like we’re inundated with information, possibly drowning in it. How do we think about losing part of our library when we have an apparent infinity of information online?
I published a book last summer, Rewire, that looked at the question of how having access to an abundance of global information is changing what we know about the world. I had been a cyberutopian, someone who believed that the internet was going to make the world a smaller, more connected and more understanding place. This seemed pretty obvious to me – it used to be really hard to get news from sub-Saharan Africa or Central Asia – now you can read a Nigerian newspaper online or make a Skype call to Kazakhstan.
But a strange thing has happened as we’ve gotten access to more information from around the world – most of us are choosing to encounter less of it. We have to make thousands of decisions a day about whether we read a story about Ebola, a tweet from Ferguson, or a Facebook update from a high school classmate. In aggregate, most of us are getting much less international news than we did in an era of the daily newspaper and three television stations.
When we’re faced with a wealth of choices, we tend to opt for the familiar, for what we already know to be important. It’s a basic human tendency to pay more attention to members of “our tribe” than people we’ve never met and don’t have a reason to care about. This was a fine coping strategy for a world of disconnected villages, the world almost everyone lived in 500 years ago, but it’s deeply maladaptive for the connected world we live in today. We may not know anyone in Liberia, but it’s a pretty short plane flight from Monrovia to New York – problems that were distant have a way of becoming our problems very quickly.
Much of my work at MIT looks at questions of how we maintain a broad view of the world when we’re faced with an avalanche of information. It directly parallels the problem librarians face today: the challenge is no longer expanding a collection from 360 volumes to 1400 – it’s engineering serendipity. It’s making the library – or, in my case, the internet – both a place where you can take a deep dive into a subject you care about, and also a place where you can discover something unexpected and life changing.
One of the things I’ve learned in my research is that it’s much easier to pay attention to people than to places. If there’s someone you care about who’s from Haiti, if you’ve had the chance to travel there and meet people from Haiti, you’ll watch the news differently. You’ll have a connection to that place, a context for a story you hear. The events will be more real to you because Haiti is more real to you through the people you know there.
For ten years, I’ve been helping run a website called Global Voices, which uses citizen media – blogposts, YouTube videos, tweets – to bring readers news from around the globe. The reason we use citizen media is that it gives you a connection to ordinary people writing online as well as to the events they’re describing. For our readers and our community, the Arab Spring wasn’t just the story about a political upheaval – it was the story of our friends who were in the streets, in and out of prison and then in and out of the new governments.
I started working on Global Voices because I wanted to read more news about sub-Saharan Africa in the newspaper. That’s because I spent five years in Ghana helping Ghanaians build internet service providers and other technology businesses. And I started doing that because I spent a year in Ghana on a Fulbright grant studying xylophone music. What got me interested in that was Sandra Burton, who I believe should be considered a national treasure as well as one of our college’s greatest heroes. Along with Gary Sojkowski and the late Ernest Brown, Sandra founded Kusika, the African dance ensemble, which was the center of my community when I was at Williams. The strange and wonderful path my life has taken – from starting an early web company to building internet businesses in Africa, to working with media activists and journalists around the world, to teaching at MIT – leads directly back to the dance studio and to the computer labs, to the professors and students who were passionate about a world wide enough to include both Africa and the internet.
The next time you visit Sawyer Library, I’d ask you to think about the ways in which it’s carefully curated, designed to make it possible to get lost productively, to discover something unexpected but wonderful. Possibly the only thing at Williams more carefully curated is the class you are a part of. We’ve got an almost infinite capacity to put information on shelves physically or virtually, but the opportunity to be in this place, with these people, for four years is decidedly finite. I’m grateful for the effort that went into giving me a universe of a couple thousand people who challenged me and invited me to discover new ways of looking at the world.
This is something the college does very consciously, for the simple reason that who we know is going to help determine who we are. I don’t mean this in the narrow sense that, if the person sitting next to you founds the next Facebook, maybe you’ll get some stock options. I mean it in a much broader sense: that who you know, who you care about tends to determine how you view the world, what you pay attention to, and ultimately will shape your path through the world.
Like the library, like the internet, the class of 2015 is too big to know. But the challenge is the same as with a really great library: not just to explore what you already know and already care about, but to expand your picture of the world by expanding who you know and who and what you care about.
Here’s what Zephaniah Swift Moore took from Williams when he left for Amherst: he took 15 students, 20% of the student body. We can think of those mythical stolen books as shrinking our universe, limiting what we could have learned from those volumes. But we should think of losing those students in the same way, as losing the opportunity to see the world through a different set of eyes.
We’re always going to have to make choices about who we know, what we read, what we care about. We never get to read every book, even when there are only 360 on the shelves, and we’re never going to know the people around us as well as they deserve to be known. But we can make decisions to choose a wider world. In ways I never expected, Williams launched me into a world that’s wider than I had imagined. I am eternally grateful for this and I hope the same for you.
Writing this address was a great chance to read up on the early history of Williams and its library. Here are some of the sources I benefited from:
Dustin Griffin (Williams ’65) wrote a terrific essay, “The Theft of the Williams Library”, which I drew from heavily. I’m especially grateful to Griffin for the term “bibliolarceny”.
Steve Satullo (Williams ’69) has been researching the history of Williams libraries as the college has built and moved Sawyer Library. His essays have been very helpful for understanding the early days of the Williams library and its shortcomings.
For understanding the state of Williams College and the reasons Zephaniah Swift Moore and others believed it was important to move Williams from the wilderness toward the more settled Pioneer Valley, William S. Tyler’s 1895 “A History of Amherst College” is very helpful.
I spoke this morning at the tenth incarnation of BIF, Business Innovation Factory, an annual conference in Providence, RI that focuses on storytelling. It’s got a lot less product promotion and self-celebration than many conferences on innovation, and more personal stories, which is why I always enjoy attending. So I thought I’d share a personal story that I’m still digesting and the questions it raises for me. If it reads a little differently from most of my writing, the context helps explain why.
About a month ago, I wrote an article about a simple idea. I asked whether anyone really believed that advertising should be the main way we support content and services on the internet. Given how poorly banner advertising on the web works, given that nobody likes banner ads, and given that the current system puts users under surveillance, which in turn seems to inure us to government surveillance, I wondered whether there might be a better way.
Initially, the piece did what I hoped for – it sparked a lively conversation about business models for the web, particularly about business models for news. My friend Jeff Jarvis, whose faith in Google inspires envy in leaders of the Catholic Church, reassured me that once advertising got a little better, I’d like the ads I was reading online, really! I heard pitches for ethical advertising, for different approaches to subscriptions and to micropayments. I’d gotten something off my chest, had sparked a good conversation and was learning from the responses. As a blogger and a media scholar, life was good.
But something unexpected happened halfway through my day in the sun. For a brief and uncomfortable time, I became the most hated man on the internet. Here’s how that happened.
In writing about advertising, I wanted to talk about mistakes we’d made collectively, as an industry, not just beat up on Mark Zuckerberg or any other individual. So I took responsibility for my own small contribution to making the world an ugly, ad-filled place. I admitted, somewhat sheepishly, that almost twenty years ago, I invented the pop-up ad.
My intentions were good. (Please stop throwing things at the stage.) I was part of the team that started Tripod.com, an early webpage hosting company. My popup put an ad in a separate window from a user’s webpage, which was a way of distancing an advertiser from user-generated content, reassuring advertisers that they wouldn’t actually be on the same page as a paranoid essay about government conspiracies or a collection of nude images. I thought it was a pretty good solution. I’m really, really sorry.
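For those who weren’t building web pages in the 1990s, the mechanism was almost embarrassingly simple. Here’s a minimal sketch of the idea in modern TypeScript terms – not Tripod’s actual code, just an illustration of how an ad can be loaded in its own window, separate from the member’s page:

```typescript
// Minimal sketch of the pop-up idea (illustration only, not Tripod's original code).
// The trick: load the advertiser's banner in a *separate* browser window,
// so it never shares a page with the member's user-generated content.
function showPopupAd(adUrl: string): void {
  // window.open creates a new, small browser window sized for a banner creative
  const adWindow = window.open(adUrl, "sponsorAd", "width=468,height=80");
  // bring the ad window in front of the page the visitor actually asked for
  adWindow?.focus();
}

// Called when a visitor loads a member's homepage:
showPopupAd("https://example.com/banner.html"); // hypothetical ad URL
```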
My editor at the Atlantic, Adrienne LaFrance, is a lot smarter about her readers than I am. She knew that this admission, which I hid as a parenthetical comment, would be far more interesting to most readers than a 3000-word essay on advertising and a culture of surveillance. She did a brief interview with me about the process of creating the ad and wrote a 300-word story titled “The First Pop Up Ad”.
And then things got weird.
Yes, that’s Jimmy Fallon and Conan O’Brien making fun of me. And yes, that’s one of the strangest things that’s ever happened to me.
Adrienne’s Atlantic piece, lightly rewritten, appeared in about forty news outlets over the next 24 hours. I got loads of interview requests and I did one of them before realizing that this was a very bad idea, and that even normally staid news outlets like the BBC would be far more interested in my “confession” than in the broader argument about advertising and surveillance. And then the emails and the tweets started to come in, first cursing me out in English, and then in Turkish, Portuguese, Chinese and Croatian as the story spread globally.
Andy Warhol predicted that, in the future, we would all be famous for 15 minutes. There’s another possibility: on the internet, we will all be intensely loathed for about 15 seconds.
Let me just pause for a moment and mention that my no good, very bad day on the internet is approximately equivalent to what many of my friends refer to as “Friday”. Which is to say, if you are an opinionated woman who writes online, you likely face ugly, misogynistic bullshit on a daily basis significantly more acute than the tongue-in-cheek death threats I received once this story took off. I don’t in any way mean to compare the mild abuse I faced at a moment of extreme visibility to the sorts of routine, everyday harassment many smart women face for merely expressing themselves online.
That said, I do now understand why someone would choose to go offline rather than wallow in hostility. I ended up taking a week entirely offline while things blew over, until only one in five emails was a death threat.
Of all the tweets hoping for my swift and painful demise, I was particularly struck by one written by a user in southeast Asia, whose tweet read, more or less: “@ethanz I do not accept your apology. This is your Frankenstein monster. You made it, you should kill it.”
I really liked this tweet. There’s something refreshingly simple about the idea that someone – one guy – could be responsible for things in the world that are badly broken. And once we find that guy, we can either string him up or demand that he fix it. It’s the inverse of the great man theory of history, where we declare Edison the great man who commercialized electricity and brought about the modern world, cutting out Tesla, Westinghouse and hundreds of others. This is the rat bastard theory of history, where if only that one SOB hadn’t put pop up ads on user homepages, the world would be a slightly pleasanter place.
The rat bastard theory is helpful because it gives you a single, concrete individual to hate. Back in my darkest, saddest days, when I used to use Windows, I really enjoyed hating Bill Gates, despite the fact that my anger was clearly misplaced. It’s clear that there were all sorts of design decisions that made Windows agonizing to use in the late 1990s, not an evil plan from Bill Gates, who has turned out to be a pretty terrific guy. But it’s a lot easier to see the rat bastard than to see the whole problem.
The problem with the internet in the late 1990s is that we wanted it to be available to everybody in the world. That’s partially because it seemed like the fair thing to do, and partially because we genuinely loved the web and wanted to evangelize for it globally. But most people didn’t know why the web was cool, why they’d want to build a homepage or send status updates to friends, so they weren’t about to pay us to use it. And so we ended up with the only business model we could think of – broadcast advertising. You’ll get our services for free, and we’ll demand some of your attention to sell to advertisers in exchange. Basically, we didn’t know how to pay for the internet for everyone, so we decided to make the internet work the way broadcast television worked.
We had a failure of imagination. And the millions of smart young programmers and businesspeople spending their lives trying to get us to click on ads are also failing to imagine something better. We’re all starting from the same assumptions: everything on the internet is free; we pay with our attention; our attention is worth more if advertisers know more about who we are and what we do; and we start businesses with money from venture capitalists who need those businesses to grow explosively if they’re going to make money.
But there are people imagining something very different. Maciej Ceglowski, whose brilliant talk got me thinking about the broken state of the web, has built a bookmarking service called Pinboard that’s really cheap – about $5 – but not free. He did it because he imagined something different: not a site to sell to another company, but a service he wanted to have available to the world, one that’s been profitable since the day he started it. WhatsApp is a wildly popular messaging service that charges users $0.99 a year, and it had over 400 million users before Facebook bought it – an amazing example that people are willing to pay for tools they need and use. I just learned about a social network called Connect Fireside that’s designed for the mobile phone and optimized for sharing photos with your family and close friends, not with the whole world. It’s not cheap – it’s $20 a month right now – and who knows if it will survive, but it’s exciting to see someone build something based on a different set of assumptions.
Once you get rid of those assumptions – everything is free, the user is the product being sold to advertisers, and the goal is to be a venture-backed billion-dollar business – lots of things become possible. My friends in Kenya got sick of losing internet connectivity every time the power went off, so they went onto Kickstarter and funded BRCK, a portable internet router for the developing world that relies on the mobile phone network. The people who paid for it were people who wanted and needed the device, so they were willing to pay for it to be built – crowdfunding, as a model, asks that you give up the assumption that people have to instantly receive the things they buy. Turns out that sometimes we’re willing to pay for something in the hope that it might come to pass, someday.
Turns out we’re also willing to pay for things because we love them and we want them to exist. I’ve become a massive fan of a deeply strange podcast called Welcome to Night Vale, which might be described as a cross between A Prairie Home Companion and the Twilight Zone. The podcast is free, but listeners are encouraged to donate $5 a month, and many do… Many more come to live shows, buying tickets and merchandise, which has allowed the creators to expand the podcast, bring in new actors and musicians, launch a 16-city tour of Europe and write a book about the Night Vale universe. Putting something beautiful and strange out into the world and hoping people will love it is not the most reliable business model, but it sometimes turns out to be viable.
Remember my friend on Twitter who wanted me to kill Frankenstein’s monster? He got part of it wrong in assuming that I’m the rat bastard solely responsible for the situation we’re in today. But he got part of it right. He’s right that people like me can and should be trying to fix things.
The things that are broken about the internet today – and there are a lot of them – are the product of design decisions that fallible, mortal human beings have made over the past twenty years. Twenty years is not a long time. I have t-shirts older than the world wide web. The web wasn’t built by enlightened geniuses whose trains of thought we could never comprehend – it was built by idiots like me, doing the best we could at the time. It’s possible to look at every technical and design decision that’s led to where we are today and make different choices than the people who made them.
There are a lot of things I don’t like about how the web works today. I don’t like that our attention is constantly for sale. I don’t like that public sharing is the default. I don’t like that the web never forgets. I think the always-on, inescapable nature of the web is proving really exhausting for us as human beings. And I’m scared that the new public spaces, the places where we come together to debate the future, are owned and controlled by a few massive companies that have enormous potential power over what we’re able to discuss online.
But the mistake would be to assume that these shortcomings are inevitable, that they are simply natural consequences of how people interact online. The mistake is to stop questioning the assumptions about how the world works, to stop imagining ways things could be different and better.
I should probably mention that this is not a talk about internet business models.
Actually, it’s a talk about civics.
If you think the web is broken – and I do – you should take a look at American democracy. Here’s another case where fallible humans made design decisions that seemed like a good idea at the time and have had some clearly disastrous consequences. Sure, having professional representatives deliberate together and govern a nation is a pretty cool hack when the dominant governance technology of the time is feudalism. One person, one vote elections – once we finally got around to fully implementing them? Cool idea. Freedom of speech for individuals as well as organizations? Seems pretty important for the rest of the system to work.
Put these things together and we’ve somehow ended up with a system where the Democratic Congressional Campaign Committee recommends that new congresspeople spend 4-5 hours a day raising money, and only 3-4 hours meeting with constituents or working to craft legislation. It’s a system that constituents hate – it’s part of the reason Congress has a single-digit approval rating – and that congresspeople hate, too, which is why some incumbents are choosing to leave office. But it’s not hard to figure out how it happened, to trace the decisions that brought us to a deeply undesirable place. We can invoke the rat bastard theory and blame Chief Justice John Roberts for the Citizens United decision, but then we’re following the same fallacy – this is a failure of imagination, not the failure of any one person (or even five Supreme Court justices).
It’s possible to imagine something very different. Larry Lessig is working very hard to bring a new model to life, where every voter gets $50 of government funds to give to politicians, who can use the money to campaign. To imagine that model, you have to give up a bunch of assumptions: that you have a right to spend money to influence politics; that governments should try to do less, not more. And you have to be ready to cope with all the unintended consequences of the new system as well, of people selling their government funds to politicians through brokers, of social media becoming an endless campaign battleground.
We need a ton more of this: people questioning the assumptions that representation makes more sense than direct democracy, that decisions at the federal level are more important than those made locally, that money is convertible into power. We need lots of Lessigs taking on these sorts of imaginative experiments, because most of them are going to fail.
Here are two really big assumptions I want to question:
– that participating in representative democracy is the core act of what it means to be a good citizen
– and that everyone is going to participate in our democracy in more or less the same way
You know what it means to be a good citizen – you’re supposed to read the newspaper and keep up on events locally and globally, and vote every two years and maybe call your congressman if something really pisses you off. And you probably know that this isn’t going to make very much of a difference. And that your congressperson is heading into a paralyzed institution that’s rarely able to pass legislation.
So let’s question the assumption that fixing Congress – or, even more ambitiously, fixing politics – is the most important part of fixing citizenship. In the 50s and 60s, people figured out that, when you’re prevented from voting by law, public protests – marches, sit-ins, boycotts – are a critical part of citizenship. As Congress started passing civil rights legislation, activists learned that lawsuits were a critical tool to ensure that laws applied to everyone and were enforced fairly. We rarely think of suing the government as a way to be a good citizen, but it’s been critical in building the rights-based society we have today.
There are at least three ways I think people can be civically active even if they’re frustrated with paralysis in politics. Make media. When people saw tanks in the streets of Ferguson, Missouri to counter peaceful protests, they made media and started a conversation about the militarization of America’s police forces. When they saw a newspaper picture of Michael Brown looking threatening, they started tweeting #iftheygunnedmedown, asking whether the media were being fair in their choice of picture to portray the victim of a police shooting. Media is how we understand the world, and we can shape our public conversation by building civic media.
Make new things. I’m deeply frustrated that my government is surveilling my communications and those of millions of others worldwide, and I’m angry that so little legislative action has been taken in the wake of Ed Snowden’s revelations. But I’m really happy that the developers of tools like Tor, Mailpile, Redphone and Mailvelope are working to make it easy to encrypt email and phone calls and make it harder for anyone to surveil our communications. Make businesses. I’m concerned about climate change and I want to see a carbon tax, but in the meantime, I’m excited to see Tesla making electric cars that are sexy, and – for people as broke as me – excited that I drove here in a diesel car that gets almost fifty miles to the gallon. Building businesses that do well while doing good is one of the most powerful ways we can engage in civics today.
Here’s what’s tricky about expanding civics to include making media, making code and making money: it’s not a level playing field. Voting is something everyone can do – right now, making code or making media is easier for some people than others. We’re going to need to think hard about how we prepare the next generation of citizens for a world where their power comes not just from voting but from making, and where someone’s civics unfolds in the marketplace while someone else’s unfolds in Congress.
If you want to overcome failures of imagination – accepting that the web or our politics are inescapably broken – you’ve got to try something new. You’re probably going to fail. And when you do, I recommend that you try again. But first I recommend that you apologize. It feels really good. I’m sorry, and thank you for listening.
Michael Brown was fatally shot by police officer Darren Wilson on Saturday August 9, 2014. After his body lay in the hot street for four hours, Ferguson residents took to the streets to protest his killing. Brown’s death, the ensuing protest and the militarized police response opened a debate about race, justice and policing in America that continues today.
I heard about Michael Brown’s death for the first time on the evening of the 9th, through Twitter. Sarah Kendzior, a St. Louis-based journalist, was retweeting accounts of the protest, and other Twitter friends who write about racial justice issues, including Sherrilyn Ifill, the director of the NAACP Legal Defense Fund, were discussing the implications of Brown’s killing and the community and police response. I wrote a quick blog post, noting how little mainstream media response there had been at that point, and how much of the coverage had focused on the anger of the crowd, which one outlet termed “a mob”. At that early point, there was not a dedicated Wikipedia article focused on Brown’s death, nor had major national newspapers, like the New York Times, written about Brown. For many people outside the reach of local St. Louis media, Twitter was the medium that introduced them to the city of Ferguson.
The Pew Research Center confirms this picture. In a thorough analysis of early media coverage of the events in Ferguson, Paul Hitlin and Nancy Vogt of Pew note that cable news didn’t report the story until Monday the 11th, two days after the shooting. Cable news attention peaked on Thursday, August 14, after Ferguson police arrested two reporters and President Obama interrupted his vacation to address the situation. Twitter attention peaked at the same time, with 3.6 million Ferguson-related tweets on the 14th, but it’s worth noting that Twitter showed a steady and growing interest in the topic from Saturday the 9th onward.
This pattern of attention to Michael Brown’s killing contrasts sharply with early attention to the killing of Trayvon Martin by neighborhood watch volunteer George Zimmerman on February 26, 2012. My students Erhardt Graeff, Matt Stempeck and I studied media coverage of the Martin killing using our Media Cloud tools and found that the key factor in expanding the conversation about Martin’s death beyond Central Florida was a careful PR campaign orchestrated by Benjamin Crump, an attorney retained by Martin’s parents. These PR efforts led to national television coverage of the story, which in turn led to a surge of interest on Twitter and in other participatory media.
In the case of Michael Brown, over 140,000 Twitter posts mentioned Ferguson on August 9th, the day of the shooting. Twitter led other media in 2014, while broadcast media led in 2012. When Crump announced he would be representing the family of Michael Brown on August 11, two days after the shooting, there was no need for a PR campaign, as the story of Brown’s death was already gaining traction in the media.
Pew’s analysis of Michael Brown versus Trayvon Martin shows that attention to events in Ferguson built more quickly than attention to events in Sanford, FL. There are several reasons for this. The police response to protests added a dimension to the Ferguson story that was not present in discussions of Trayvon Martin – many of the tweets on August 14 focused on the arrest of working journalists and the use of military equipment by Ferguson police. (When Missouri Highway Patrol Captain Ron Johnson took over control of law enforcement response in Ferguson on August 14, ordering officers to remove riot gear, attention on Twitter dropped sharply, suggesting some of the attention was on policing tactics.)
Twitter’s reach has grown since 2012, and the power of “black twitter”, which Soraya Nadia McDonald describes as both “cultural force” and social network, to challenge racist narratives has grown. Outrage on Twitter over a book deal for a juror who acquitted George Zimmerman led to the deal being withdrawn. Noting the power of outrage channeled through a hashtag, some African American Twitter users began posting contrasting photos of themselves under the tag #iftheygunnedmedown, suggesting that media were emphasizing the narrative of Michael Brown as “thug” by using a photo in which Brown is displaying a peace sign (which many have read as a gang sign), rather than other photos in which Brown looks young and entirely unthreatening. The hashtag allowed Twitter users who had nothing to share about the situation on the ground to participate in the dialog about Ferguson by joining a conversation about larger issues of structural racism in American media.
While events in Ferguson received widespread attention on Twitter, some observers saw very different behavior on their Facebook feeds.
Twitter vs. Facebook: my tweetstream is almost wall-to-wall with news from Ferguson. Only two mentions of it in my Facebook news feed.
— Mark_Hamilton (@gmarkham) August 14, 2014
John McDermott, writing in Digiday, offered this elegant formulation: “Facebook is for Ice Buckets, Twitter is for Ferguson”. Using data from SimpleReach, McDermott suggests that there were roughly eight times as many stories about events in Ferguson posted to Facebook as stories about the ALS Ice Bucket Challenge, but that the two topics received roughly the same exposure on Facebook (approximately 3.5 million Facebook “referrals” – appearances on a Facebook user’s screen – for each topic between August 7th and 20th). In the same time period, Crimson Hexagon calculated that there were roughly 1.5 times as many tweets about Ferguson as about the Ice Bucket Challenge.
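Those numbers are worth unpacking with a quick back-of-the-envelope calculation. In the sketch below, the story counts are hypothetical placeholders – only the ratios come from the reporting above – but they show how roughly equal total exposure, spread over eight times as many Ferguson stories, implies that the average Ice Bucket story was surfaced about eight times as often:

```typescript
// Back-of-the-envelope sketch; story counts are hypothetical, ratios from the reporting above.
const totalReferrals = 3_500_000;              // ~3.5M Facebook referrals per topic, Aug 7-20
const iceBucketStories = 1_000;                // placeholder count of Ice Bucket stories posted
const fergusonStories = 8 * iceBucketStories;  // roughly 8x as many Ferguson stories posted

const perIceBucketStory = totalReferrals / iceBucketStories; // ~3,500 referrals per story
const perFergusonStory = totalReferrals / fergusonStories;   // ~438 referrals per story

// The average Ice Bucket story got roughly 8x the exposure of the average Ferguson story.
console.log(perIceBucketStory / perFergusonStory); // 8
```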
The near-equal attention to Ferguson and the Ice Bucket Challenge that SimpleReach’s numbers suggest hardly seems like the substance of a controversy about algorithmic censorship, but it is worth noting that vastly more stories about Ferguson were posted to Facebook than stories about the Ice Bucket Challenge. The average story about the Ice Bucket Challenge was much more heavily promoted by Facebook’s algorithms (by a factor of roughly 8x) than the average story about Ferguson. Sociologist Zeynep Tufekci suggests that this disparity may have been more significant early in the Ferguson story, which made the contrast between Facebook and Twitter more dramatic.
Tufekci goes on to observe that while Twitter (at present) is a “neutral” provider, simply delivering you tweets from the accounts you’ve chosen to follow, Facebook algorithmically “curates” your feed, offering you a selection of content that it believes you will find most interesting. She points out that this is a serious danger to democratic discourse: Facebook could censor our information flows and it would be difficult for us to determine that such censorship was taking place. The algorithms Facebook uses are opaque, which means conversations about how the algorithm works sound a bit like speculation about divine will. Tufekci suggests that Facebook may have altered its algorithm overnight to give Ferguson stories more prominence, which may be true but is impossible to verify. Others have speculated that Facebook tunes its algorithm to favor “happy news” and discourage serious news. Again, this is possible – and could even reflect lessons learned from Facebook’s mood manipulation experiments – but unverifiable.
Here are some of the things we do know about Facebook, which may explain why news appears differently on different social media platforms.
– It’s hard to search Facebook. When news breaks, Twitter users enter search terms and hashtags and receive an updated stream of tweets on the topic in question. On August 9th, I was able to follow reports from Ferguson and quickly choose Twitter users to follow who appeared to be at or near the scene. On Facebook, searching for “Ferguson” gives you the names of many people named Ferguson who Facebook thinks you might know. (You can, conveniently, choose to “Like” the city of Ferguson, which is probably pretty far from what most people searching for Ferguson want to do.)
– Non-reciprocal following makes it easier to diversify who you follow on Twitter. If I see that St. Louis-based rapper Tef Poe is reporting on Twitter regarding Ferguson, I can follow him, and I will see any public tweets he makes in my timeline. On Facebook, I need the approval of someone to follow their personal feed – they need to confirm our friendship before I can read what they’re saying.
It’s typical on Twitter for people to start following accounts of people they don’t know personally, especially when those people are eyewitnesses to or well-informed commenters on an event. I followed a set of users in Egypt while the Tahrir protests were taking place and stopped following many afterwards – I’ve done the same with Ferguson. This is much more difficult to do on Facebook – the platform is designed to connect you with friends, not sources, and to maintain those friendships over time.
– People have a relatively small number of Facebook friends – the median figure is 200 – and many white American Facebook users likely have few or no African-American Facebook friends. This isn’t a phenomenon specific to Facebook – it’s a broader reflection of American demographics and patterns of homophily, the tendency of “birds of a feather” to flock together. Robert Jones at the Public Religion Research Institute used data from the 2013 American Values Survey to estimate the racial composition of people’s social networks. He projects that the average white American has only one black friend, and that 75% of white Americans have entirely white social networks. (There’s an interesting debate about PRRI’s methodology, with Eugene Volokh arguing that it’s hard to estimate weak social ties by asking people about people they confide in.)
We can build on these numbers to speculate about what might be going on with Facebook and stories about Ferguson. Most people who follow Facebook believe that the company’s algorithm favors stories likely to be shared by a lot of people in your social network – Facebook favors viral cascades over surfacing novel content. If few users in your circle of friends are sharing stories about Ferguson – which is a distinct possibility if you are white and most of your friends are white – Facebook’s algorithm may see this as a story unlikely to “go viral” and tamp it down, rather than amplifying it, as it has with the ALS stories. On the other hand, if many of your friends are sharing the Ferguson story – likely if you have many African-American friends or if you follow racial justice issues – your corner of the Facebook network could be part of a viral cascade.
A new paper from Pew and Rutgers suggests another reason why Ferguson might have had difficulty spreading on Facebook. Keith Hampton and the other authors of the study found that people are acutely attuned to what conversations are happening on social networks and are unlikely to bring up topics perceived to be controversial. The users the authors surveyed were half as likely to bring up controversial topics on Facebook or Twitter as they were in face-to-face conversations. The exception was when they believed their friends on social networks agreed with their positions, in which case they were more likely to bring up the topic on social networks. In the case of Ferguson, this suggests that a Facebook user, unsure of whether her friends share her opinion that the police overreacted in Ferguson, might be more reluctant to post a Ferguson story on Facebook than she would be to bring up the situation in conversation. If there were a cascade of stories on Ferguson, she would have a social signal that the topic was acceptable and would be more likely to participate. On Twitter, where it’s not uncommon to follow dozens of people you don’t know well, those social signals are easier to pick up than on Facebook, where you are more likely to know an ethnically homogeneous set of friends.
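Putting the last two ideas together – a feed (or a social norm) that only surfaces a story once enough of your friends have shared it, plus homophilous friendship networks – here’s a toy simulation. It is emphatically not Facebook’s actual algorithm, just a sketch of how a share threshold on a segregated network can keep a story bottled up in the community where it started:

```typescript
// Toy model: a story only appears in (and is reshared from) your feed once at least
// `threshold` of your friends have shared it. Not Facebook's real algorithm – just an
// illustration of how thresholds plus homophily can confine a story to one community.
type User = { id: number; group: "A" | "B"; friends: number[]; shared: boolean };

const users: User[] = [];
for (let i = 0; i < 100; i++) {
  users.push({ id: i, group: i < 50 ? "A" : "B", friends: [], shared: false });
}

// Give everyone 10 friends; ~95% of ties stay inside their own group (homophily).
for (const u of users) {
  while (u.friends.length < 10) {
    const sameGroup = Math.random() < 0.95;
    const pool = users.filter((v) => v.id !== u.id && (v.group === u.group) === sameGroup);
    const pick = pool[Math.floor(Math.random() * pool.length)].id;
    if (!u.friends.includes(pick)) u.friends.push(pick);
  }
}

// Seed the story with 15 users in group A, then let the cascade run.
users.slice(0, 15).forEach((u) => (u.shared = true));
const threshold = 3;
for (let round = 0; round < 20; round++) {
  for (const u of users) {
    if (u.shared) continue;
    const sharingFriends = u.friends.filter((f) => users[f].shared).length;
    if (sharingFriends >= threshold) u.shared = true; // the story finally reaches this feed
  }
}

const reachedA = users.filter((u) => u.group === "A" && u.shared).length;
const reachedB = users.filter((u) => u.group === "B" && u.shared).length;
console.log(`reached ${reachedA}/50 in group A, ${reachedB}/50 in group B`);
```

Run it a few times and group A usually saturates while group B barely sees the story at all – which is, roughly, the dynamic the speculation above describes.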
I’m far from immune to these echo chamber effects. One of my students surprised me with a link to a USA Today article reporting that more money had been raised to support Darren Wilson, the police officer accused of killing Brown, than had been raised by Brown’s family. Few of my Facebook or Twitter friends were raising money for Officer Wilson, and so I was surprised to discover that a successful campaign was being held on his behalf. My ignorance of the campaign to support Wilson suggests that I have my own diversity issues in social media – if I want a more nuanced understanding of Americans’ reactions to Ferguson, I need to increase the ideological diversity of the people I follow online.
Special bonus: Jon Stewart takes the Ferguson challenge, which turns out to be utterly unlike the ALS Ice Bucket challenge.
I wrote a book review, of sorts, last week about Walter Isaacson’s book on Steve Jobs and my concern that biographies, as a genre, celebrate a “great man” theory of history. While I remain convinced that we need more biographies of teams, of successful collaborations (an idea that Nathan Matias furthers in his post today on acknowledgement and gratitude), I do have a dark secret to admit: I periodically dream about becoming a biographer.
This isn’t because I believe in the biography as a form. It’s because there are people I find so fascinating, I’d enjoy spending a couple of years thinking about how they became who they are or were, and how their personal stories give us a picture of what was possible at different moments in time. I asked a room full of students and colleagues who they’d most like to read a biography of, and the responses were a fascinating picture of my friends as individuals and as part of a group trying to invent the field of civic media.
When the question came around to me, I told the room that I wanted to read the biography of Afrika Bambaataa, one of a few men who can reasonably claim the title “Godfather of Hip Hop”. What I didn’t admit is that I’ve periodically considered dropping my academic pursuits and researching this fascinating figure.
We’re getting to the moment in history where thoughtful popular books are being written about hiphop’s early years and innovators – Jeff Chang’s Can’t Stop Won’t Stop is extensively researched and thoughtfully written, and Ed Piskor’s Hip Hop Family Tree has a visual style that recalls the early 1980s better than any text could.
Ed Piskor talks about his Hip Hop Family Tree project
Throughout volume one of Piskor’s beautiful history, Bambaataa recurs as an iconic figure, a future-looking visionary looming over an interchangeable crowd of short-lived MCs and DJs. Bambaataa was a leader of the Black Spades gang in the Bronx before deciding to dedicate his formidable charisma and organizing skills towards building the Universal Zulu Nation, a group that was part hip hop music and dance crew and part consciousness-raising Afrocentric cosmopolitan social club. Raised in the Bronx River Projects by his activist mother, he traveled to Nigeria, Equatorial Guinea and the Ivory Coast after winning an essay contest run by the New York City housing authority, a trip that led Bambaataa to adopt the identity of an African chieftain and lead his crew of former gangsters into a new artistic life of “peace, love and having fun”.
Throughout the early years of hip hop, Bam was a step ahead of his rivals. Other DJs would look over his shoulder to determine which eclectic selections Bam was using as beats – adopting a trick from DJ Kool Herc, Bam would soak the labels off his records and replace them with labels from unrelated albums, leading rivals to purchase legendarily bad albums in the hopes of replicating his sound. (It’s hard to know whether tales of Bambaataa rocking a party with two copies of the Pink Panther theme are authentic musicology or an unintentional consequence of this tactic.) While other DJs’ sets had MCs asking the audience their zodiac signs (early hip hop was a direct descendant of disco), Bam was playing Malcolm X speeches over his beats. (I like to think of Keith LeBlanc’s No Sell Out, sometimes cited as the first recording featuring digital samples, as a Bambaataa tribute.) When everyone else followed Bambaataa into the crates, crafting their tracks around James Brown and P-Funk, Bam had moved on to sampling Kraftwerk, building “Planet Rock” and inventing the entire genre of Electro.
Planet Rock, 1982
At some point, hip hop stopped following Bambaataa. After about 1986, sampling ruled hip hop, blossoming until it was killed by the Bridgeport Music decision. Electro has influenced every generation of dance music since the early 80s, but you can instantly place any track with rapping and chilly synths as coming from the lost sonic territory of 1982-1985. More tragically, after Bam led gang members out of the streets and into the dance club, Ice-T, BDP and NWA led hip hop out of the clubs and back into the gang life.
“Surgery”, (1984) World Class Wreckin Cru, featuring Dr. Dre. Yes, THAT Dr. Dre. Look it up.
Somewhere there’s a parallel reality in which Afrika Bambaataa is the best known name in hip hop and Dr. Dre is a little-known electro DJ. It’s an alternate dimension where Bambaataa added laser fusion propulsion to P-Funk’s Starship and flew music into orbit around Jupiter rather than having it crash in South Central. In that parallel universe, the Universal Zulu Nation got Angela Davis elected president in 1988 and Bambaataa DJ’d the year-long party to celebrate the intergalactic peace accord of 1999, in which all interpersonal conflicts were put aside towards the shared goals of “peace, unity, love and having fun”.
Instead, Bambaataa has remained an honored and (insufficiently) celebrated hiphop pioneer, better remembered for one unforgettable track than for his epic social hack in the Bronx or his subsequent activism (including Hip Hop Against Apartheid and Artists United Against Apartheid). Fortunately, the man is starting to get the respect he deserves, from an unusual corner: academe.
In 2012, Cornell University gave Bambaataa a three-year visiting scholar post. Bambaataa responded by donating his legendary record collection to Cornell’s Hip Hop Collection. This has presented an interesting curatorial challenge – the collection contains 40,000 albums, many of them with notes, flyers, press releases or other materials attached, all of which need to be scanned or digitized for posterity. For the past year, archivists have been cataloging the collection, sometimes in public, in Gavin Brown’s gallery in Greenwich Village.
From a slideshow of the Bambaataa collection on Okayplayer
The public archiving project has attracted a raft of contemporary DJs desperate to spin the Godfather’s discs. Joakim Bouaziz was one of the lucky DJs invited to the gallery, and he recorded part of his set spinning his favorites from the collection. No need to kick yourself for missing the gallery show – Cut Chemist and DJ Shadow are touring the US and Canada this fall, spinning the records live as part of their work building a Bambaataa tribute mix.
As for the biography? Bambaataa has been promising an autobiography since the mid-1990s. Let’s hope the revival of interest in his records leads to some helpful pressure on the man to put aside pressing Zulu Nation business for a few weeks and explain to us all What Would Bambaataa Do.
While I’m waiting for a Bambaataa autobiography, my guess is that a book that answers the questions I have would need to be a biography of social movements at least as much as the story of a single individual. It’s not a coincidence that hip hop grew up in the Bronx at a moment when New York City’s physical infrastructure was crumbling and the Bronx had become synonymous with danger and decay. (Fort Apache, The Bronx came out in 1981, two years after Rapper’s Delight.) The physical and conceptual isolation of the Bronx from the rest of the city and the world allowed a culture to evolve in comparative isolation, which means that a history of Bambaataa needs to be a history of urban planning, of urban poverty and systemic racism, of the US’s housing projects. It would be a history of street gangs in New York as well as a history of Afrocentric philosophy and resistance. It would reach back to The Last Poets and ahead to Native Tongues, and explore the rise of P-Funk’s Mothership and Sun Ra to understand “the Afro-Alien diaspora”. It’s more book than I am capable of writing, but damn, I hope someone takes it on.
For a taste of what those Bronx parties sounded like in 1982, here’s a collection of live recordings of early Bambaataa sets.
A friend recently posted a video on Facebook, a 1997 news story from ABC’s Nightline about Tripod, the social media company I helped build in Williamstown MA from 1994-1999. The video sparked a wave of reactions in me: nostalgia for those past days, pride in the accomplishments of the friends I’ve kept up with, regret for losing touch with others, and bafflement that I would choose to wear flannel and overalls to show off our company to the world. (Perhaps my favorite moment in watching the video was discovering that we’d been interviewed by Deborah Amos, NPR’s Middle East reporter, who has subsequently become a respected friend.)
I’m not proud of all of the emotions that I experienced traveling 17 years into the past. Seeing Bo Peabody, our co-founder and CEO, skateboard into the office and declare that we sold “eyeballs” gave me a wash of anger, envy and frustration that characterized much of my time at the company. Bo playing CEO – something he did splendidly – was often intolerable to me when I was in my twenties, and surprisingly uncomfortable for me to watch in my forties.
Like many companies, Tripod was run by a team of executives who worked closely together – Tripod was somewhat pioneering in that our executives were mostly in their mid-twenties, often working our first serious jobs. (I realize that all promising tech companies now recruit VPs from middle school and issue them standard-order Zuckerberg hoodies in kids’ sizes, but this was still pretty radical in 1997.) Our company succeeded to the extent it did (never profitable, but sold at a good price for our investors, and it still survives as a service almost twenty years later) because we had a small, close-knit team of smart people with complementary skills. (One of those people now directs product design at Facebook. Another became chief marketing officer for Adap.tv and Rubicon, two pioneers in online advertising.)
I saw the team, its strengths and weaknesses as core to Tripod’s success. But whenever a journalist did a news story, it became the story of Bo, the founder, the solitary entrepreneurial genius who’d built our company.
I hated this. I thought it misrepresented our company and disrespected not only the contributions of the management team but the work done by the 60 smart people who built our products and served our users. Hearing me rant about this one too many times, Kara Berklich, our head of marketing, pulled me aside and explained that the visionary CEO was a necessary social construct. With Bo as the single protagonist of our corporate story, we were far more marketable than a complex story with half a dozen key figures and a cast of thousands. When you’re selling a news story, it’s easier to pitch House than Game of Thrones.
Bo, to his great credit, understood that it was his job to play this role and was good about separating the character and the reality – his reflections on Tripod, Lucky or Smart?, make clear that Bo knew he was lucky enough to assemble a smart team and smart enough to let the team make the important decisions. Having taken on that visible visionary role at nonprofit organizations, I also now understand how often that job sucks, how being the avatar for a vast project forces you to try and manifest qualities that the company has and which you, personally, lack.
I was thinking of this ancient history last week as I worked my way through Walter Isaacson’s biography of Steve Jobs. (In defense of my choice of beach reading – I read several better books on paper that week. But Audible’s selections are a lot more limited, and I wanted something to “read” as I walked on the beach.) Isaacson’s biography, written with Jobs’s cooperation and hundreds of interviews with Jobs, his family, friends and colleagues, is an enjoyable and uncomfortable read. I found it enjoyable because it’s another personal time machine of sorts – reading it, I remember my first time using Apple products, from the venerable Apple II through the laptops and phones I use today. It’s uncomfortable because it becomes increasingly clear that Steve Jobs was an angry, manipulative asshole who slashed and burned his way through the lives of most people he encountered.
Sue Halpern reviewed Isaacson’s book for the NYRB and does a better job than I could ever hope to, raising uncomfortable questions about Jobs’s attempts to be both corporate and counterculture, and reminding us that Apple’s “Designed in California” is made possible by being “Assembled in China” under often troubling circumstances. My favorite of her observations is that Isaacson manages to canonize Jobs even while revealing his most damning flaws: “…it is possible to write a hagiography even while exposing the worst in a person.” Jobs saw himself as an artist, Isaacson reminds us, and artistic geniuses are often too strange and pure to peacefully coexist with us lesser mortals.
When Jobs chose Isaacson to write his biography, it’s fair to assume he was aware of the author’s previous subjects: Benjamin Franklin, Albert Einstein and Henry Kissinger. The first two are routinely cited as exemplars of genius, and Kissinger may have his own dark claims to genius. It’s not hard to read Jobs’s selection of Isaacson as a way of inserting himself into the Pantheon.
Isaacson is happy to assist. The book was rushed into print when Jobs died, and Isaacson wrote a coda, excerpted in the New York Times, to cover Jobs’s death, funeral and legacy. In the New York Times excerpt, Isaacson makes clear that he saw Jobs as a genius, even if he wasn’t always conventionally smart. It was Jobs’s ingenuity and creativity, his ability to see a brilliant technical idea and turn it into something that consumers wanted, that characterized his genius, Isaacson argues. One of the major themes of the book is the intersection of the sciences and the humanities – Jobs saw himself as standing at that crossroads, using his acutely honed sense of taste to predict the technical future and inspire the technicians to invent it.
This unusual form of genius, if that’s what it was, makes Jobs a particularly accessible role model for the tech industry. Many people who work on technology for a living are not Wozniak-level programmers. We flatter ourselves that we can contribute to the industry by helping those more gifted at writing code understand the needs of users, the importance of usability, the applicability of technical breakthroughs to unexpected new markets. Perhaps, like Steve, we can “put a dent in the universe” by connecting someone’s technical innovations with new markets.
People who can bridge between engineers and end users are important, necessary and often hard to find. It’s harder than it might appear to build these bridges in ways that respect and appreciate all those involved in building and marketing new technologies. In finding ways to bridge constructively and respectfully, Jobs is a lousy role model much of the time. The answer to “What Would Steve Jobs Do?” is often “bully someone” or “throw a tantrum”. Unfortunately, it’s often easier to emulate Jobs’s less attractive personality traits than it is to replicate his design sensibilities.
Taking a break from Isaacson’s book, I read a thoughtful essay by Joshua Wolf Shenk, a preview of his new book, Powers of Two. Shenk argues that the myth of the solitary genius has dominated much of our thinking about creativity and obscures the fact that many people we know as geniuses worked in pairs or in larger teams. Shenk is particularly interested in creative pairings, pointing out that Einstein worked through the theory of relativity with Michele Besso, that Picasso invented Cubism with Georges Braque and that Dr. Martin Luther King co-led the civil rights movement with Ralph Abernathy and others.
There’s a way to read Isaacson’s biography in support of Shenk’s argument. Jobs was most productive as a serial collaborator, and was often disastrously unsuccessful when he wasn’t challenged by a strong partner or a team he respected. Jobs built Apple Computer on the brilliance of Steve Wozniak’s designs, led Pixar to dominance over Disney’s animation business by hitching his star to filmmaker John Lasseter, and reinvented Apple as a music and phone company by partnering closely with Jony Ive. (Search for any of these men and you’ll find a wealth of articles and books declaring them geniuses.) Jobs was good about crediting these collaborators and, occasionally, teams of collaborators – he saw the Macintosh as a team effort and honored team members at subsequent Apple product launches until his death.
When he didn’t have a strong collaborator or team, Jobs was often lost, as he was when Woz disengaged from Apple after the Apple II, when Jobs founded NeXT, or during the years Jobs dumped tens of millions into Pixar as a technology company, before Lasseter’s films demonstrating Pixar hardware took the company out of obscurity. In retrospect, this is obvious – Jobs didn’t write code or build prototypes. Instead, he shaped and guided the work that others did, making it better. Without a worthy collaborator, Jobs’s deeply impressive skill set was insufficient and often irrelevant.
It doesn’t lessen Jobs to recognize that creative genius comes from collaboration. Letting go of the idea that Shakespeare was a solitary genius writing masterworks in an attic without outside input, and accepting that he was a member of a popular theatre company who incorporated the influences and feedback of other writers and actors into his creations, makes him more fascinating to me, not less. Since we don’t have much access to the historical details of Shakespeare’s life, it’s easier to see these collaborative dynamics in modern biographies. Jobs may be one of the best examples of the collaborative genius idea, as the solitary genius narrative simply makes no sense in considering his history. We can imagine Shakespeare alone in a garret or Einstein puzzling out equations alone at a blackboard, but Jobs alone is just an angry vegan too picky about design to furnish his own mansion.
In writing a biography, it’s natural to lionize the protagonist, if only to explain why she or he merited the author’s attention. Isaacson is better than some in featuring Jobs’s collaborators and influences, but the form ultimately dictates that the book is about a single individual, not pairs and teams of collaborators. The narrative arc is that of Jobs’s life, not the life of the companies he built, the products they created or the industries they influenced.
How do we tell the stories of partnerships and collaborations? Shenk’s book promises to tell the stories of creative pairings, both visible ones like Lennon and McCartney and invisible ones like Vladimir and Vera Nabokov. But his essay hints at the intriguing problem of telling stories of more complex collaborations, like the one I experienced at Tripod. How do we tell a story about creativity and collaboration at Wikipedia that doesn’t become a biography of Jimmy Wales? Is there a story about Linux that’s not a portrait of Linus Torvalds, an examination of Free Software that isn’t a character sketch of Richard Stallman? Not only are humans creatures who think in terms of stories, we are social beings, which means there is nothing we are so attuned to as the life stories of successful people.
Nathan Matias, a brilliant poet, literary scholar and software developer (who happens to be my doctoral student), has been working on better systems to acknowledge and credit the dozens of collaborators he’s worked with on his various projects. His personal website features almost a hundred collaborators – clicking on the icon for any of us reveals the projects we’ve worked on with Nathan. It’s a first step towards a broader effort at designing acknowledgement on the web, and a key part of Nathan’s research on collaboration that leverages cultural and cognitive diversity. If we want to encourage diverse collaboration (and the end of Rewire makes a case for why we need to do so), we need to figure out how to recognize and celebrate people who work as creative teams, not just those who demand to be celebrated as geniuses.
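To make that idea slightly more concrete: a page like Nathan’s implies a simple many-to-many mapping between people and projects, inverted so that selecting any collaborator surfaces the work they share. Here is a minimal, purely hypothetical sketch of that mapping in Python – the names and project titles are placeholders I’ve invented for illustration, not Nathan’s actual data or code.

```python
# Hypothetical illustration only – not Nathan's actual site or data.
from collections import defaultdict

# Each (placeholder) project lists the people who worked on it.
projects = {
    "Project A": ["Nathan", "Alice", "Bob"],
    "Project B": ["Nathan", "Alice"],
    "Project C": ["Nathan", "Carol"],
}

# Invert the mapping so that picking a collaborator's icon can surface
# the projects that person shares with Nathan.
shared_work = defaultdict(list)
for project, people in projects.items():
    for person in people:
        if person != "Nathan":
            shared_work[person].append(project)

print(dict(shared_work))
# {'Alice': ['Project A', 'Project B'], 'Bob': ['Project A'], 'Carol': ['Project C']}
```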
Steve Jobs changed the world, or at least some highly visible corners of it. The story of his life, his successes and his failures is an important one for anyone who designs products and tools for large audiences. It would be a shame if the message we took from Isaacson’s book were that success comes from arrogance, self-certainty and cruelty. Until someone discovers a better way to write biographies of collaboration, that’s a message many readers will take away.