My heart’s in Accra – Ethan Zuckerman’s online home, since 2003

On me, and the Media Lab
August 21, 2019

(Please be sure to read the addendum at the end of this post.)

A week ago last Friday, I spoke to Joi Ito about the release of documents that accuse Media Lab co-founder Marvin Minsky of involvement in Jeffrey Epstein’s horrific crimes.* Joi told me that evening that the Media Lab’s ties to Epstein went much deeper, and included a business relationship between Joi and Epstein, investments in companies Joi’s VC fund was supporting, gifts and visits by Epstein to the Media Lab and by Joi to Epstein’s properties. As the scale of Joi’s involvement with Epstein became clear to me, I began to understand that I had to end my relationship with the MIT Media Lab. The following day, Saturday the 10th, I told Joi that I planned to move my work out of the MIT Media Lab by the end of this academic year, May 2020.

My logic was simple: the work my group does focuses on social justice and on the inclusion of marginalized individuals and points of view. It’s hard to do that work with a straight face in a place that violated its own values so clearly in working with Epstein and in disguising that relationship.

I waited until Joi’s apology on Thursday the 15th to share the information with my students, staff, and a few trusted friends. My hope was to work with my team, who now have great uncertainty about their academic and professional futures, before sharing that news widely. I also wrote notes of apology to the recipients of the Media Lab Disobedience Prize, three women who were recognized for their work on the #MeToo in STEM movement. It struck me as a terrible irony that their work on combatting sexual harassment and assault in science and tech might be damaged by their association with the Media Lab. The note I sent to those recipients made its way to the Boston Globe, which ran a story about it this evening. And so, my decision to leave the Media Lab has become public well before I had intended it to.

That’s okay. I feel good about my decision, and I’m hoping my decision can open a conversation about what it’s appropriate for people to do when they discover the institution they’ve been part of has made terrible errors. My guess is that the decision is different for everyone involved. I know that some friends are committed to staying within the lab and working to make it a better, fairer and more transparent place, and I will do my best to support them over the months I remain at the Lab. For me, the deep involvement of Epstein in the life of the Media Lab is something that makes my work impossible to carry forward there.**

To clarify a couple of things, since I haven’t actually been able to control the release of information here:

– I am not resigning because I had any involvement with Epstein. Joi asked me in 2014 if I wanted to meet Epstein, and I refused and urged him not to meet with him. We didn’t speak about Epstein again until last Friday.

– I don’t have another university that I’m moving to or another job offer. I just knew that I couldn’t continue the work under the Media Lab banner. I’ll be spending much of this year – and perhaps years to come – seeing if there’s another place to continue this work. Before I would commit to moving the work elsewhere at MIT, I would need to understand better whether the Institute knew about the relationship with Epstein and whether they approved of his gifts.

– I’m not leaving tomorrow. That wouldn’t be responsible – I have classes I am committed to teaching and students who are finishing their degrees. I plan to leave at the end of this academic year.

– My first priority is taking care of my students and staff, who shouldn’t have to suffer because Joi made a bad decision and I decided I couldn’t live with it. My second priority is to help anyone at the Media Lab who wants to turn this terrible situation into a chance to make the Lab a better place. That includes Joi, if he’s able to do the work necessary to transform the Media Lab into a place that’s more consistent with its stated values.

I’m aware of the privilege*** that it’s been to work at a place filled with as much creativity and brilliance as the Media Lab. But I’m also aware that privilege can be blinding, and can cause people to ignore situations that should be simple matters of right and wrong. Everyone at the Media Lab is going through a process of figuring out how they should react to the news of Epstein and his engagement with the Lab. I hope that everyone else gets to do it first with their students and teams before doing it in the press.

Addendum, August 21, 2019:

* A friend of Marvin Minsky’s objected to this sentence opening this post, noting that Marvin, who died in 2016, cannot respond to these accusations. While that is true, the accusations made by Virginia Giuffre are a matter of public record and have been widely reported. I mention these accusations both because they were what motivated me to speak with Joi about Epstein and, more importantly, because unanswered questions about Minsky are part of the horror of this situation for some of my colleagues at the Media Lab. To be clear, I have no knowledge of whether any of these charges are true – they happened long before my time at the Media Lab.

I changed the word “implicate” to “accuse” as a result and added “of involvement” before the phrase about Epstein’s crimes.

** My original version of this post had two additional sentences here, describing my dismay about the implications of the Epstein revelations for one of my students and her research. She is not ready to talk about that subject, and I’ve withdrawn those sentences at her request.

*** A friend pointed out that I was able to choose to step away from the Media Lab because of my privilege: I’ve got money in the bank, I’ve got a supportive partner, I am at a stage of my career where I can reasonably believe I’ll find another high prestige job, I’m a cis-gendered straight white dude. She wanted me to be clearer about the fact that not everyone is going to be able to make the same decision I did.

She’s right. There are people who are going to remain working at the Media Lab because they sincerely believe that we finally have the opportunity to fix some of the deep structural problems of the place – I respect them and I will work hard to support them. But there are also people who are going to continue at the lab because it’s the best opportunity they have to develop their own careers and reach a point where they’ve got more flexibility to make decisions like the one I made. I respect them too – they are the people doing the work that makes institutions work, but they rarely have the power to make decisions that steer an institution towards its values.

So thank you for all the kind words about bravery. Truth is I’m privileged enough to afford to be brave. For those of you who love the Media Lab and want to see it sail through these rough waters, please take time to reach out to people who may not be able to be as visible in their next steps. Make sure they’re doing okay. Support them whether their decision is to leave or to stay. So many of my colleagues at the Media Lab right now are hurting, and they need your support and love too. Hope we can redirect some of that love folks are sharing with me to them too.

Training the next generation of ethical techies
August 14, 2019

My friend Christian Sandvig, who directs the Center for Ethics, Society, and Computing at the University of Michigan, started an interesting thread on Twitter yesterday. It began:

“I’m super suspicious of the “rush to postdocs” in academic #AI ethics/fairness. Where the heck are all of these people with real technical chops who are also deeply knowledgeable about ethics/fairness going to come from… since we don’t train people that way in the first place.”

Christian goes on to point out that it’s exceedingly rare for someone with PhD-level experience in machine learning to have a strong background in critical theory, intersectionality, gender studies and ethics. We’re likely to see a string of CS PhDs lost in humanities departments and well-meaning humanities scholars writing about tech issues they don’t fully understand.

I’m lucky to have students doing cutting-edge work on machine learning and ethics in my lab. But I’m also aware of just how unique individuals like Joy Buolamwini and Chelsea Barabas are. And while I mostly agree with Christian, I think it’s worth asking how we start training people who can think rigorously and creatively about technology and ethics.

It’s certainly a good time to have this conversation. There are debates about whether AI could ever make fair decisions given the need to extrapolate from data in an unfair world, whether we can avoid encoding racial and gender biases into automated systems, and whether AI systems will damage the concept of meaningful work. In my area of focus, there are complex and worthwhile conversations taking place about whether social media is leading towards extremism and violence, whether online interaction increases polarization and damages democracy, or whether surveillance capitalism can ever be ethically acceptable. And I see my colleagues in the wet sciences dealing with questions that make my head hurt. Should you be able to engineer estrogen in your kitchen so you can transition from male to female? Should we engineer mice to kill off deer ticks in the hopes of ending Lyme disease?

That last question has been a major one for friend and colleague Kevin Esvelt, who has been wrestling with tough ethical questions like who gets to decide if your community (Nantucket Island, for instance) should be a testbed for this technology? What is informed consent when it comes to releasing mice engineered with CRISPR gene drive into a complex ecosystem? Admirably, Dr. Esvelt has been working hard to level up in ethics and community design practices, but his progress just points to the need for scholars who straddle these different topics.

I think we need to begin well before the postdoc to train people who are comfortable in the worlds of science, policy and ethics. Specifically, I think we should start at the undergraduate level. By the time we admit you into somewhere like the Media Lab, we need you to already be thinking critically and carefully about the technology we’re asking you to invent and build.

I was lucky enough to attend Williams College, which focused on the liberal arts and didn’t seem to care much what you studied so long as you got into some good arguments. I was in a dorm that had a residential seminar, which meant that everyone in my hall took the same class in ethics. Arguments about moral relativism continued over dinner and late into the night, in one case ending with a student threatening another with a machete in her desire to make her point. It wasn’t the most restful frosh year, but it cemented some critical ideas that have served me well over the years:

– Smart people may disagree with you about key issues, and you may both be making reasonable, logical arguments while starting from different sets of core values
– If you feel strongly about something, it behooves you to understand and strengthen your own arguments
– You probably don’t really understand something unless you can teach it to someone else

My guess is that courses that force us to have these sorts of arguments are critical to unpacking the intricacies of emerging technologies and their implications. To be clear, there’s the field of science and technology studies, which makes these questions central to its debates. But I think it’s possible to sharpen these cognitive skills in any field where the work of scholarship is in debating rival interpretations of the same facts. Was American independence from England the product of democratic aspirations, or economic ones? Is Lear mad, or is he the only truly sane one?

The fact that there are dozens of legitimate answers to these questions can make them frustrating in fields where the goal is to calculate a single (very difficult) answer… but the problems we’re starting to face around regulating tech are complex, squishy questions. Should governments regulate dangerous speech online? Or platforms? Should communities work to develop and enforce their own speech standards? My guess is that the answer looks more like an analysis of Lear’s madness than like the decomposition of a matrix.

But liberal arts isn’t all you’d want to teach if the goal is to prepare people who could work in the intersection of tech, ethics and policy. Much of my work is with policymakers who desperately want to solve problems, but often don’t know enough about the technology they’re trying to fix to actually make things better. I also work closely with social change leaders like Sherrilyn Ifill, the president of the NAACP Legal Defense Fund. She came to our lab to learn about algorithmic bias, noting that if the NAACP LDF had been able to fight redlining two generations ago, we might not face the massive wealth gap that divides Black and White Americans. Sherrilyn believes the next generation of redlining will be algorithmic, and that social justice organizations need to understand algorithmic bias to combat it. We need people who understand new technologies well enough to analyze them and explain their implications to those who would govern them.

My guess is that this sort of work doesn’t require a PhD. What it requires is understanding a field well enough that you can discern what’s likely, what’s possible and what’s impossible. One of my dearest friends is a physicist who now evaluates clean energy and carbon capture technologies, but has also written on topics from nuclear disarmament to autonomous vehicles. His PhD work is on Bose-Einstein condensate, a strange state of matter that involves superimposing atoms at very low temperatures by trapping them in place with lasers. His PhD and postdoc work have basically nothing to do with the topics he works on, but the basis he has in understanding complex systems and the implications of physical laws means he can quickly tell you that it’s possible to pull CO2 from the environment and turn it into diesel fuel, but that it’s probably going to be very expensive to do so.

I’m imagining a generation of students who have a solid technical background, the equivalent of a concentration if not a major in a field like computer science, as well as a sequence of courses that help people speak, write, argue and teach about technological issues. We’d offer classes – which might or might not be about tech topics – that help teach students to write for popular audiences as well as academic ones, that help students learn how to write an op-ed and make a convincing presentation. We’d coach students on teaching technical topics in their field to people outside of their fields, perhaps the core skillset necessary in being a scientific or technical advisor.

There are jobs for people with this hybrid skill set right now. The Ford Foundation has been hard at work creating the field of “Public Interest Technology”, a profession in which people use technical skills to change the world for the better. This might mean working in a nonprofit like NAACP LDF to help leaders like Sherrilyn understand what battles are most important to fight in algorithmic justice, or in a newsroom, helping journalists maintain secure channels with their sources. I predict that graduates with this hybrid background will be at a premium as companies like Facebook and YouTube look to figure out whether their products can be profitable without being corrosive to society… and the students who come out with critical faculties and the ability to communicate their concerns well will be positioned to advocate for real solutions to these problems. (And if they aren’t able to influence the directions the companies take, they’ll make great leaders of Tech Won’t Build It protests.)

(I was visiting Williams today and discovered a feature on their website about four alums who’ve taken on careers that are right at the center of Public Interest tech.)

Building a program in tech, ethics and policy helps address a real problem liberal arts colleges are experiencing right now. The number of computer science majors at American universities and colleges doubled between 2013 and 2017, while the number of tenure-track professors increased by only 17%, leading the New York Times to report that the hardest part of a computer science major may be getting a seat in a class. Really terrific schools like Williams can’t hire CS faculty fast enough, and graduates of programs like the one I teach in at MIT are often choosing between dozens of excellent job offers.

Not all those people signing up for CS courses are going to end up writing software for a living – my exposure to CS at Williams helped me discover that I cared deeply about tech and its implications, but that I was a shitty programmer. Building a strong program focused on technology, ethics and policy would offer another path for students like me who were fascinated with the implications of technology, but less interested in becoming a working programmer. It also would take some of the stress off CS professors as students took on a more balanced courseload, building skills in writing, communications, argument and presentation as well as technical skills.

Christian Sandvig is right to be worried that we’re forcing scholars who are already far into their intellectual journeys into postdocs intended to deal with contemporary problems. But the problem is not that we’re asking scholars to take on these new intellectual responsibilities – it’s that we should have started training them ten years before the postdoc to take on these challenges.

Philanthropy and the hand-off – what happens if government can’t scale social experiments?
August 5, 2019

My friend and (lucky for me) boss Joi Ito has an excellent essay in Wired which considers the challenges of measuring the impact of philanthropy. For Joi, one of the key problems is that social problems are complex, and the metrics we use to understand them too simple. Too often we’re measuring something that’s a proxy for something else – we can measure circulation levels at libraries as a proxy for their usage, but we’ll miss all the novel ways libraries are reaching communities through makerspaces, classrooms and public spaces. What we need are better ways of understanding and measuring the resilience and robustness of systems, not just simple proxies that measure growth or contraction.

Joi’s meditation on measurement is consistent with his current intellectual interests: irreducible complexity and resisting reduction. And, like Joi, I’m obsessed with how philanthropy could do a better job at making progress on social challenges. I’ve done my own work around measuring impact with the Media Cloud platform, as my friend Anya Schiffrin and I explored in this article on measuring the impact of foundation-funded journalism.

But I came away from Joi’s article wondering if there wasn’t a major factor he missed: the disappearance of governments from the equation of social change. Joi works with some of the biggest and wealthiest players in American philanthropy – the Knight and MacArthur Foundations. I work with some of the others – the Open Society Foundation, the Ford Foundation. We’ve both been involved with helping invest enormous sums of money… and we’ve both learned that those sums aren’t so enormous when you put them up against massive social challenges, like addressing poverty through improved school quality. There are models that could work at scale – the model pioneered by Geoffrey Canada as the Harlem Children’s Zone starts working with children pre-birth, through parenting classes and follows students through high school and into college. But it’s depended on massive infusions of private investment, and when the Obama administration sought to replicate its success as “promise zones”, the project received only a small percentage of the funds the President sought for it, and its impacts are likely to be quite diffuse.

It’s possible for philanthropists to fund experiments, even multi-decade experiments like Harlem Children’s Zone. But it’s unlikely that philanthropists can, or should, take responsibility for solving problems like intergenerational poverty in African American communities. At best, we ask philanthropists to enable and lift up promising experiments, in the hopes that governments could learn from those results and adopt the best policies. But since the Reagan/Thatcher moment of the 1980s, we’ve expected less and less from our governments, and they’ve seemed less able partners to transform societies for the better. I’m increasingly worried that working with philanthropies – something I spend a great deal of my time doing – is missing the larger point. We need revolutionary change, where government becomes part of the solution again, not better metrics within philanthropy.

In the spirit of the mid-2000s, Joi, I’m opening a blog conversation – do I have it right, or do you believe that philanthropy can make real progress without handing ideas off to governments to scale? And if those governments aren’t there to receive these experiments, what are we spending our time on in philanthropy?

Beyond the Vast Wasteland: briefing Congresspeople for the Aspen Institute
July 31, 2019

I was privileged to speak to a gathering of Senators and Representatives who came to MIT for an Aspen Institute event in May 2019 titled “Internet, Big Data and Algorithms: Threats to Privacy and Freedom, or Gateway to a New Future”. It was a pleasure to share the stage with old friends Jonathan Zittrain and Cathy O’Neil as well as my student Joy Buolamwini, and a wonderful opportunity to share some of my thinking about the future of social media with lawmakers who could help or hinder this vision becoming a reality. This piece draws on my earlier essay “Six or Seven Things Social Media Can Do for Democracy”, as well as a speech from late 2018, “We Make the Media”. More forthcoming on this topic later this summer/early fall.

In 1961, the newly appointed chairman of the FCC, Newt Minow, addressed the National Association of Broadcasters in Washington DC. Minow’s speech demanded that broadcasters take seriously the idea that they serve the public interest – and distinguished the public interest from simply what interests the public. And Minow coined an unforgettable phrase to explain what a poor job broadcasters were doing. Challenging executives to watch a day of their own programming without anything to distract or divert them, Minow declared, “I can assure you that what you will observe is a vast wasteland.”

There have been hundreds of articles written over the past two years about social media that might have been better titled “a vast wasteland”. This flood of articles argues that social media often doesn’t work the way we think it should, that partisan manipulation of Facebook may be swaying elections, and that extremism on YouTube may be contributing to a wave of ethnonationalist violence. It’s a thoroughly appropriate moment to evaluate whether social media is making our society and our democracy stronger, or pulling it apart. From Cambridge Analytica to Comet Ping Pong to the massacre in New Zealand, alarm bells are sounding that not all is well in our online public spaces.

But Minow’s speech didn’t end with a condemnation of the sorry state of broadcasting in 1961. Instead, Minow articulated a vision for television to inform, enlighten and entertain, a future he hoped to achieve without censorship, without replacing private companies with government entities, and mostly through voluntary compliance. And, with 1967’s Public Broadcasting Act, the founding of PBS in 1969 and NPR in 1970, a surprising amount of Minow’s vision came to pass.

It’s important that we consider the real and potential harms linked to the rise of social media, from increasing political polarization and the spread of mis-, dis- and malinformation to trolling, bullying and online abuse. But much as television was in its teenage years in the early 1960s, social media isn’t going away any time soon. It’s essential that we have a positive vision for what social media can be as well as a critical take on mitigating its harms.

I’m interested in what social media should do for us as citizens in a democracy. We talk about social media as a digital public sphere, invoking Habermas and coffeehouses frequented by the bourgeoisie. Before we ask whether the internet succeeds as a public sphere, we ought to ask whether that’s actually what we want it to be.

I take my lead here from journalism scholar Michael Schudson, who took issue with a hyperbolic statement made by media critic James Carey: “journalism as a practice is unthinkable except in the context of democracy; in fact, journalism is usefully understood as another name for democracy.” For Schudson, this was a step too far. Journalism may be necessary for democracy to function well, but journalism by itself is not democracy and cannot produce democracy. Instead, we should work to understand the “Six or Seven Things News Can Do for Democracy”, the title of an incisive essay Schudson wrote to anchor his book, Why Democracies Need an Unlovable Press.

The six things Schudson sees news currently doing for democracy are presented in order of their frequency – as a result, the first three functions Schudson sees are straightforward and unsurprising. The news informs us about events, locally and globally, that we need to know about as citizens. The news investigates issues that are not immediately obvious, doing the hard work of excavating truths that someone did not want told. News provides analysis, knitting reported facts into complex possible narratives of significance and direction.

Schudson wades into deeper waters with the next three functions. News can serve as a public forum, allowing citizens to raise their voices through letters to the editor, op-eds and (when they’re still permitted) through comments. The news can serve as a tool for social empathy, helping us feel the importance of social issues through careful storytelling, appealing to our hearts as well as our heads. Controversially, Schudson argues, news can be a force for mobilization, urging readers to take action, voting, marching, protesting, boycotting, or using any of the other tools we have access to as citizens.

His essay closes with a seventh role that Schudson believes the news should fill, even if it has yet to embrace it. The news can be a force for the promotion of representative democracy. For Schudson, this includes the idea of protecting minority rights against the excesses of populism, and he sees a possible role for journalists in ensuring that these key protections remain in force.

This is perhaps not an exhaustive list, nor is the news required to do all that Schudson believes it can do. Neither does the list include things that the news tries to do that aren’t necessarily connected to democracy, like providing an advertising platform for local businesses, providing revenue for publishers, or entertaining audiences. And Schudson acknowledges that these functions can come into conflict – the more a news organization engages in mobilization, the more likely it is to compromise its ability to inform impartially.

In this same spirit, I’d like to suggest six or seven things social media can do for democracy. As with Schudson’s list, these functions are not exhaustive – obviously, social media entertains us, connects us with family, friends and any advertiser willing to pay for the privilege, in addition to the civic functions I outline here. Furthermore, as with news media, these civic purposes are not always mutually reinforcing and can easily come into conflict. (And because I’m much less learned than Schudson, my list may be incomplete or just plain wrong.)

Social media can inform us.
Many of us have heard the statistic that a majority of young people see Facebook as a primary source for news, and virtually every newsroom now considers Facebook an important distributor of their content (sometimes to their peril). But that’s not what’s most important in considering social media as a tool for democracy. Because social media is participatory, it is a tool people use to create and share information with friends and family, and potentially the wider world. Usually this information is of interest only to a few people – it’s what you had for lunch, or the antics of the squirrel in your backyard. But sometimes the news you see is of intense importance to the rest of the world.

When protesters took to the streets of Sidi Bouzid, Tunisia, they were visible to the world through Facebook even though the Tunisian government had prevented journalists from coming to the town. Videos from Facebook made their way to Al Jazeera through Tunisian activists in the diaspora, and Al Jazeera rebroadcast footage, helping spread the protests to Tunis and beyond. The importance of social media in informing us is that it provides a channel for those excluded by the news – whether through censorship, as in Tunisia, or through disinterest or ignorance – to have their voices and issues heard.

Places don’t need to be as far away as Tunisia for social media to be a conduit for information – when Michael Brown was killed in Ferguson, Missouri, many people learned of his death, the protests that unfolded in the wake, and the militarized response to those protests, via Twitter. (And as news reporters were arrested for covering events in Ferguson, they turned to Twitter to share news of their own detention.) Social media is critically important in giving voice to communities who’ve been systemically excluded from media – people of color, women, LGBTQIA people, poor people. By giving people a chance to share their under-covered perspectives with broadcast media, social media has a possible role in making the media ecosystem more inclusive and fair.

Finally, social media may help replace or augment local information, as people connect directly with their children’s schools or with community organizations. This function is increasingly important as local newspapers shed staff or close altogether, as social media may become the primary conduit for local information.

Social media can amplify important voices and issues.
In traditional (broadcast or newspaper) media, editors decide what topics are worth the readers’ attention. This “agenda setting” function has enormous political importance – as Max McCombs and Donald Shaw observed in 1972, the news doesn’t tell us what to think, but it’s very good at telling us what to think about.

That agenda-setting power takes a different shape in the era of social media. Instead of a linear process from an editor’s desk through a reporter to the paper on your front porch, social media works with news media through a set of feedback loops. Readers make stories more visible by sharing them on social media (and help ensure invisibility by failing to share stories). Editors and writers respond to sharing as a signal of popularity and interest, and will often write more stories to capitalize on this interest. Readers may respond to stories by becoming authors, injecting their stories into the mix and competing with professional stories for attention and amplification.

Amplification has become a new form of exercising political power. In 2012, we watched Invisible Children use a carefully crafted campaign, built around a manipulative video and a strategy of sharing the video with online influencers. Within a few days, roughly half of American young people had seen the video, and US funding for the Ugandan military – the goal of the campaign – was being supported by powerful people in the US Congress and military. (That the organization’s director had a nervous breakdown, leading to the group’s implosion, was not a coincidence – Invisible Children managed to amplify an issue to a level of visibility where powerful backlash was inevitable.)

Amplification works within much smaller circles than those surrounding US foreign policy. By sharing content with small personal networks on social media, individuals signal the issues they see as most important and engage in a constant process of self-definition. In the process, they advocate for friends to pay attention to these issues as well. Essentially, social media provides an efficient mechanism for the two-step flow of communication, documented by Paul Lazarsfeld and Elihu Katz, to unfold online. We are less influenced by mass media than we are by opinion leaders, who share their opinions about mass media. Social media invites all of us to become opinion leaders, at least for our circles of friends, and makes the process entertaining, gamifying our role as influencers by rewarding us with up-to-the-second numbers on how our tweets and posts have been liked and shared by our friends.

Social media can be a tool for connection and solidarity.
The pre-web internet of the 1980s and 1990s was organized around topics of interest rather than around the offline friendships that social networks like Facebook are built on. Some of the most long-lasting communities that emerged from the Usenet era of the internet were communities of interest that connected people who had a hard time finding each other offline: young people questioning their sexuality, religious and ethnic minorities, people with esoteric or specialized interests. The spirit of the community of interest and identity continued through Scott Heiferman’s meetup.com, which helped poodle owners or Bernie Sanders supporters in Des Moines find each other, and now surfaces again in Facebook Groups, semi-private spaces designed to allow people to connect with likeminded individuals in safe, restricted spaces.

Social critics, notably Robert Putnam, have worried that the internet is undermining our sense of community and lessening people’s abilities to engage in civic behavior. Another possibility is that we’re forming new bonds of solidarity based on shared interests rather than on shared geographies. I think of Jen Brea, whose academic career at Harvard was cut short by myalgic encephalomyelitis, who used the internet to build an online community of fellow disease sufferers, a powerful documentary film that premiered at Sundance, and a powerful campaign calling attention to the ways diseases that disproportionately affect women are systemically misdiagnosed. Brea’s disease makes it difficult for her to connect with her local, physical community, but social media has made it possible to build a powerful community of interest that is working on helping people live with their disease.

One of the major worries voiced about social media is the ways in which it can increase political polarization. Communities of solidarity can both exacerbate and combat that problem. We may end up more firmly rooted in our existing opinions, or we may create a new set of weak ties to people who we may disagree with in terms of traditional political categories, but with whom we share powerful bonds around shared interests, identities and struggles.

Social media can be a space for mobilization.
The power of social media to raise money for candidates, to recruit people to participate in marches and rallies, and to organize boycotts of products or the overthrow of governments is one of the best-documented – and most debated – powers of social media. From Clay Shirky’s examination of group formation and mobilization in Here Comes Everybody to endless analyses of the power of Facebook and Twitter in mobilizing youth in Tahrir Square or Gezi Park, including Zeynep Tufekçi’s Twitter and Tear Gas, the power of social media to both recruit people to social movements and to organize actions offline has been well documented. It’s also been heartily critiqued, by Malcolm Gladwell, who believes that online connections can never be as powerful as real-world strong ties for leading people to protest, and by thinkers like Tufekçi, who readily admit that the ease of mobilizing people online is an Achilles heel, teaching leaders like Erdogan to discount the importance of citizens protesting in the streets.

It’s worth noting that mobilization online does not have to lead to offline action to be effective. A wave of campaigns – like Sleeping Giants, which has urged advertisers to pull support from Breitbart, and #metoo, where tens of thousands of women have demonstrated that sexual harassment is a pervasive condition, not just the product of a few Harvey Weinsteins – has connected primarily online action to real-world change. What’s increasingly clear is that online mobilization – like amplification – is simply a tool in the contemporary civic toolkit, alongside more traditional forms of organizing.

Social media can be a space for deliberation and debate.
Perhaps no promise of social media has been more disappointing than the hope that social media would provide us with an inclusive public forum. Newspapers began experimenting with participatory media through open comments fora, and quickly discovered that online discourse was often mean, petty, superficial and worth ignoring. Moving debate from often anonymous comment sections onto real-name social networks like Facebook had less of a mediating effect than many hoped. While conversations less often devolve into insults and shouting, everyone who’s shared political news online has had the experience of a friend or family member ending an online friendship over controversial content. It’s likely that the increasing popularity of closed online spaces, like Facebook groups, has to do with the unwillingness of people to engage in civil deliberation and debate, and the hope that people can find affirmation and support for their views rather than experiencing conflict and tension.

Yet it is possible to create spaces for deliberation and debate within social media. Wael Ghonim was the organizer of the We Are All Khaled Said Facebook page, one of the major groups that mobilized “Tahrir youth” to stand up to the Mubarak regime, leading to the most dramatic changes to come out of the Arab Spring. After the revolution, Ghonim was deeply involved with democratic organizing in Egypt. He became frustrated with Facebook, which was an excellent platform for rallying people and harnessing anger, but far less effective in enabling nuanced debate about political futures. Ghonim went on to build his own social network, Parlio, which focused on civility and respectful debate, featuring dialogs with intellectuals and political leaders rather than updates on what participants were eating for lunch or watching on TV. The network had difficulty scaling, but was acquired by Quora, the question-answering social network, which was attracted to Parlio’s work in building high-value conversations that went beyond questions and answers.

Parlio suggests that the dynamics of social networks as we understand them have to do with the choices made by their founders and governing team. Facebook and Twitter can be such unpleasant places because strong emotions lead to high engagement, and engagement sells ads. Engineer a different social network around different principles, and it’s possible that the deliberation and debate we might hope from a digital public sphere could happen within a platform.

Social media can be a tool for showing us a diversity of views and perspectives.
The hope that social media could serve as a tool for introducing us to people we don’t already know – and particularly to people we don’t agree with – may seem impossibly cyberutopian. Indeed, I wrote a book, Rewire, that argues that social media tends to reinforce homophily, the tendency of birds of a feather to flock together. Given the apparent track record of social media as a space where ethnonationalism and racism thrive, skepticism that social media can introduce us to new perspectives seems eminently reasonable.

Contemporary social networks have an enormous amount of potential diversity, but very little manifest diversity. In theory, you can connect with 2 billion people from virtually every country in the world on Facebook. In practice, you connect with a few hundred people you know offline, who tend to share your national origin, race, religion and politics. But a social network that focused explicitly on broadening your perspectives would have a tremendous foundation to build upon: networks like Facebook know a great deal about who you already pay attention to, and have a deep well of alternative content to draw from.

Projects like FlipFeed from MIT’s Laboratory for Social Machines and gobo.social from my group at the MIT Media Lab explicitly re-engineer your social media feeds to encourage encounters with a more diverse set of perspectives. If a network like Twitter or Facebook concluded that increased diversity was a worthy metric to manage to, there are dozens of ways to accomplish the goal, and rich questions to be solved in combining increased diversity with a user’s interests to accomplish serendipity, rather than increased randomness.

Social media can be a model for democratically governed spaces.
Users in social networks like Twitter and Facebook have little control over how those networks are governed, despite the great value they collectively create for platform owners. This disparity has led Rebecca MacKinnon to call for platform owners to seek Consent of the Networked, and Trebor Scholz to call us to recognize participation in social networks as Digital Labor. But some platforms have done more than others to engage their communities in governance.

Reddit is the fourth most popular site on the US internet and sixth most popular site worldwide, as measured by Alexa Internet, and is a daily destination for at least 250 million users. The site is organized into thousands of “subreddits”, each managed by a team of uncompensated, volunteer moderators, who determine what content is allowable in each community. The result is a wildly diverse set of conversations, ranging from insightful conversations about science and politics in some communities, to ugly, racist, misogynistic, hateful speech in others. The difference in outcomes in those communities comes down in large part to differences in governance and to the participants each community attracts.

Some Reddit communities have begun working with scholars to examine scientifically how they could govern their communities more effectively. /r/science, a community of 18 million subscribers and over a thousand volunteer moderators, has worked with communications scholar Nathan Matias to experiment with ways of enforcing their rules to maximize positive discussions and throw out fewer rulebreakers. The ability to experiment with different rules in different parts of a site and to study what rulesets best enable what kinds of conversations could have benefits for supporters of participatory democracy offline as well as online.

Beyond the vast wasteland

It’s fair to point out that the social media platforms we use today don’t fulfill all these functions. Few have taken steps to increase the diversity of opinions users are exposed to, and though many have tried to encourage civil discourse, very few have succeeded. It’s likely that some of these goals are incompatible with current ad-supported business models. Political polarization and name-calling may well generate more pageviews than diversity and civil deliberation.

Some of these proposed functions are likely incompatible. Communities that favor solidarity and subgroup identity, or turn that identity into mobilization, aren’t the best ones to support efforts for diversity or for dialog.

Finally, it’s also fair to note that there’s a dark side to every democratic function I’ve listed. The tools that allow marginalized people to report their news and influence media are the same ones that allow fake news to be injected into the media ecosystem. Amplification is a technique used by everyone from Black Lives Matter to neo-Nazis, as is mobilization, and the spaces for solidarity that allow Jen Brea to manage her disease allow “incels” to push each other towards violence. While I feel comfortable advocating for respectful dialog and diverse points of view, someone will see my advocacy as an attempt to push politically correct multiculturalism down their throat, or to silence the exclusive truth of their perspectives through dialog. The bad news is that making social media work better for democracy likely means making it work better for the Nazis as well. The good news is that there are a lot more participatory democrats than there are Nazis.

My aim in putting forward seven things social media could do for democracy is two-fold. As we demand that Facebook, Twitter and others do better – and we should – we need to know what we’re asking for. I want Facebook to be more respectful of my personal information, more dedicated to helping me connect with my friends than marketing me to advertisers, but I also want them to be thinking about which of these democratic goals they hope to achieve.

The most profound changes Newt Minow inspired in television happened outside of commercial broadcasting, in the new space of public broadcasting. I believe we face a similar public media moment for social media. Achieving the democratic aims for social media outlined here requires a vision of social media that is plural in purpose, public in spirit and participatory in governance. Rather than one social network that fills all our needs, we need thousands of different social networks that serve different communities, meeting their needs for conversation with different rules, norms and purposes.

We need tools that break the silos of contemporary social media, allowing a citizen to follow conversations in dozens of different spaces with a single tool. Some of these spaces will be ad or subscription supported, while some might be run by local governments with taxpayer funds, but some subset of social media needs to consciously serve the public interest as its primary goal.

Finally, farming out the management of online spaces to invisible workers half a world away from the conversations they’re moderating isn’t a viable model for maintaining public discussions. Many of these new spaces will be experiments in participatory governance, where participants will be responsible for determining and enforcing the local rules of the road.

We accept the importance of a free and vibrant press to the health of our democracy. It’s time to consider the importance of the spaces where we deliberate and debate that news, where we form coalitions and alliances, launch plans and provide support to each other. The free press had defenders like Thomas Jefferson, who declared that if he had to choose between “a government without newspapers or newspapers without a government, I should not hesitate a moment to prefer the latter”.

The health of our digital public spheres is arguably as important, and worth our creative engagement as we imagine and build spaces that help us become better citizens. Social media as a vast wasteland is not inevitable, and it should not be acceptable. Envisioning a better way in which we interact with each other online is one of the signature problems of modern democracy and one that demands the attention of anyone concerned with democracy’s health in the 21st century.

Thinking in Solid
July 29, 2019

“Why does Amazon ask me to review something the day it arrives?” Amy asks. “I usually don’t know if it’s any good for a couple of weeks. They should email you again a hundred days later.”

We’re walking the dog on the Ashuwillticook rail trail, which runs alongside Cheshire Lake, a few miles from our house. When we manage to get our schedules in sync, this is one of my favorite rituals. We walk four miles in a little more than an hour. The doggo gets properly exercised and we get the chance to talk about whatever’s on our minds.

Amy has been sewing new cushions for our patio furniture since the previous ones decayed. Her mind is on reviews of patio furniture. You have no idea if your patio furniture is any good until you’ve had it for at least one season, and it should be possible to sort reviews on Amazon and find only the ones by folks posting after they’ve owned things and lived with them for a while.

What’s on my mind is a talk Tim Berners-Lee gave about Solid, and decentralized models for rebuilding the web. And because we’re walking the dog, these trains of thought merge on the rail trail, and we start designing a new product review site based on Solid.

Amazon reviews work by keeping track of what products you’ve ordered and when they’ve been delivered. A day or two after Amazon believes an order has been successfully delivered, they ask you to review the product, rating it from one to five stars and writing a short review.

There are all sorts of things wrong with this system. Only 3-10% of consumers rate any given purchase, and only about 40% of consumers rate at all. We’re more likely to review a product that we loved, or one we really, really hated, so reviews tend towards binary extremes – ones and fives, with very few twos, threes and fours. And while Amazon requires you to purchase an item before it will let you review it, there’s still a vast ecosystem of review fraud, in which sellers refund the cost of an item and send a bonus in exchange for five-star reviews. This practice is so common that as many as one in three online reviews may be fake in some product categories (inexpensive electronics, in particular), and a group of watchdogs, including ReviewMeta and FakeSpot, have sprung up to combat fake reviews. Amazon reports that it’s putting significant resources into combatting review fraud.

These are real problems, and none of these are the problem Amy wants to fix. Amazon could implement her suggestion – it knows when you purchased outdoor cushions and could email you in 100 days and then again at 400 days for “lifetime” reviews of a product. It’s not clear whether they would. Imagine that reviews submitted months later were more negative than those made at time of purchase. Amazon needs some negative reviews – most consumers are smart enough to grow suspicious when they encounter only positive reviews – but a consistent pattern in which purchases become more disappointing over time might retard sales. Independent review sites like TrustPilot – which has its own serious review fraud problems – could build this service, but they lack key pieces of information: the date that you purchased something, and the ability to verify that you actually paid for it.

Turns out Amy’s service is very easy to build in a Solid world. In Solid, you store data in a “pod”, a data store you control either on your own server at home, or cloud space you control. When you buy something from Amazon, you make a record in your pod of the transaction; Amazon does the same, so they can update their inventory, send you your shipment, etc. Because you have access to your transaction records, you can write a simple tool to ping you 100 days after you’ve bought something to review it. You could write the review on Amazon, TrustPilot, or a new Solid-compliant LifetimeReviews, which would allow you to keep the contents of your review in your Pod, but would include it in a search on the LifetimeReviews.solid site for reviews of patio furniture (with your permission, of course.) In fact, LifetimeReviews.solid would invite you to share a subset of the data stored in your pod so it could prompt you 100 days later about every purchase you’ve made on any different Solid-compliant platform and collect reviews on any product you were willing to evaluate. You’d own those reviews – they’d be stored on your pod – but it would provide a useful service in indexing those reviews and making them available to the rest of the web.
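To make that flow concrete, here is a minimal sketch of the reminder piece, written in TypeScript under some loud assumptions: the purchase-record shape, the pod URL, and the readPurchases() helper are hypothetical stand-ins for whatever a real Solid client library would provide, and the pod owner has already granted the tool read access.

```typescript
// Minimal sketch of the "review me in 100 days" idea: scan purchase records
// stored in your own pod and surface items bought ~100 days ago that you
// haven't reviewed yet. The record shape, pod URL, and readPurchases() are
// hypothetical stand-ins, not a real Solid client API; assumes a runtime
// with a global fetch (browsers, Node 18+).

interface PurchaseRecord {
  item: string;        // product name
  merchant: string;    // e.g. "Amazon"
  purchasedAt: string; // ISO 8601 date of purchase
  reviewedAt?: string; // set once you've written a review
}

const DAY_MS = 24 * 60 * 60 * 1000;

// Stand-in for an authenticated read from the user's own pod.
async function readPurchases(podUrl: string): Promise<PurchaseRecord[]> {
  const res = await fetch(podUrl, { headers: { Accept: "application/json" } });
  if (!res.ok) throw new Error(`Pod request failed: ${res.status}`);
  return (await res.json()) as PurchaseRecord[];
}

// Return purchases at least `ageDays` old that still lack a review.
async function dueForReview(podUrl: string, ageDays = 100): Promise<PurchaseRecord[]> {
  const purchases = await readPurchases(podUrl);
  const cutoff = Date.now() - ageDays * DAY_MS;
  return purchases.filter(
    (p) => !p.reviewedAt && new Date(p.purchasedAt).getTime() <= cutoff
  );
}

// Example: nudge the owner of a (hypothetical) pod for 100-day reviews.
dueForReview("https://example-pod.example/purchases.json").then((due) => {
  for (const p of due) {
    console.log(`Time for a lived-with-it review of your ${p.item}, bought ${p.purchasedAt}`);
  }
});
```

The design point is that the tool only ever reads data the user already holds, so it needs no one’s permission but the user’s.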

Building a novel product review service in the contemporary Web can feel both impossible and futile. If it were worth building, Amazon would have a massive advantage in building it, given the amount of transactional data they already control… and they’d probably block you from using “their” (your) data to build such a service. And if you succeeded, they’d just implement their own version of your feature, putting you out of business. And if it were widely used, it would almost certainly be filled with fraud, much as Amazon’s system already is. Why bother?

I’m trying to remember what the web felt like in the early 1990s, when there was so much left to build and such low barriers to building it. We built silly and frivolous shit all the time, and occasionally, it turned out to be useful and important. The homepage builder, the product that ultimately attracted users to Tripod, was built essentially on a lark. It took months for us to realize that it was going to be popular and years to realize it would become the heart of our business.

I think Solid has me thinking about those early days because it promises a world of permissionless innovation. Obtaining Amazon’s permission to build a new type of review site feels essentially impossible; the idea that I might build something new – possibly cool, possibly frivolous – and only need the permission of the people who want to use it feels liberating.

Here’s what I really want to build: a news-factchecking tool that lets me control what’s considered a reliable source, rather than giving Facebook that control. And I know how to build it. And I can’t.

Gobo.social lets you integrate posts from different social media – Twitter, Mastodon and parts of Facebook – into a single feed, which you can sort and filter as you’d like. A team in my lab built it so we could experiment with two ideas:
– People should have the ability to filter their newsfeeds as they choose, not as Facebook chooses.
– We need social media browsers that let us manage our different identities, communities and preferences with a single tool, instead of through dozens of incompatible silos.

In one sense, Gobo has been a success – it’s generated some robust discussion about how social media could work better for its users. But in another sense, it’s been an uncomfortable reminder that innovation these days is anything BUT permissionless. Thus far, Gobo has played by the rules – we’ve used the documented APIs offered by social media platforms, which has meant we have full access to Twitter and Mastodon content, but only very limited access to Facebook. The Facebook API gives us access to the Pages you follow, but not to the posts from any of your friends. (I don’t know about you, but I don’t follow a lot of pages, which tend to be run by marketing departments, not by real people.) It could be worse – we just spent six months trying to get permission from LinkedIn to access their API and were flat-out denied.

We could – and may – integrate social media another way. We could ask you to give us your Facebook or LinkedIn username and password. Using those credentials, we could then access your unfiltered timeline, scrape it and present it to you to filter as you’d like. But that’s a terrible idea – it makes us responsible for managing your credentials, which has all sorts of dangers. (We could create a Tinder account for you, for instance…) And Facebook would demand we shut the service down immediately, citing Facebook vs. Power Ventures as precedent.

I’d love to hook Gobo up to Factmata, a very cool new company that evaluates online content and provides scores for believability based on nine different signals. Rather than giving a compound score, or a binary “fake/true” distinction, Factmata offers scores on the different signals, so we could give you – through Gobo and Factmata – the ability to filter out news it thinks is clickbait, or thinks is politically biased, insulting or sexist. Would Factmata do a perfect job of filtering out bogus news? Almost certainly not, but Facebook is extremely unlikely to do the job perfectly either, and while you’d know the ways in which Gobo and Factmata failed, the inner workings of Facebook are entirely opaque.
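To sketch what that might look like inside a tool like Gobo, here is a short TypeScript illustration. The signal names, the score format, and the scorePost() stub are my own illustrative stand-ins – not Factmata’s actual API or Gobo’s actual code – but they show the key design choice: the thresholds belong to the user, not the platform.

```typescript
// Illustrative sketch: hide posts in a merged feed whose per-signal
// credibility scores exceed thresholds the *user* has chosen.
// Signal names, score format, and scorePost() are hypothetical stand-ins.

interface Post {
  id: string;
  source: "twitter" | "mastodon" | "facebook";
  text: string;
}

// Scores in [0, 1]; higher means "more likely to be" that thing.
interface ContentSignals {
  clickbait: number;
  politicalBias: number;
  insulting: number;
  sexist: number;
}

// User-chosen limits: a post scoring above any limit is hidden.
type FilterRules = Partial<ContentSignals>;

// Stand-in for a call to an external content-scoring service.
async function scorePost(post: Post): Promise<ContentSignals> {
  // A real implementation would send post.text to the scoring service.
  return { clickbait: 0.1, politicalBias: 0.2, insulting: 0.0, sexist: 0.0 };
}

async function filterFeed(posts: Post[], rules: FilterRules): Promise<Post[]> {
  const kept: Post[] = [];
  for (const post of posts) {
    const signals = await scorePost(post);
    const blocked = (Object.keys(rules) as (keyof ContentSignals)[]).some(
      (signal) => signals[signal] > (rules[signal] ?? 1)
    );
    if (!blocked) kept.push(post); // the user, not the platform, sets the rules
  }
  return kept;
}

// Example: aggressively hide likely clickbait, tolerate mild political slant.
// filterFeed(mergedPosts, { clickbait: 0.3, politicalBias: 0.8 });
```

Because the thresholds live with the user, swapping in a different scoring service – or running several side by side – would not require anyone’s permission.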

Would Solid solve this problem? Not immediately, of course. In a world where Facebook, LinkedIn and everyone else chose to make their services Solid-compatible, it would be trivial to pipe these services together. But Sir Tim has made it clear that his goal is not to challenge Facebook, but to invite innovators to experiment with a new way of building websites.

My fear is this: we need to experiment with tools like Solid and, at the same time, start working to pry open Facebook. There are immense amounts of human effort going into closed, siloed, non-interoperable platforms like Facebook, LinkedIn and YouTube. I’m not comfortable ceding that accumulation of creativity to those who’ve moved fast and fenced off their corner of the web. We need to create new social media platforms, but we need to understand that 99% of what people want to do at present is communicate with friends on existing platforms, and we need tools that bridge that gap. We need the ability to innovate around huge, existing services like Amazon, if only so Amy can stop sewing couch cushions and start her new review business.

Sir Tim versus Black Mirror (http://www.ethanzuckerman.com/blog/?p=5581, 2019-06-06)

On a sunny summer morning in June, professor Jonathan Zittrain is hosting Sir Tim Berners-Lee in a Harvard Law School classroom. The audience is a smattering of visiting scholars at the Berkman Klein Center for Internet and Society and a few local techies involved with open source software development. I’d come to the room half an hour early to snag a seat, but I needn’t have bothered, as the crowd to see the man who invented the World Wide Web is attentive, but thin.

Jonathan Zittrain, one of the world’s leading scholars of creativity in an internet-connected universe, points out that Sir Tim’s current work is attempting to make a second correction in the arc of the internet. His first innovation, thirty years ago, was “the conceptualization and the runaway success of the World Wide Web.” Sir Tim’s current idea is a protocol – Solid – and a company – Inrupt – that aim to make the Web as it is now significantly better. Just what are Solid and Inrupt? That’s what a smattering of us are here to find out.

Sir Tim draws an arc on the chalkboard behind him. “People talk about the meteoric rise of the web – of course, meteors go down.” Referencing internet disinformation expert Joan Donovan, sitting in the audience, he notes “If you study the bad things on the web, there’s hundreds and thousands to study.” Almost apologetically, he explains that “there was a time when you could see things that were new [online], but not the ways they were bad.” For Sir Tim, the days of blogs were pretty good ones. “When you made a blog, you tried to make it high quality, and you tried to make your links to high quality blogs. You as a blogger were motivated by your reading counter, which led to a virtuous system based on custodianship as well as authorship.” Wistfully, he noted, “You could be forgiven for being fairly utopian in those days.”

What came out of this moment in the web’s evolution was a “true scale-free network, based on HTTP and HTML.” (Scale-free networks follow a Pareto distribution, with a small number of highly connected nodes and a “long tail” of less-connected nodes.) “It was extraordinary to discover that when you connect humanity, they form scale-free networks at all different levels. We put out HTTP and HTML and ended up with humanity forming scale-free networks on a planetary – okay, a tenth of a planet – scale.”
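In case the shorthand is unfamiliar: a scale-free network is one whose degree distribution follows a power law,

P(k) \propto k^{-\gamma},

with \gamma typically between 2 and 3, where P(k) is the fraction of nodes that have k links. A handful of hubs hold most of the connections, while the long tail of barely connected nodes holds most of the variety.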

Sir Tim noted that much of what was most interesting about the web was in the long tail, the less connected and less popular nodes. Zittrain invokes philosopher David Weinberger’s maxim, “In the future, everyone will be famous for 15 people” to acknowledge this idea, and Sir Tim pushes back: “That’s not scale free. What’s possible is that for n people on the planet, we might have root-n groups. We’re not trying to make one network for everyone, not trying to design something for Justin Bieber tweeting.”

So why doesn’t the blogosphere still work? Sir Tim blames the Facebook algorithms which determine what you read, breaking network effects and leading to a huge amount of consolidation. Zittrain wonders whether Facebook’s power is really all that new – didn’t Google’s search algorithm have similar effects? Sir Tim demurs – “Google just looks at all links and takes an eigenvector – it’s still using the web to search.” There’s a fascinating parenthetical where Sir Tim explains that he never thought search engines were possible. “Originally, we thought no one would be able to crawl the entire web – you would need so much storage, it wouldn’t be possible. We hadn’t realized that disk space would become ridiculously cheap.” Jonathan Zittrain likens the moment Google came into being to a science fiction moment, in which our ability to comprehend the universe, once limited by the speed of light, suddenly transcends that barrier – prior to search, we might only know our local quadrant of the web, while search suddenly made it possible to encounter any content, anywhere.
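The “eigenvector” aside is a reference to PageRank: a page’s score is, roughly, its entry in the dominant eigenvector of the web’s link matrix, which you can approximate by repeatedly pushing scores along links. A toy sketch on a made-up four-page web (illustrative only, not Google’s production system):

```python
# Toy PageRank by power iteration: a page's score is the probability that a
# "random surfer" lands on it, i.e. its entry in the dominant eigenvector of
# the (damped) link matrix.
links = {              # page -> pages it links to, for a tiny invented web
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))   # "c", the most linked-to page, scores highest
```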

Sir Tim brings us back to earth by discussing clickbait. “Blogging was driven by excitement around readership. But eventually ads come into play – if I am writing, I should have recompense.” What follows is content written specifically to generate money, like the fake news content written by Macedonian bloggers that might have influenced US elections. Zittrain generously references my “The Internet’s Original Sin” article, and Sir Tim notes that “some people argue that if you start off with advertising, you’re never going to have a successful web.”

The consequence of a monetized web, Sir Tim believes, is consolidation, designed to give advertisers larger audiences to reach. That consolidation leads to silos: “My photos are on Flickr, but my colleagues are all on LinkedIn? How do I share them? Do I have to persuade all my friends to move over to the platform I’m on?”

Zittrain offers two possible solutions to the problem: interoperability, where everything shares some common data models and can exchange data, or dramatic consolidation, where LinkedIn, for instance, just runs everything. Sir Tim isn’t overly optimistic about either, noting that totalitarian societies might be able to demand deep interop, but that it seems unlikely in our market democracy. And while consolidation is easier to work within, “consolidation is also incredibly frustrating. If you want to make a Facebook app, you need to work within not only the Facebook API, but the Facebook paradigm, with users, groups, and likes. Silos are very bad for innovation.”

Returning to the arc he’s drawn on the blackboard, Sir Tim notes that the meteor is crashing into earth. “We don’t need to imagine future web dystopias. We’ve got a television show where every single episode illustrates a different form of dysfunction.” The arc of the Web is long and it leads towards Black Mirror.

In March of this year, Sir Tim launched the #ForTheWeb campaign to celebrate the thirtieth anniversary of the Web. For Tim, the campaign was meant to feature the web worth saving, not to demand that either governments or Facebook fix it for us. “We need to fix networks and communities all at once, because it’s a sociotechnical system,” he explains. “We need to work inside the companies and inside the government. Some things are simple to fix – net neutrality, cheaper broadband, those were relatively simple. This isn’t simple. Free speech and hate speech are complicated and need complex social processes around them.” And while #ForTheWeb is a space for articulating the key values we want to support for a future direction of the web, that new direction needs a technical component as well. We need a course correction – what’s the White Mirror scenario?

Sir Tim pushes up the blackboard featuring the web as a meteor crashing back to earth. On the board below it, he starts drawing a set of cylinders. Solid is based around the idea of pods, personal data stores that could live in the cloud or which you could control directly. “Solid is web technology reapplied,” Sir Tim explains. “You use apps and web apps, but they don’t store your data at all.”

Returning to his photo sharing scenario, Sir Tim imagines uploading photos taken from a digital camera. The camera asks where you want to store the data. “You have a Solid pod at home, and one at work – you decide where to put them based on what context you want to use them in. Solid is a protocol, like the web. Pods are Solid-compatible personal clouds. Apps can talk to your pod.” So sharing photos is no longer about making LinkedIn and Flickr talk to each other – it’s simply about both of them talking to your pod, which you control.
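Because pods speak ordinary web protocols, “apps can talk to your pod” mostly means plain HTTP requests against URLs you control. A minimal sketch of that shape follows – the pod URL is invented, and a real Solid app would authenticate with your WebID (via Solid OIDC) and typically use a client library rather than raw requests:

```python
import requests

POD = "https://alice.solidcommunity.example/"   # hypothetical pod root, not a live server

# The app writes a small resource into the pod; the pod, not the app, stores your data.
note = "Photos from the hike, June 2019"
stored = requests.put(
    POD + "notes/hike.txt",
    data=note.encode("utf-8"),
    headers={"Content-Type": "text/plain"},
    # In a real app, an access token tied to your WebID would be attached here.
)
print("stored:", stored.status_code)

# Any app you later authorize reads the same resource back from the same URL.
fetched = requests.get(POD + "notes/hike.txt")
print("read back:", fetched.status_code, fetched.text[:40])
```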

“The web was all about interoperability – this is a solution for interoperability,” explains Sir Tim. “You choose where to store your information and the pods do access control. There’s a single sign-on that leads to a WebID. Those WebIDs plus access controls are a common language across the Solid world.” These WebIDs support groups as well as individuals… and groups have pages where you can see who belongs to them. Apps look up the group and deliver information accordingly. The content delivery mechanism underneath Solid is WebDAV, a versioning and authoring protocol that Sir Tim has supported from very early on as a way of returning the Web to its read/write roots, though he notes that Solid plans on running on protocols that will be much faster.

Zittrain picks up the legal implications of this new paradigm: “Right now, each web app or service has custody of the data it uses – LinkedIn has a proprietary data store behind it. But there might also be some regulations that govern what LinkedIn can do with that data – how does that work in a Solid world?”

Ducking the legal question, Sir Tim looks into ways we might bootstrap personal data pods. “Because of GDPR, the major platforms have been forced to create a way for people to export their content. You’d expect that Google, Facebook and others would fight this tooth and nail – instead they’re cooperating.” Specifically, they’re developing the Data Transfer Project, a common standard for data export that allows you not only to export your data, but to import it into a different platform. “They’ve gone to the trouble of designing common data models, which is brilliant from the Solid point of view.”

Zittrain suggests that we can think of Solid’s development in stages. In Stage 0, you might be able to retrieve your data from a platform, possibly from the API, possibly by scraping it, and you might get sued in the process. In Stage 1, you can get your data through a Data Transfer dump. In Stage 2, companies might begin making the data available regularly through Solid-compatible APIs. In Stage 3, the Solid apps start working off the data that’s been migrated into personal pods.

Sir Tim notes that exciting things start to happen in Stage 3. “My relationship with a bank is just a set of transactions and files. I can get a static copy of how the bank thinks of my current relationships. What I would like is for all those changes to be streamed to my Solid pod.” He concedes, “I probably don’t want to have the only copy.” Much of what’s interesting about Solid comes from the idea that pods can mirror each other in different ways – we might want to have a public debate in which all conversations are on the record and recorded, or an entirely ephemeral interaction, where all we say to one another disappears. This is one of many reasons why, Sir Tim explains, “Solid does not use Blockchain. At all.”

Zittrain persists in identifying some of the challenges of this new model, referencing the Cambridge Analytica scandal that affected Facebook. “If the problem is privacy, specifically an API that made it easy to get not only my data, but my friends’ data, how does Solid help with this? Doesn’t there need to be someone minding controls of the access lists?”

Solid, Sir Tim explains, is not primarily about privacy. Initially, people worried about their personal data leaking, a compromising photo that was supposed to be private becoming public. Now we worry about how our data is aggregated and used. The response shouldn’t be to compensate people for that data usage. Instead, we need to help combat the manipulation. “Data is not oil. It doesn’t work that way, it’s not about owning it.” One of Sir Tim’s core concerns is that companies offer valuable services, like free internet access, in exchange for access to people’s data streams.

Zittrain points out that the idea that you own your own data – which is meant to be empowering – includes a deeply disempowering possibility. You now have the alienable right of giving away your own data.

Sir Tim is more excited about the upsides: “In a Solid world, my doctor has a Solid ID and I can choose the family photo that has a picture of my ankle and send it to the doctor for diagnosis. And I can access my medical data and share it with my cousin, if I choose.” Financial software interoperates smoothly, giving you access to your full financial picture. “All your fitness stuff is in your Solid Pod, and data from your friends if they want to share it so you can compete.” He imagines a record of purchases you’ve made on different sites, not just Amazon, and the possibility of running your own AI on top of it to make recommendations on what to buy next.
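That doctor scenario comes down to a single access-control entry on a single resource, keyed to the doctor’s WebID. A rough sketch of what granting that access could look like – the pod, the WebID and the “.acl” location below are invented for illustration, and real servers advertise where a resource’s ACL actually lives via a Link header:

```python
import requests

POD = "https://alice.solidcommunity.example/"               # hypothetical pod
RESOURCE = POD + "photos/ankle.jpg"
DOCTOR_WEBID = "https://drbones.example/profile/card#me"    # hypothetical WebID

# A Web Access Control document in Turtle: one authorization granting the
# doctor's WebID read access to this one resource. (Vocabulary from
# http://www.w3.org/ns/auth/acl#; exact server conventions vary.)
acl_document = f"""
@prefix acl: <http://www.w3.org/ns/auth/acl#> .

<#doctor-can-read>
    a acl:Authorization ;
    acl:agent <{DOCTOR_WEBID}> ;
    acl:accessTo <{RESOURCE}> ;
    acl:mode acl:Read .
"""

# The pod's owner (who holds Control access) writes the ACL next to the resource.
resp = requests.put(
    RESOURCE + ".acl",
    data=acl_document.encode("utf-8"),
    headers={"Content-Type": "text/turtle"},
)
print("acl stored:", resp.status_code)
```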

A member of the audience asks whether it’s really realistic for individuals to make decisions about how to share their data – we may not know what data it is unsafe to share, once it gets collected and aggregated. Can Solid really prevent data misuse?

“The Solid protocol doesn’t tell you whether these services spy on you, but the spirit of Solid is that they don’t,” offers Sir Tim. Apps are agents acting on your behalf. Not all Solid apps will be beneficent, he notes, but we can train certified developers to make beneficent apps, and offer a store of such apps. Zittrain, who wrote a terrific book about the ways in which app stores can strangle innovation, is visibly uncomfortable and suggests that people may need help knowing whom to trust in a Solid world. “Imagine a party able to be designated as a helper with respect to privacy. Maybe a grandchild is a helper for a grandmother. Maybe we need a new role in society – a fiduciary whose responsibility is to help you make trust decisions.” Zittrain’s question links Sir Tim’s ideas about Solid to an idea he’s been developing with Jack Balkin about information fiduciaries: the notion that platforms like Facebook might be required to treat our personal data with the same legal care that doctors, lawyers and accountants must apply to their clients’ information.

Another audience member asks who will provide the hardware for Solid pods. Zittrain points out that Solid could run on Eben Moglen’s “Freedom Box”, a long-promised personal web server designed to put control of data back into users’ hands. Sir Tim suggests that your cable or ISP router might run a Pod in the future.

My question for Sir Tim focuses on adoption. Accepting for the moment the desirability of a Solid future – and, for the most part, I like Sir Tim’s vision a great deal – how do we get from here to there? For the foreseeable future, billions of people will be using proprietary social networks that surveil their users and cling to their data, and when Sir Tim last disrupted the Internet, it was an academic curiosity, not an industry worth hundreds of billions of dollars.

Sir Tim remembers the advent of the web as a struggle. “Remember when Gopher was taking off exponentially, and the web was growing really slowly? Remember that things that take off fast can drop off fast.” Gopher wasn’t free, and its proprietary nature led it to die quickly. “People seem locked into Facebook – one of the rules of Solid is not to disturb them.” People who adopt Solid will work around the incumbents, and once people begin using Solid, that group could explode exponentially. “The billion people on Facebook don’t affect the people using a Solid community.”

Returning to the 80s, Sir Tim notes that it was difficult for the Web to take off – there were lots of non-internet documentation systems that seemed like they might win. What happened was that CERN’s telephone directory was put on the web, and everyone got a web browser to access that directory. It took a while before people realized that they might want to put other information on the web beyond the directory.

“We don’t want everyone using Facebook to switch to Solid tomorrow – we couldn’t handle the scale.” Instead, Sir Tim offers, “We want people who are passionate about it to work within it. The reward is being part of another revolution.”


There’s something very surreal about a moment in which thousands of researchers and pundits are studying what’s wrong with social media and the Web, while surprisingly few are working on new models we can use to move forward. The man who built the web in the first place is now working on alternative models to save us from the Black Mirror universe, and the broader academic and professional world seems… oddly uninterested.

I can certainly see problems with Solid apps – your Pod will become a honeypot of private information that’s a great target for hackers. Apps will develop to collect as much of your Pod data as possible, unless they’re both regulated and technically prevented from doing so. Unless Pods are mostly on very fast cloud services, apps that draw from multiple pods will be significantly slower than the web as it operates today.

But there’s so much to like in Sir Tim’s vision. My lab and I are working now on the idea that what the world needs is not a better Facebook, but thousands of social networks, with different rules, purposes and community standards. Like Sir Tim, we’re not looking to replace Facebook but to create new communities for groups of 5 to 50,000 – self-governing, and capable of behaviors that communities with hundreds of millions of users and central corporate governance can’t support. There’s no reason why the networks we’re imagining couldn’t live atop Solid.

It’s hard to remember how small and strange an experiment the web was in 1989, or even in 1994. I remember dropping out of graduate school to work on a web startup. My motivation wasn’t that I might make a lot of money – that seemed extraordinarily unlikely. It was that someone was willing to pay me to work on something that seemed… right. Like a plausible and desirable future. And for me, at least, Solid seems plausible and desirable in much the same way. It also seems roughly as hard to love as the Web was in 1994, with its grey backgrounds and BLINK tag – Solid.Community allows you to register an ID, which at present doesn’t seem to let you do anything, though you can read the Github repository and see how you might create a chat app atop Solid.

Can Sir Tim revolutionize the Internet again? I have no idea. But someone needs to, because a web that crashes to earth is a Black Mirror episode I don’t want to see.

Rest in peace, Binyavanga Wainaina (http://www.ethanzuckerman.com/blog/?p=5577, 2019-05-22)

Binyavanga Wainaina died last night in a hospital in Nairobi at the age of 48. We lost him far, far too soon, but Bin spent his brief time on earth remarkably well, and packed more insight and discovery into his time than many people who survive twice as long.


Binyavanga Wainaina, photographed by Victor Dlamini for The JRB.

Like many people, I learned of Binyavanga’s work first from his remarkable and cutting essay, “How to Write About Africa”, a compendium of clichés that infect a great deal of writing about Africa, especially writing by well-meaning, liberal white westerners like myself. We met in person at TED Africa in Arusha in June, 2007, where he gave a funny and rollicking speech that touched on the rapid changes Kenya was going through, and the need for an African literary scene not centered around London or New York. (TED recently released his talk from the archives – it’s a wonderful picture of his thinking and his passions at the time.)

He and I found ourselves on the conference circuit together – searching around today, I found a video of us on a panel at PICNIC in the Netherlands in 2008. We got to know each other better that fall, when he came to Williams College – about ten miles from where I live – and was a scholar in residence for a year, and we met a few times for coffee and chats about politics. Looking back on his writing at that time, I can see his thinking move from the politics of the moment in Kenya to larger issues of the legacy of colonialism, the emergence of new pan-African identities, and the ways in which his own biography illustrated those themes. Writing in the Guardian, Helon Habila describes his autobiography, One Day I Will Write About This Place, as “subtle”, a coming of age story that helps explain how he became the brilliant and incisive commentator he was as a grown man.

What Helon and other readers didn’t know was that Bin had left a key part out of that autobiography: his identity as a gay man. In 2014, he came out in a “missing chapter” from that book, a letter to his late mother titled “I am a homosexual, mum”. In it, he explains that it took him until he was 39 to self-identify as gay, and until he was 43 to come out publicly. His coming out was a deeply brave act, as homosexuality is not recognized under Kenyan law, sexual acts between men are a felony, and there are no legal protections against discrimination for gay citizens. Over the last few years, he’s been an extremely visible LGBT activist, using the combination of his ever-sharp wit and his increasing fabulousness to bring the issue of LGBT equality to new levels of prominence and visibility in Kenya. It’s a terrible irony of his death that the Kenyan high court is about to issue a ruling that may recognize rights for LGBT Kenyans.

I sent Bin congratulations after his coming out, but the next exchanges I had with him were around his health, which took a sharp turn for the worse in 2015, with a series of strokes. Friends helped raise money for him to seek treatment in India, and he recovered well enough to tour and speak. Unfortunately, it was another stroke that felled him last night.

I am reaching the age where I am starting to lose peers. Not lots of them yet, thank god, but enough that I have noticed a pattern. I search my email and look at what we talked about and when. With Binyavanga, it’s logistics: where might we meet up and when? There’s a long exchange about Kenyan musicians Just a Band and helping find them gigs at US colleges, thoughts on what US schools are good places to spend a semester as a writer.

Today I realized that I am looking not just for memories, but for reassurance that I didn’t leave a last email unanswered. And while I’m glad that my last exchange with Binyavanga was one where he asked a question and I answered, I’m angry at myself that I hadn’t reached out in the last couple of years to ask him a question: how he was, what he was doing and thinking, his thoughts on the high court case.

Binyavanga was an inspiration as a thoughtful, brave, colorful, provocative, passionate and wise man. His transformation into a fuller, happier version of himself as he became an avatar of queer Africa was remarkable to watch, and an inspiration to think about what transformations I want to make in my own life as a mostly het, cis-gendered, middle-aged white dude. I regret that I didn’t have a last chance to talk with Binyavanga, waiting as he rolled a cigarette, collected his thoughts and declaimed his truths.

Rest in peace.

Daily Active Kenya has a fine collection of photos and quotes from Binyavanga.

Don’t use A. Briggs (http://www.ethanzuckerman.com/blog/?p=5574, 2019-01-13)

If you’re a frequent traveler, you probably have needed a visa expediter at some point. Good expediters can get you out of a serious jam, helping you get a visa or even a new passport in a short time. For over a decade, I used A. Briggs, a long-established expediter used by many large firms and institutions. They once helped me get a Nigerian visa and a new passport in under a week, which was pretty amazing.

But they’ve gone downhill. Way down. I’m enroute to Nairobi today and from there to Sierra Leone, and given some tight timing, I sent my paperwork to A. Briggs to get the Sierra Leone visa. I should have backed off once I noticed some significant changes to their website. They have been acquired by another firm, CIBT, and their application process is now loaded with hidden fees. By default, you’re signed up for a number of expensive extras, including a $25 fee for keeping a digital copy of the visa they obtain and $25 for registering you with the US State Department, a service the US government provides for free. The online process heavily upsells their “concierge service”, which promises handholding through the visa process for a mere $300 extra – in retrospect, I wonder whether my dreadful experience would have been better or worse after paying that extortionate fee.

People use visa expediters because they need a visa in a narrow window of time – you’re basically paying someone to carry your paperwork to the consulate, wait for it to be completed and send it back to you. The most critical piece of the application is the time by which you need the visa, which in my case was Friday, as my flight to Kenya left Saturday at noon. I spoke to Briggs several times through the process, as they needed even more documents for Sierra Leone than expected, and they assured me they’d have the visa by Wednesday to send it to me on Thursday. When I didn’t get word from a courier that it was en route on Thursday, I called. Turned out they had gotten the wrong visa – a tourist visa instead of the much more expensive, multiple entry business visa I’d asked for. Instead of calling me and giving me the choice of traveling with the tourist visa – which I would have chosen – they sent the passport back to the embassy. This meant I wouldn’t have the visa until Friday, and there was no way to get it before getting on my plane.

I got on the phone and got to a manager at Briggs who offered me the solution of a same day courier to deliver me the visa… for a mere $729. When I explained that this was their mistake not mine, she offered to have a courier meet me at the airport just before my flight, for only $200, which she rapidly reduced to $80. (It’s not clear what I might have been able to bargain the $700+ courier down to, but it strongly suggests that A. Briggs is marking up the cost of courier services as another revenue stream.)

I scheduled delivery of my passport to JFK for 10am the day of my flight, which left at noon. Tight but doable. The person I worked with gave me several numbers to try if there were any problems. Predictably, there were. When no courier contacted me by 10am, I started calling numbers. All went to voicemail boxes which hadn’t been set up, except one the woman had given me as her business cellphone, which went to a very confused woman in DC who had nothing to do with the company. Even though no one at A. Briggs or their parent company answered their phones, fortunately their courier did… who explained that A. Briggs had requested delivery at 11am, the time the flight would be closing. I begged the courier to come as fast as he could, tipped him generously when he made it by 10:40 and made my flight with a few seconds to spare.

So yes, I got the visa. I also vomited twice from stress, first when I discovered they’d resubmitted the passport, creating the crisis, and again when I discovered the courier wasn’t coming. Oh, and for such thoughtful service, A. Briggs charged me over $400 in handling fees on top of the $160 visa fee.

Don’t use them, or any company that’s part of CIBT. They won’t give you direct phone numbers to talk with whoever is processing your visa unless you pay an absurd extra fee. Their phone system is misconfigured, so if you’re in a jam, trying to reach someone, you’ll be sent to a broken voicemail inbox. I have no way of knowing whether my miserable experience was incompetence, or a new business strategy – I suspect the former – but I am now trying to get MIT to stop using A. Briggs as their visa expediter, and I would urge anyone, an individual or a corporate travel department, to find someone else to work with.

(Fun postscript – once I finally got my visa, I expected to see a cancelled tourist visa as well as a business visa. I didn’t – just a clean business visa. Given that there are no pages missing from my passport, and no alterations to that visa page, it looks like A. Briggs just… lied. Either they got the visa on time and failed to send it to me in time, or they didn’t get it until a day late… or maybe they simply didn’t send it on time so they could charge fees on top of what they paid a courier to deliver it. Please, please don’t use this company’s services.)

Deceptive ads and the DRC election: help us document possible election fraud (http://www.ethanzuckerman.com/blog/?p=5569, 2019-01-07)

en français, ci-dessous

The Democratic Republic of Congo held presidential elections on December 30, 2018. Preliminary results were originally scheduled to be released yesterday, January 6th, but the head of the electoral commission has delayed reporting those results because as of Saturday the 5th, less than half of the votes had been transported to counting centers.

So why are ads on Google and Facebook, apparently targeted towards internet users in DRC’s neighbor, Congo-Brazzaville, declaring Emmanuel Shadary to be DRC’s new president?

The ads above were forwarded to me from an NGO worker in Brazzaville, across the river from Kinshasa, the capital of the DRC. There’s regular traffic between Brazzaville and Kinshasa, which may be one of the major ways information is getting into DRC, as election officials have shut off the internet, turned off SMS messaging, and ordered Radio France Internationale off the air.

These ads would be illegal in DRC, where it is prohibited to announce an election winner before the electoral commission releases results. Furthermore, there’s a good chance that they are fake news, designed to help the incumbent government remain in power. Unfortunately, Facebook and Google’s powerful ad systems may be being used to reinforce election fraud, either by targeting these ads to Brazzaville or to DRC itself, where a small number of people are still on the internet. (While 3G and 4G services are down, some businesses are reported to be online.)

Background: Joseph Kabila has been president of the Democratic Republic of Congo for the past 18 years; he took office after his father, President Laurent-Désiré Kabila, was assassinated in 2001. Elected to two terms in 2006 and 2011, Kabila was mandated to step down from his office in 2016. He didn’t. Instead, DRC’s electoral authority announced that an election couldn’t be held until 2018. This is that election, and Kabila eventually announced that he would not stand.

Instead, he threw his support behind Emmanuel Shadary, who served under Kabila as minister of the interior. During his time serving Kabila, Shadary controlled the police and security services, and is alleged to have used those forces to violently suppress protests and to arrest opposition politicians. He has been sanctioned by the European Union for human rights violations and is prohibited from entering the EU.

The Catholic Church, a powerful force in Congo, monitored the elections using 40,000 observers and states that it knows who actually won the elections. Given that businessman Martin Fayulu had led Shadary by more than 30 percentage points in recent polls, the Church’s call for the release of results is seen as an indication that they believe Shadary has lost the election.

If you are anywhere in DRC, or in Brazzaville, Kigali, Gabarone, Kampala or in other locations that border on DRC, and you’re seeing ads that declare any candidate the winner of the DRC elections, PLEASE TAKE SCREENSHOTS including the URL of the page. Please click on the ad, and screenshot the page it returns, including the URL. Send those screenshots to my team at MIT: ethanz AT mit DOT edu – we are collecting these images so we can ask Google and Facebook to prevent the transmission of false information that could be used to cement a stolen election.

Updates –
– translation in French follows below
– I have spoken with FB – they’ve identified the ad featured above and removed it. That said, there are likely more to come, and we could use help identifying others that appear.

# Publicités trompeuses et élections en RDC: aidez-nous à documenter une
éventuelle fraude électorale.

La République Démocratique du Congo a tenu des élections présidentielles
le 30 décembre 2018. Les résultats préliminaires devaient initialement
être publiés hier, le 6 Janvier, mais le président de la commission
électorale a reporté la publication de ces résultats car, [à la date du
samedi 5, moins de la moitié des votes avaient été transportés vers les
centres de comptage](https://www.bbc.com/news/world-africa-46771360).

Alors, pourquoi y avait il des publicités sur Google et Facebook,
apparemment destinées aux internautes du voisin de la RDC, le
Congo-Brazzaville, annoncant que Emmanuel Shadary est le nouveau
président de la DRC?

Les publicités ci-dessus m’ont été transmises par un employé d’une ONG à
Brazzaville, de l’autre côté du fleuve par rapport à Kinshasa, la
capitale de la RDC. Il y a un trafic régulier entre Brazzaville et
Kinshasa, ce qui pourrait être l’un des principaux flux d’information en
entrant en RDC, car [les responsables des élections ont coupé
l’internet, désactivé les SMS et bloqué la diffusion de Radio France
Internationale
(RFI)](https://www.theguardian.com/world/2019/jan/01/drc-electoral-fears-rise-as-internet-shutdown-continues).

Ces publicités seraient illégales en RDC, où il est interdit d’annoncer
un gagnant avant que la commission électorale ne publie les résultats.
En outre, il y a de fortes chances pour que ces informations soient
fausses, conçues pour aider le gouvernement en place à rester au
pouvoir. Malheureusement, les systèmes de publicité de Facebook et de
Google pourraient être utilisés pour crédibiliser la fraude électorale,
soit en ciblant ces publicités sur Brazzaville, soit sur la RDC même, où
un petit nombre de personnes se trouvent encore sur Internet. (Bien que
les services 3G et 4G soient coupés, certaines entreprises semblent
avoir accès à internet.)

Contexte: Joseph Kabila est président de la République démocratique du
Congo depuis 18 ans. Il a pris ses fonctions après l’assassinat de son
père, le président Laurent-Désiré Kabila, en 2001. Élu à deux mandats en
2006 et 2011, Kabila a été mandaté de quitter la présidence en 2016. Il
ne l’a pas fait. Au lieu de cela, les autorités électorales de la RDC
ont annoncé qu’une élection ne pourrait avoir lieu avant 2018. C’est
cette élection et Kabila a finalement annoncé qu’il ne se présenterait pas.

Au lieu de cela, il a apporté son soutien à Emmanuel Shadary, qui a été
ministre de l’intérieur sous Kabila. Au cours de ses années au service
de Kabila, Shadary contrôlait la police et les services de sécurité et
aurait utilisé ces forces pour réprimer violemment des manifestations et
arrêter des hommes politiques de l’opposition. Il a été [sanctionné par
l’Union européenne pour violation des droits de
l’homme](https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32017D0905&from=EN)
et il lui est interdit d’entrer dans l’UE.

L’Église catholique, une force importante au Congo, a surveillé les
élections à l’aide de 40 000 observateurs et a déclaré connaitre le
vainqueur des élections. Étant donné que l’homme d’affaires Martin
Fayulu avait plus de 30 points d’avance sur Shadary dans les derniers
sondages, l’appel de l’Église à la publication des résultats est [perçu
comme une indication qu’ils estiment que Shadary a perdu les
élections](https://www.nytimes.com/2019/01/04/world/africa/fayulu-congo-presidential-vote-catholic.html).

Si vous vous trouvez n’importe ou en RDC, à Brazzaville, à Kigali, à
Gabarone, à Kampala ou dans quelqu’autre localité limitrophe de la RDC,
et que vous voyez des publicités déclarant un candidat vainqueur des
élections en RDC, VEUILLEZ FAIRE DES COPIES D’ÉCRAN, comprenant l’URL de
la page. Merci de cliquer sur la publicité et de prendre une copie
d’écran de la page affichée, ainsi que de l’URL. Envoyez ces captures
d’écran à mon équipe du MIT: ethanz AROBASE mit POINT edu – nous
collectons ces images afin de demander à Google et Facebook d’empêcher
la transmission de fausses informations qui pourraient être utilisées
pour cimenter une élection volée.
