Ethan Zuckerman’s online home, since 2003

The price of life on Florida’s Death Row

The world is slowly moving to abolish the death penalty. 140 countries have abolished the punishment either in law or in practice, having not executed a prisoner in the past ten years. The majority of US states still permit the death penalty, but the total number of people sentenced to death in 2012 dropped below 100 for the first time since the late 1970s, and executions are slowing as well.

But not in Florida. The State of Florida has an unusual approach to the death penalty. They are the only state where a simple majority on a jury can vote to sentence a person to death. (In most states, unanimous agreement is required.) And they lead the nation in exonerations, where lawyers and activists uncover evidence that someone sentenced to death is innocent, according to an editorial in the Tampa Bay Times. (The figures in the editorial come from the Death Penalty Information Center, which lists 142 exonerations, with 24 from Florida.) In other words, Florida sentences a lot of people to death, and they seem to get it wrong quite often.

This situation is about to get worse. Emily Bazelon wrote a powerful article for Slate examining Florida’s new law, the “Timely Justice Act”, which requires the governor to sign death warrants within 30 days of an inmate’s final appeal, and requires the state to execute the condemned within 180 days of that warrant. That’s a lot quicker than executions are generally carried out. Inmates remain on death row in Florida for 13.2 years on average, less than the nationwide average of 14.8 years.

What’s the rush? The purpose of the bill, sponsors say, is to ensure that executions are carried out in a timely fashion, to increase public confidence in the judicial system. One of the sponsors of the bill, Florida Republican Matt Gaetz, quipped, “Only God can judge. But we sure can set up the meeting.” But, as Bazelon points out, Florida’s death penalty system is so flawed that it often requires years to uncover evidence that would exonerate a death row inmate.

There’s a brutal logic behind Florida’s bill. The Death Penalty Information Center calculates that maintaining death row costs Florida $51 million a year more than holding inmates for life, given the extra security and maintenance costs of death row facilities. Shorter stays on death row equal lower costs – the only downside is the likelihood of killing people who might well have been found innocent given years to explore their cases.

Consider the case of Clement Aguirre, on death row in Florida since 2006 for the 2004 murders of a mother and daughter found dead in their trailer home. DNA evidence obtained by the Innocence Project in 2011 strongly suggests that Aguirre is innocent of the murders, and he is still fighting to overturn his conviction.

Fixing Florida’s criminal justice system requires more than building opposition to the death penalty or funding reviews of death penalty cases through the Innocence Project. It requires providing high quality public defenders to those accused of crimes. Bazelon reports that Florida’s death penalty defenders are some of the worst in the nation, and have allowed clients to go to death row without ever meeting them or responding to their letters.

Unfortunately, this means spending more money on criminal justice, not less. Organizations like Gideon’s Promise are helping young lawyers become public defenders and trying to improve the profession. One saving grace in Florida’s atrocious law is modest funding for public defense in northern Florida, but it’s far less support than the state needs to ensure that people facing the death penalty get a fair trial.

I had a conversation the other day with advisors to Northeastern University’s NuLawLab, which is dedicated to the idea of providing affordable legal services to all 7 billion people on the planet. One of the advisors expressed interest in the idea that new data sets could help make the case that failing to provide people with adequate representation has higher costs than representing them well – i.e., someone who might have fought for their home with legal counsel ends up creating societal costs through needing housing assistance. I’m supportive of the concept, but I worry that such an economic analysis needs to incorporate human rights. It’s cheaper for Florida to fail to represent indigent defendants and rapidly push them to execution than it is to represent them well and give time for the Innocence Project and others to try to establish their innocence. The only cost is the lives of people unlucky enough to be innocent but convicted of murder in Florida.


I encountered Bazelon’s story through This American Life, which ran an excellent set of short, timely stories around the theme, “This Week”. I’m normally grumpy when TAL denies me the long-form stories I so love, but grateful they featured this story.


Big stories and little details: what Charles Mann misses

Charles Mann offers a big story in the latest issue of the Atlantic. It’s 11,000 words, and it’s based around an audacious premise: the end of energy scarcity. The peg for the story is Japan’s ongoing research on methane hydrate, an amalgam of natural gas trapped in water ice that occurs in oceans around the world. If methane hydrate can be harvested, Mann tells us, the global supply of hydrocarbon fuels is virtually unlimited. This, he argues, would have massive geopolitical and strategic implications, as the history of the twentieth century can be read in part through the lens of wealthy nations without oil seeking the black stuff in less developed lands. New forms of power might center on who can extract ice that burns like natural gas.

The bulk of the Mann piece is a debate over “peak oil”, an idea put forward by M. King Hubbert in the 1950s, when he correctly predicted that US oil production would slow. Mann’s piece pits Hubbert against Vincent E. McKelvey, his boss at the US Geological Survey for years, who argued that energy supplies are virtually inexhaustible, though the costs to extract them increase as we use up the “easy” oil ready to burst above the surface. While Hubbert’s predictions about US oil production were initially right, Mann argues, the rise of techniques like horizontal drilling and hydrofracking means McKelvey is right in the long run. If we need methane hydrate – and Japan does, as it lacks other hydrocarbon resources – we’ll find a way to pay for it. The argument only looks like a contradiction, Mann argues, because it’s an argument between geologists on one side and social scientists on the other, and from the social scientists’ point of view, so long as there’s economic demand for hydrocarbons and the means to extract them, we should expect these fuels to keep flowing.

There’s something very attractive about Mann’s argument. He writes as an insider who’s going to let you in on what the smart guys know that poor, dumb saps like me would never imagine. It’s a tone you hear a lot in Washington policy circles, a realpolitik view of the world that suggests you can entertain yourself with solar panels as long as you’d like, but the adults in the room are deciding who gets invaded for their petrochemical wealth and whose civilizations will collapse into a new Medieval period.

Fortunately, there are some smart responses to Mann’s article, some vitriolic, some patient and thoughtful. (To the Atlantic’s credit, they published both Mann’s piece and Chris Nelder’s excellent response.) The essence of the responses is this: yes, there’s a whole lot of methane trapped in ice. Yes, if we could extract it, we’d have a whole lot of fuel that burns with half the carbon emissions of coal. But it’s unclear we can ever extract this at an affordable cost. (Canada just dropped out of the methane hydrate race, perhaps because they see extracting oil from tar sands as a more plausible source of hydrocarbons.)


In Soviet Russia, Google Researches You!

Martin Feuz, Matthew Fuller and Felix Stadler have a very clever paper in a recent edition of First Monday, titled “Personal Web searching in the age of semantic capitalism: Diagnosing the mechanisms of personalization.” In their study, they create three artificial search profiles on Google based on the topics of interest to three different philosophers (Foucault, Nietzsche and Kant, using terms from the indices of their books) and compare the results these personalized profiles receive to the results an “anonymous” profile – i.e., one without Google’s Web History service turned on – receives.

They see a very high frequency of personalization – personalized search results appear in 50% of search queries for some of their profiles – and a high intensity as well – in some cases, 64% of results differ in content or rank from an anonymous profile. While there’s apparently lots of personalization going on, and personalized results emerge early in the training process, the authors don’t see the search algorithms reaching deep into the “long tail” of content. When personalized results differ from anonymous search results, 37% of the novel results can be found on the second page of anonymous results, while only 7% of novel results are found between results 100 and 1000, and 13% beyond result 1000. Finally, they are able to demonstrate that personalization is probably not based solely on the content an individual has searched for in the past – they see ample evidence that social networking content is being heavily personalized for Nietzsche based only on his searches for power, morality and will, for instance.
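To make those measurements concrete, here is a minimal sketch of how one might compute such metrics for a single query – comparing a personalized top-10 against an anonymous top-10, and bucketing novel results by where they sit in the deep anonymous list. This is my illustration of the general approach, not the authors’ code; all names are hypothetical:

```python
from collections import Counter

def personalization_stats(anon_top10, personalized_top10, anon_deep):
    """Compare a personalized top-10 against the anonymous top-10 for the
    same query. All arguments are ordered lists of URLs, rank 1 first;
    anon_deep is the full anonymous result list (e.g. the top 1000)."""
    # A slot counts as "personalized" if it differs in content or rank
    # from the anonymous list (roughly the paper's intensity measure).
    differing = sum(
        1 for i, url in enumerate(personalized_top10)
        if i >= len(anon_top10) or anon_top10[i] != url
    )

    # Novel results: in the personalized top 10 but absent from the
    # anonymous top 10.
    novel = [u for u in personalized_top10 if u not in anon_top10]

    # Bucket each novel result by its rank in the deep anonymous list.
    def bucket(url):
        try:
            rank = anon_deep.index(url) + 1
        except ValueError:
            return "beyond_1000"   # never surfaced for the anonymous user
        if rank <= 20:
            return "page_two"      # anonymous results 11-20
        if rank <= 100:
            return "top_100"
        return "deep_tail"         # anonymous results 101-1000

    intensity = differing / len(personalized_top10)
    return intensity, Counter(bucket(u) for u in novel)
```

Run over many queries per profile, aggregates of these two numbers would reproduce the kinds of figures the paper reports.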

That last example gives a bit of the flavor of the paper – it’s both a serious and methodologically defensible piece of research and a clever prank, demonstrating that Google will try to assign Immanuel Kant to a psycho/demographic group and target content based on those assumptions. This playful tone is accompanied by a willful naïveté that’s slightly frustrating – they start by taking Google’s descriptions of the effects of personalization at face value, then offer the surprise that the hypotheses they derive from Google’s PR are invalidated. It’s not especially surprising for the reader, however, to discover that Google’s personalization is at least as much about helping advertisers target audiences as it is about helping users find the best possible content. It’s not a surprise to the authors, either – the term “semantic capitalism”, credited to Christophe Bruno, implies that we’ve entered a world where words have market prices, with potentially different values to advertisers than to audiences.

While I find the levels of personalization the authors detect to be fascinating, I wonder whether their experiment correctly isolates the factors involved with personalization. Eli Pariser, in his talk last year at PDF and, presumably, in his forthcoming book on the power and dangers of personalization, refers to 57 factors that allow Google to personalize results for users who are not using Web History (the “anonymous” users in this experiment). The authors control for a key variable, conducting all searches from IP addresses in Central London. It’s unclear, though, whether Google is making other extrapolations – perhaps users who execute lots of searches at 3pm are more likely to be middle-aged businessmen than teenage girls, and results are targeted as a result? I’d be very interested to see the authors check whether their anonymous search results are identical or nearly so – if not, there may be a great deal more personalization going on than they are accounting for outside of the experiment’s parameters.

I was struck by the apparent discontinuities in how often personalized search results appeared for the three different profiles. In one training session, there’s a sharp spike in personalization between sessions, a test of results where personalization appears three times as often as in other sessions. In another, there are two, smaller spikes, and in the third, a three-session long spike. With no easy way to explain what’s causing these spikes, it’s possible to speculate that Google’s algorithms for personalization are not just opaque and complex, but adaptive and changing. While the authors are experimenting with Google, it’s reasonable to assume that Google is experimenting with them, changing levels of personalization to see whether Google is able to achieve its desired result: clicks on ads.

I found the authors’ findings about the long tail particularly fascinating, though I’d frame them slightly differently than they do. They see the fact that most personalized results (results that differ between a query from a profiled and an anonymous user) that appear in the top 10 come from the top 100 results delivered to anonymous users as evidence that Google’s personalization is pretty shallow. I see the finding that 13% of personalized results in the top 10 come from outside of the top 1000 as downright remarkable – I’d thought that Google’s algorithm, both in terms of page rank and term relevancy, would resist such large reshufflings of the deck, bringing up pages considered irrelevant for an “anonymous” user to prominence for a profiled user. I see that finding as quite encouraging – even buried deep in the slag heap of low pagerank and low relevancy, personalization might occasionally bring a long-tail web page to the surface.

Of course, there’s another explanation: again, Google’s testing the experimenters as they’re testing the system. Google’s long said that they present different results to users as a way of testing result relevance – if a long-tail page appears in results and is widely clicked, perhaps it’s time to weight it more heavily or to tweak the algorithms that buried it in the first place.

This is the core problem of studying a system like Google. As the authors acknowledge, “How can we study a distributed machinery that is both wilfully opaque and highly dynamic? One which reacts to being studied and takes active steps to prevent such studies from being conducted on the automated, large-scale level required?” That second question is a reference to a methodological challenge the authors faced – it’s deeply atypical behavior to click on every possible results page for a search query, which the authors needed to do, and Google periodically blocked their IPs on suspicion that they were bots attempting to scrape or game the search engine.

Google is not willfully opaque just out of spite or a desire to protect its secrets from Microsoft or other search engine builders. The sort of work the authors are conducting is exactly the sort of work search engine “optimizers” do in attempting to help their clients achieve a higher ranking in Google’s results. Were Google’s methods of personalization easy to understand, we would expect SEO folk to take advantage of their newfound knowledge, as we’d expect them to use any knowledge about Google’s ranking algorithms. The more transparent those algorithms are, generally speaking, the more likely they are to be gamed, and the more gaming occurs, the less useful Google is for most users.

I wonder if there’s a provocative hypothesis the authors haven’t considered in analyzing the behaviors they saw – Google offers different results with a high frequency, in part because they’re trying to obfuscate their algorithms. The faster you poll the engine, the more variability you get, making it harder to profile the engine’s behavior. We can discard this hypothesis if the authors checked results of their anonymous searches against one another and got highly similar results – if not, then it’s possible that some of the hidden variables Eli Pariser talks about are in play… or that there’s an inherent amount of noise in the system, either for purposes of obfuscation or for allowing Google to try A/B tests with live users.
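The check proposed above is straightforward to operationalize: run the same query repeatedly from an “anonymous” profile and measure how stable the top-10 sets are. A minimal sketch of that baseline-noise measurement (my illustration, not the authors’ method):

```python
from itertools import combinations

def baseline_noise(runs):
    """Mean pairwise Jaccard overlap of top-10 result sets across repeated
    'anonymous' runs of the same query. 1.0 means perfectly stable results;
    anything well below 1.0 is inherent noise that any measurement of
    personalization would need to subtract out."""
    overlaps = [
        len(set(a) & set(b)) / len(set(a) | set(b))
        for a, b in combinations(runs, 2)
    ]
    return sum(overlaps) / len(overlaps)
```

If this number sits near 1.0, the anonymous baseline is trustworthy; if it doesn’t, some of the “personalization” the experiment detects may just be churn in the engine itself.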

Researchers want to understand how Google works because it’s probably the most important node (at least at the moment) in our online information ecosystem. Whether we’re interested in driving attention or revenue, what Google points us towards becomes more powerful. But the better we understand Google, the more likely we are to break it. Security through obscurity is a dreadful strategy, but I’m hard-pressed to offer a better answer to Google for how they can prevent their engine from being gamed.

Deep in Feuz, Fuller and Stadler’s paper is the sense that there’s something unheimlich about the idea that as important an influencer as Google could be as mercurial as it is. Personalization is disturbing to the extent that it separates us from the real, true, stable search results, the ur-results Google is withholding from us in the hopes of selling us ads more effectively… but even more disturbing is the idea that there’s no solid ground, no single set of best results Google could deliver, even if it wanted to.


A world roundup

Some other stories I’m trying to follow, in addition to the news from Bahrain:

There’s very little news from Libya, as protesters take to the streets, especially in the eastern city of Benghazi. Libya tightly restricts press coverage, and the New York Times observes that while Libya hasn’t been able to prevent news from Tunisia and Egypt from inspiring protesters to take to the streets, it has been pretty effective at restricting news from Libya from reaching the global press. There are reports that Libya began blocking access to social media sites, and last evening, Libya disconnected from the internet.


This graphic from Arbor Networks shows two sharp drops in Libyan internet traffic during the day, and a thorough shutoff at night. Heading forward, we’re likely to see reporting via land line phones, and perhaps some computer users dialing into modem banks in Jordan and elsewhere, but the shutdown is likely to make what little reporting from the ground we’ve had even harder to get.

I argued previously that there’s great danger for protesters who are inspired to take to the streets in countries where the media isn’t paying attention – Libya is a special case of this scenario, as it’s extremely difficult for anyone to report, via traditional or social media. As Twitter user @EnoughGaddafi puts it, “For all those frustrated by reporting on #libya understand this. There is Zero indpt media on the ground. Nothing at all.” In the absence of coverage, it sounds like suppression of the protests has been quite brutal, with a death toll of at least two dozen, perhaps as high as 70.


My friend and former colleague Dewitt Clinton offers a decidedly geeky perspective on the Libyan unrest – a reminder that the bit.ly URL shortener (which I’ve been trying out the past few weeks) is located on a Libyan domain name:

In case it isn’t obvious, I’m still not a fan of URL shorteners. They’re a bug, not a feature.

And then things like this happen: http://goo.gl/fx3iA. Bye bye bit.ly? That’d be a lot of dead links.

I felt a great disturbance in the Web, as if millions of URLs suddenly cried out in terror and were suddenly silenced.

As far as I can tell, Libya Telecom (http://goo.gl/SsMAi) runs .ly. Willing to bet that they’d shut it down plenty fast if Gaddafi said to.

He’s not the first to observe that bit.ly’s domain is connected to a country that’s not exactly amenable to free speech. is.gd advertises itself as an “ethical URL shortener”, in part because they’re not vulnerable to shutdown by the Libyan government, which has previously shut down vb.ly, a “sex-positive” URL shortener. I suspect that if bit.ly has trouble, they’ll rapidly move everyone over to j.mp, which uses a domain name from the Northern Mariana Islands, which as of yet don’t appear to be experiencing street protests.

Despite the Libyan internet shutdown, bit.ly is still working. The site’s not hosted in Libya, and according to the CEO of the company that runs bit.ly, only two of the five authoritative nameservers for .ly are in the country. So while we should worry about people being massacred outside of the eyes of the media, at least we don’t have to change URL shorteners.


Given the dramatic developments in Tunisia, Egypt and now throughout the Arab world, it can be hard to remember the extent to which Wikileaks dominated online conversation late last year. While there was an interesting conversation about whether Wikileaks could be blamed or credited for protests in Tunisia, Wikileaks appears to be releasing documents in reaction to protests these days. Today’s dump of cables includes a wealth of dispatches from the US Embassy in Manama. It’s helpful, as it gives reporters another possible angle in analyzing the situation on the ground, and an extremely media-savvy way to keep Wikileaks in the news, even if the releases in the cables are following, not moving, the news.


While Gabon and Sudan may be the first sub-Saharan African nations to hold protests inspired by events in Tunisia and Egypt, the implications of those successful revolts are being felt across the continent. Trevor Ncube, publisher of South Africa’s exemplary Mail and Guardian, and publisher of two opposition newspapers in his native Zimbabwe, has been reflecting on the possibility of a popular revolt against the Mugabe regime. In an interview two weeks back, Ncube argued that it was unlikely that Zimbabweans would follow in Egyptians’ footsteps, in part because the army was so closely identified with the ruling party, and not with the country as a whole. Today, Ncube continued along these lines, arguing that the long history of state-sanctioned violence against the general populace makes it harder for Zimbabweans to decide to take to the streets in protest. While he wasn’t directly addressing Bahrain or Libya, I can’t help but read these comments in that light – when does evidence that a government will use deadly force against dissent convince people to stay at home, rather than taking to the streets?

Committee to Protect Journalists points out that Zimbabwe’s state controlled media has been scrupulous about avoiding mention of protests in Egypt and Tunisia… except to criticize the US’s role in “interfering” with those protests…! The protests are a sensitive matter in Ethiopia as well, where a prominent government critic was taken in for questioning after writing about matters in Egypt and Tunisia.


If so much of the world weren’t on fire, Uganda’s elections would likely be a more high-profile affair. Yoweri Museveni, who came to power as a rebel leader in 1986, is seeking a fourth presidential term, challenged by his former physician, Kizza Besigye. Polling went relatively smoothly today, though controversy is possible when the results are announced this weekend. (No one expects Museveni to lose – the question is whether protests about the fairness of the elections will erupt into a serious challenge to his re-appointment.)

Again, if we weren’t all watching North Africa and the Gulf, I suspect this story about Uganda blocking certain keywords in SMS messages would have gotten more attention:

The Uganda Communications Commission Friday released 18 words and names that it has instructed mobile phone short message service (SMS) to flag if they are contained in any text message. They are then supposed to read the rest of the content of the message and if it is deemed to be “controversial or advanced to incite the public”, will be blocked.

The words are ‘Tunisia’, ‘Egypt’, ‘Ben Ali’, ‘Mubarak’, ‘dictator’, ‘teargas’, ‘kafu’ (it is dead), ‘yakabbadda’ (he/she cried long time ago), ‘emuudu/emundu’ (gun), ‘gasiya’ (rubbish), ‘army/ police/UPDF’, ‘people power’, and ‘gun/bullet’.
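As described, this is a two-stage filter: a keyword match flags the message, then a judgment on the rest of the content decides whether to block it. A minimal sketch of that logic (keyword list abbreviated from the reported one, and the `review` step is a hypothetical stand-in for whatever human or automated judgment the commission actually applies):

```python
# Abbreviated from the reported list of flagged terms.
FLAGGED_TERMS = {"tunisia", "egypt", "ben ali", "mubarak",
                 "dictator", "teargas", "people power"}

def review(message):
    """Stage 2: stand-in for the 'deemed controversial or advanced to
    incite the public' judgment -- a placeholder heuristic here."""
    return "incite" in message.lower()

def filter_sms(message):
    """Stage 1 flags any message containing a listed term; stage 2
    decides whether a flagged message is actually blocked."""
    text = message.lower()
    if any(term in text for term in FLAGGED_TERMS):
        return "blocked" if review(message) else "flagged"
    return "delivered"
```

Even this toy version shows how blunt the instrument is: any message mentioning “Egypt”, however innocuous, gets pulled aside for inspection.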

I got a fascinating tweet from a Ugandan friend, who reported that SMS was also being used in a viral campaign to support the President. “Another Rap. Vote Museveni. Send this 2 7 pple 2 receive 7000 worth of airtime” If the “another rap” part of that message is obscure to you, I point you to this wonderfully absurd video:

Museveni is reciting a pair of traditional Kinyankole rhymes – between the two, he announces, “You want another rap?” It’s been remixed into a catchy song that now serves as his campaign anthem. I suspect that his “re-election” will have more to do with crackdowns on the press and intimidation of the opposition than his musical skills.


And, in matters of a world on fire, let’s not forget the Ivory Coast, still locked in a battle between an elected president and one who won’t let go. Desperate to continue paying the soldiers who are keeping him in power, Laurent Gbagbo has nationalized the banks, many of which were in the process of shutting down or pulling out of the country. Not a good sign, but it might point to the beginning of the end for a standoff that’s seemed intractable up until now.


Wikileaks, analysis and speculative fiction

It’s a few days before Christmas, ten days before the end of 2010, and there’s the wonderful sense of deceleration as I flip through my browser tabs. The blizzard of email has slowed to a few, scant flakes, the roaring river of Twitter updates is a trickle. There’s time to read, and evidently, time to reflect and write at more length.

In the past couple of days, a couple of excellent essays – and some flawed, but interesting ones – have been posted reflecting on Wikileaks, Anonymous and the philosophical motives behind these projects. For me, they’re a reminder that the opinions offered the most rapidly aren’t always (aren’t often?) the most insightful. Wikileaks’s release of diplomatic cables and the actions taken by individuals, organizations, corporations and governments in response have implications for dozens of ongoing debates, about transparency, privacy, internet architecture and ownership, free speech, human rights. It’s not a surprise to me that very smart people have needed a while to think through what’s happened before offering their analysis.

Much of the best writing I’ve read has been either published on or linked to via The Atlantic. Alexis Madrigal is maintaining a great collection of links to commentary on different facets of the case, and he’s also edited a few of the most interesting pieces I’ve recently had the time to read.

One – which I’d put in the “interesting but flawed” pile – is Jaron Lanier’s “The Hazards of Nerd Supremacy: The Case of WikiLeaks”. As one respondent to the piece notes, it’s not really an essay about Wikileaks. Instead, Lanier connects Wikileaks to some of his recent thinking on the internet as a threat to individual creativity, expressed at length in his recent book, “You Are Not A Gadget”. (This review is a sympathetic overview of the book.) Lanier sees a philosophical stance implicit in Wikileaks’s actions and Assange’s motives – the belief that a huge accumulation of data leads towards understanding or truth. Openness by itself isn’t necessarily productive, he argues – it’s possible that openness leads to the breakdown of trust, in each other and in institutions.

In the most interesting part of the essay, Lanier connects Wikileaks to the early days of the Electronic Frontier Foundation, where very smart cryptographers and digital pioneers explored the idea that hackers could change history, leveling the playing field with a superior understanding of technology. He sees this perspective as overly romantic and tells us he made the decision to step away at that point. In turn, he’s critiquing current Wikileaks supporters, and especially the Anonymous DDoSers, as ineffective and potentially dangerous romantics, a critique that might be better received had he not slammed them as “nerd supremacists” in his title.

Lanier asked Madrigal to disallow comments on his essay, as he wanted people to engage with the text and not skip ahead to refutations or responses. Madrigal agreed, but evidently didn’t understand how to actually shut off commenting within the Atlantic’s publishing system – the story began accumulating comments, and Madrigal felt compelled to step in and shut down the thread. This, in turn, led to tough questioning by smart folks like Jay Rosen about the wisdom of disallowing comments on a controversial essay. I found Madrigal’s post explaining what happened, why he acted as he did – and the open comment thread that followed his explanation – to be one of the best examples of an online community manager engaging with criticism and looking for a solution going forwards.

Madrigal also gets my respect for featuring an excellent essay from Zeynep Tufekci responding to Lanier’s missive. (Hers is the observation that Lanier isn’t writing about Wikileaks, but about his own framing of issues about technology, privacy and individuality.) She offers a thorough critique of Lanier, pivoting on the idea that Lanier errs in blurring the line between individuals and organizations, especially governments, and ends up trying to protect the privacy of powerful institutions that don’t have the same rights as individuals, no matter what the Supreme Court may have said in Citizens United.

In a neat rhetorical move, Tufekci accuses Lanier of using Wikileaks to promote his own agenda before explaining that Wikileaks really tells us something important about the tension between public and private spaces online (which happens to be her agenda… :-) I share her concerns, and though I don’t come to the same conclusion she does – don’t fear Anonymous; fear corporate control over the Internet – it’s an excellent essay and a great summary of important concerns about the challenges of public discourse in private spaces.

The essay I found most useful in thinking through Wikileaks early in Cablegate was Aaron Bady’s “Julian Assange and the Computer Conspiracy“, which took a close read of a 2006 essay by Assange to elucidate a possible set of motivations behind the release of diplomatic cables. Bruce Sterling takes a very different approach – he uses his knowledge of geek culture and his gift for speculative fiction to map Julian Assange and Bradley Manning onto hacker archetypes and declares the situation surrounding Wikileaks inevitable and melancholy. It’s far from fair – we’re dealing with an Assange who’s a projection of Sterling’s understanding of hacker culture rather than a real individual – but it offers insights that are often easier to deliver in fiction.

Specifically, Sterling does a beautiful job of unpacking the lure of encryption, the romance of the cypherpunks, the tension of “secrets” that aren’t especially secret or exciting, the difference between leaks and journalism. Some of the commenters on the essay challenge Sterling’s understanding of the facts – I think that misses the larger point, which is that Sterling offers a picture of Assange and the logic behind Wikileaks that falls short as a work of biography, but is extremely helpful in understanding why he and his project have captured the attention of so many geeks.

Looking forward to more reflections on Wikileaks and its implications, and to the best part of the year – some extended reading about topics that have nothing at all to do with the internet… Happy holidays, everyone.

