My heart’s in Accra – Ethan Zuckerman’s online home, since 2003

June 21, 2013

The price of life on Florida’s Death Row

Filed under: long bookmark — Ethan @ 5:35 pm

The world is slowly moving to abolish the death penalty. Around the world, 140 countries have abolished the punishment either in law or in practice, not executing a prisoner in the past ten years. The majority of US states still permit the death penalty, but the total number of people sentenced to death in 2012 dropped below 100 for the first time since the late 1970s, and executions are slowing as well.

But not in Florida. The State of Florida has an unusual approach to the death penalty. They are the only state where a simple majority on a jury can vote to sentence a person to death. (In most states, unanimous agreement is required.) And they lead the nation in exonerations, where lawyers and activists uncover evidence that someone sentenced to death is innocent, according to an editorial in the Tampa Bay Times. (The figures in the editorial come from the Death Penalty Information Center, which lists 142 exonerations, with 24 from Florida.) In other words, Florida sentences a lot of people to death, and they seem to get it wrong quite often.

This situation is about to get worse. Emily Bazelon wrote a powerful article for Slate examining Florida’s new law, the “Timely Justice Act”, which requires the governor to sign death warrants within 30 days of an inmate’s final appeal, and requires the state to execute the condemned within 180 days of that warrant. That’s a lot quicker than executions are generally carried out. Inmates remain on death row in Florida for 13.2 years on average, less than the nationwide average of 14.8 years.

What’s the rush? The purpose of the bill, sponsors say, is to ensure that executions are carried out in a timely fashion, to increase public confidence in the judicial system. One of the sponsors of the bill, Florida Republican Matt Gaetz, quipped, “Only God can judge. But we sure can set up the meeting.” But, as Bazelon points out, Florida’s death penalty system is so flawed that it often requires years to uncover evidence that would exonerate a death row inmate.

There’s a brutal logic behind Florida’s bill. The Death Penalty Information Center calculates that keeping inmates on death row costs Florida $51 million a year more than holding them for life, given the extra security and maintenance costs of death row facilities. Shorter stays on death row equal lower costs – the only downside is the likelihood of killing people who might well have been found innocent given years to explore their cases.

Consider the case of Clement Aguirre, on death row in Florida since 2006 for the 2004 murders of a mother and daughter found dead in their trailer home. DNA evidence obtained by the Innocence Project in 2011 strongly suggests that Aguirre is innocent of the murders, and he is still fighting to overturn his conviction.

Fixing Florida’s criminal justice system requires more than building opposition to the death penalty or funding reviews of death penalty cases through the Innocence Project. It requires providing high quality public defenders to those accused of crimes. Bazelon reports that Florida’s death penalty defenders are some of the worst in the nation, and have allowed clients to go to death row without ever meeting them or responding to their letters.

Unfortunately, this means spending more money on criminal justice, not less. Organizations like Gideon’s Promise are helping young lawyers become public defenders and trying to improve the profession. One saving grace in Florida’s atrocious law is modest funding for public defense in northern Florida, but it’s far less support than the state needs to ensure that people facing the death penalty get a fair trial.

I had a conversation the other day with advisors to Northeastern University’s NuLawLab, which is dedicated to the idea of providing affordable legal services to all 7 billion people on the planet. One of the advisors expressed interest in the idea that new data sets could help make the case that failing to provide people with adequate representation has higher costs than representing them well – i.e., someone who might have fought for their home with legal counsel ends up creating societal costs through needing housing assistance. I’m supportive of the concept, but I worry that such an economic analysis needs to incorporate human rights. It’s cheaper for Florida to fail to represent indigent defendants and rapidly push them to execution than it is to represent them well and give time for the Innocence Project and others to try to establish their innocence. The only cost is the lives of people unlucky enough to be innocent but convicted of murder in Florida.


I encountered Bazelon’s story through This American Life, which ran an excellent set of short, timely stories around the theme, “This Week”. I’m normally grumpy when TAL denies me the long-form stories I so love, but grateful they featured this story.

May 8, 2013

Big stories and little details: what Charles Mann misses

Filed under: ideas,long bookmark,Media — Ethan @ 11:33 am

Charles Mann offers a big story in the latest issue of the Atlantic. It’s 11,000 words, and it’s based around an audacious premise: the end of energy scarcity. The peg for the story is Japan’s ongoing research on methane hydrate, an amalgam of natural gas trapped in water ice that occurs in oceans around the world. If methane hydrate can be harvested, Mann tells us, the global supply of hydrocarbon fuels is virtually unlimited. This, he argues, would have massive geopolitical and strategic implications, as the history of the twentieth century can be read in part through the lens of wealthy nations without oil seeking the black stuff in less developed lands. New forms of power might center on who can extract ice that burns like natural gas.

The bulk of the Mann piece is a debate over “peak oil”, an idea put forward by M. King Hubbert in the 1950s, when he correctly predicted that US oil production would slow. Mann’s piece pits Hubbert against Vincent E. McKelvey, his boss at the US Geological Survey for years, who argued that energy supplies are virtually inexhaustible, though the costs to extract them increase as we use up the “easy” oil ready to burst above the surface. While Hubbert’s predictions about US oil production were initially right, Mann argues, the rise of techniques like horizontal drilling and hydrofracking means McKelvey is right in the long run. If we need methane hydrate – and Japan does, as it lacks other hydrocarbon resources – we’ll find a way to pay for it. The argument only looks like a contradiction, Mann argues, because it’s an argument between geologists on one side and social scientists on the other, and from the social scientists’ point of view, so long as there’s economic demand for hydrocarbons and the means to extract them, we should expect these fuels to keep flowing.

There’s something very attractive about Mann’s argument. He writes as an insider who’s going to let you in on what the smart guys know that poor, dumb saps like me would never imagine. It’s a tone you hear a lot in Washington policy circles, a realpolitik view of the world that suggests you can entertain yourself with solar panels as long as you’d like, but the adults in the room are deciding who gets invaded for their petrochemical wealth and whose civilizations will collapse into a new Medieval period.

Fortunately, there are some smart responses to Mann’s article, some vitriolic, some patient and thoughtful. (To the Atlantic’s credit, they published both Mann’s piece and Chris Nelder’s excellent response.) The essence of the responses is this: yes, there’s a whole lot of methane trapped in ice. Yes, if we could extract it, we’d have a whole lot of fuel that burns with half the carbon emissions of coal. But it’s unclear we can ever extract this at an affordable cost. (Canada just dropped out of the methane hydrate race, perhaps because they see extracting oil from tar sands as a more plausible source of hydrocarbons.)

March 24, 2011

In Soviet Russia, Google Researches You!

Filed under: ideas,long bookmark,Media — Ethan @ 6:23 pm

Martin Feuz, Matthew Fuller and Felix Stadler have a very clever paper in a recent edition of First Monday, titled “Personal Web searching in the age of semantic capitalism: Diagnosing the mechanisms of personalization.” In their study, they create three artificial search profiles on Google based on the topics of interest to three different philosophers (Foucault, Nietzsche and Kant, using terms from the indices of their books) and compare the results these personalized profiles receive to the results an “anonymous” profile – i.e., one without Google’s Web History service turned on – receives.

They see a very high degree of personalization – personalized search results appear in 50% of search queries for some of their profiles – and a high intensity of personalization – in some cases, 64% of results differ in content or rank from those of an anonymous profile. While there’s apparently lots of personalization going on, and personalized results emerge early in the training process, the authors don’t see the search algorithms reaching deep into the “long tail” of content. When personalized results differ from anonymous search results, 37% of the novel results can be found on the second page of anonymous results, while only 7% of novel results are found between results 100 and 1000 and 13% beyond result 1000. Finally, they are able to demonstrate that personalization is probably not based solely on the content an individual has searched for in the past – they see ample evidence that content on social networking is being heavily personalized for Nietzsche based only on his searches for power, morality and will, for instance.
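
As a rough illustration of how numbers like these could be computed, here’s a minimal sketch – not the authors’ code – that compares a personalized result list against an anonymous one for the same query, counts the novel top-10 results, and notes how deep in the anonymous ranking those novel results would otherwise sit. The function name and bucket boundaries are my own assumptions.

```python
def personalization_stats(personalized, anonymous, top_n=10):
    """Both arguments are ranked lists of result URLs, best first.
    Returns the share of personalized top-N results that are novel
    (absent from the anonymous top N), and where those novel results
    sit in the deeper anonymous ranking."""
    anon_top = set(anonymous[:top_n])
    anon_rank = {url: i + 1 for i, url in enumerate(anonymous)}  # 1-based ranks

    novel = [u for u in personalized[:top_n] if u not in anon_top]
    origins = {"second page (11-20)": 0, "ranks 21-100": 0,
               "ranks 101-1000": 0, "beyond 1000 / unseen": 0}
    for u in novel:
        r = anon_rank.get(u)
        if r is not None and r <= 20:
            origins["second page (11-20)"] += 1
        elif r is not None and r <= 100:
            origins["ranks 21-100"] += 1
        elif r is not None and r <= 1000:
            origins["ranks 101-1000"] += 1
        else:
            origins["beyond 1000 / unseen"] += 1

    return {"novel_share": len(novel) / top_n, "novel_origins": origins}
```

Applied over hundreds of queries per profile, counts like these would yield both the headline frequency and intensity figures and the long-tail breakdown the authors report.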

That Nietzsche example gives a bit of the flavor of the paper – it’s both a serious and methodologically defensible piece of research as well as a clever prank, demonstrating that Google will try to assign Immanuel Kant to a psycho/demographic group and target content based on those assumptions. This playful tone is accompanied by a willful naïvety that’s slightly frustrating – they start by taking Google’s descriptions of the effects of personalization at face value, then act surprised when the hypotheses they derive from Google’s PR are invalidated. It’s not especially surprising for the reader, however, to discover that Google’s personalization is at least as much about helping advertisers target audiences as it is about helping users find the best possible content. It’s not a surprise to the authors, either – the term “semantic capitalism”, credited to Christophe Bruno, implies that we’ve entered a world where words have market prices, with potentially different values to advertisers than to audiences.

While I find the levels of personalization the authors detect to be fascinating, I wonder whether their experiment correctly isolates the factors involved with personalization. Eli Pariser, in his talk last year at PDF and, presumably, in his forthcoming book on the power and dangers of personalization, refers to 57 factors that allow Google to personalize results for users who are not using Web History (the “anonymous” users in this experiment). The authors control for a key variable, conducting all searches from IP addresses in Central London. It’s unclear, though, whether Google is making other extrapolations – perhaps users who execute lots of searches at 3pm are more likely to be middle-aged businessmen than teenage girls, and results are targeted accordingly? I’d be very interested to see the authors check whether their anonymous search results are identical or nearly so – if not, there may be a great deal more personalization going on than they are accounting for, outside of the experiment’s parameters.

I was struck by the apparent discontinuities in how often personalized search results appeared for the three different profiles. In one training run, there’s a sharp spike in personalization between sessions – a single session in which personalized results appear three times as often as in the surrounding sessions. In another, there are two smaller spikes, and in the third, a spike lasting three sessions. With no easy way to explain what’s causing these spikes, it’s possible to speculate that Google’s algorithms for personalization are not just opaque and complex, but adaptive and changing. While the authors are experimenting with Google, it’s reasonable to assume that Google is experimenting with them, changing levels of personalization to see whether Google is able to achieve its desired result: clicks on ads.

I found the authors’ findings about the long tail particularly fascinating, though I’d frame them slightly differently than they do. They see the fact that most personalized results (results that differ between a query from a profiled and an anonymous user) that appear in the top 10 come from the top 100 results delivered to anonymous users as evidence that Google’s personalization is pretty shallow. I see the finding that 13% of personalized results in the top 10 come from outside of the top 1000 as downright remarkable – I’d thought that Google’s algorithm, both in terms of page rank and term relevancy, would resist such large reshufflings of the deck, bringing up pages considered irrelevant for an “anonymous” user to prominence for a profiled user. I see that finding as quite encouraging – even buried deep in the slag heap of low pagerank and low relevancy, personalization might occasionally bring a long-tail web page to the surface.

Of course, there’s another explanation: again, Google’s testing the experimenters as they’re testing the system. Google’s long said that they present different results to users as a way of testing result relevance – if a long-tail page appears in results and is widely clicked, perhaps it’s time to weight it more heavily or to tweak the algorithms that buried it in the first place.

This is the core problem of studying a system like Google. As the authors acknowledge, “How can we study a distributed machinery that is both wilfully opaque and highly dynamic? One which reacts to being studied and takes active steps to prevent such studies from being conducted on the automated, large-scale level required?” That second question is a reference to a methodological challenge the authors faced – it’s deeply atypical behavior to click on every possible results page for a search query, which the authors needed to do, and Google periodically blocked their IPs on suspicion that they were bots attempting to scrape or game the search engine.

Google is not willfully opaque just out of spite or a desire to protect its secrets from Microsoft or other search engine builders. The sort of work these authors are conducting is exactly the sort of work search engine “optimizers” do in attempting to help their clients achieve a higher ranking in Google’s results. Were Google’s methods of personalization easy to understand, we would expect SEO folk to take advantage of their newfound knowledge, as we’d expect them to use any knowledge about Google’s ranking algorithms. The more transparent those algorithms are, generally speaking, the more likely they are to be gamed, and the more gaming occurs, the less useful Google is for most users.

I wonder if there’s a provocative hypothesis the authors haven’t considered in analyzing the behaviors they saw – Google offers different results with a high frequency, in part because they’re trying to obfuscate their algorithms. The faster you poll the engine, the more variability you get, making it harder to profile the engine’s behavior. We can discard this hypothesis if the authors checked results of their anonymous searches against one another and got highly similar results – if not, then it’s possible that some of the hidden variables Eli Pariser talks about are in play… or that there’s an inherent amount of noise in the system, either for purposes of obfuscation or for allowing Google to try A/B tests with live users.
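
The check I’m proposing is simple to run in principle. Here’s a minimal sketch – my own, with an assumed data layout rather than anything from the paper – of how one might measure the consistency of repeated “anonymous” queries: low overlap between identical queries would point to noise or live A/B testing rather than profile-driven personalization.

```python
from itertools import combinations

def jaccard(a, b):
    """Set overlap between two result lists (order ignored)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

def anonymous_consistency(runs, top_n=10):
    """runs: ranked result lists from repeated, identical queries made
    without a search history. Returns the mean pairwise Jaccard overlap
    of the top-N results; values well below 1.0 suggest inherent noise
    or A/B testing rather than personalization tied to a profile."""
    tops = [r[:top_n] for r in runs]
    pairs = list(combinations(tops, 2))
    return sum(jaccard(x, y) for x, y in pairs) / len(pairs) if pairs else 1.0
```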

Researchers want to understand how Google works because it’s probably the most important node (at least at the moment) in our online information ecosystem. Whether we’re interested in driving attention or revenue, what Google points us towards becomes more powerful. But the better we understand Google, the more likely we are to break it. Security through obscurity is a dreadful strategy, but I’m hard-pressed to offer a better answer to Google for how they can prevent their engine from being gamed.

Deep in Feuz, Fuller and Stadler’s paper is the sense that there’s something unheimlich about something as important an influencer as Google being as mercurial as it is. Personalization is disturbing to the extent that it separates us from the real, true, stable search results, the ur-results Google is withholding from us in the hopes of selling us ads more effectively… but even more disturbing is the idea that there’s no solid ground, no single set of best results Google could deliver, even if it wanted to.

February 19, 2011

A world roundup

Filed under: Developing world,Human Rights,long bookmark,Media — Ethan @ 1:35 pm

Some other stories I’m trying to follow, in addition to the news from Bahrain:

There’s very little news from Libya, as protesters take to the streets, especially in the eastern city of Benghazi. Libya tightly restricts press coverage, and the New York Times observes that while Libya hasn’t been able to prevent news from Tunisia and Egypt from inspiring protesters to take to the streets, it has been pretty effective at restricting news from Libya from reaching the global press. There are reports that Libya began blocking access to social media sites, and last evening, Libya disconnected from the internet.

[Graph: Arbor Networks data on Libyan internet traffic]

This graphic from Arbor Networks shows two sharp drops in Libyan internet traffic during the day, and a thorough shutoff at night. Going forward, we’re likely to see reporting via land line phones, and perhaps some computer users dialing into modem banks in Jordan and elsewhere, but the shutdown is likely to make what little reporting from the ground we’ve had even harder to get.

I argued previously that there’s great danger for protesters who are inspired to take to the streets in countries where the media isn’t paying attention – Libya is a special case of this scenario, as it’s extremely difficult for anyone to report, via traditional or social media. As Twitter user @EnoughGaddafi puts it, “For all those frustrated by reporting on #libya understand this. There is Zero indpt media on the ground. Nothing at all.” In the absence of coverage, it sounds like suppression of the protests has been quite brutal, with a death toll of at least two dozen, perhaps as high as 70.


My friend and former colleague Dewitt Clinton offers a decidedly geeky perspective on the Libyan unrest – a reminder that the bit.ly URL shortener (which I’ve been trying out the past few weeks) is located on a Libyan domain name:

In case it isn’t obvious, I’m still not a fan of URL shorteners. They’re a bug, not a feature.

And then things like this happen: http://goo.gl/fx3iA. Bye bye bit.ly? That’d be a lot of dead links.

I felt a great disturbance in the Web, as if millions of URLs suddenly cried out in terror and were suddenly silenced.

As far as I can tell, Libya Telecom (http://goo.gl/SsMAi) runs .ly. Willing to bet that they’d shut it down plenty fast if Gaddafi said to.

He’s not the first to observe that bit.ly’s domain is connected to a country that’s not exactly amenable to free speech. is.gd advertises itself as an “ethical URL shortener”, in part because they’re not vulnerable to shutdown by the Libyan government, which has previously shut down vb.ly, a “sex-positive” URL shortener. I suspect that if bit.ly has trouble, they’ll rapidly move everyone over to j.mp, which uses a domain name from the Northern Mariana Islands, which as of yet don’t appear to be experiencing street protests.

Despite the Libyan internet shutdown, bit.ly is still working. The site’s not hosted in Libya, and according to the CEO of the company that runs bit.ly, only two of the five root servers that control .ly are in the country. So while we should worry about people being massacred outside of the eyes of the media, at least we don’t have to change URL shorteners.


Given the dramatic developments in Tunisia, Egypt and now throughout the Arab world, it can be hard to remember the extent to which Wikileaks dominated online conversation late last year. While there was an interesting conversation about whether Wikileaks could be blamed or credited for protests in Tunisia, Wikileaks appears to be releasing documents in reaction to protests these days. Today’s dump of cables includes a wealth of dispatches from the US Embassy in Manama. It’s helpful, as it gives reporters another possible angle in analyzing the situation on the ground, and it’s an extremely media-savvy way to keep Wikileaks in the news, even if the cable releases are following, not driving, the news.


While Gabon and Sudan may be the first sub-Saharan African nations to hold protests inspired by events in Tunisia and Egypt, the implications of those successful revolts are being felt across the continent. Trevor Ncube, publisher of South Africa’s exemplary Mail and Guardian, and publisher of two opposition newspapers in his native Zimbabwe, has been reflecting on the possibility of a popular revolt against the Mugabe regime. In an interview two weeks back, Ncube argued that it was unlikely that Zimbabweans would follow in Egyptians’ footsteps, in part because the army was so closely identified with the ruling party, and not with the country as a whole. Today, Ncube continued along these lines, arguing that the long history of state-sanctioned violence against the general populace makes it harder for Zimbabweans to decide to take to the streets in protest. While he wasn’t directly addressing Bahrain or Libya, I can’t help but read these comments in that light – when does evidence that a government will use deadly force against dissent convince people to stay at home, rather than taking to the streets?

Committee to Protect Journalists points out that Zimbabwe’s state controlled media has been scrupulous about avoiding mention of protests in Egypt and Tunisia… except to criticize the US’s role in “interfering” with those protests…! The protests are a sensitive matter in Ethiopia as well, where a prominent government critic was taken in for questioning after writing about matters in Egypt and Tunisia.


If so much of the world weren’t on fire, Uganda’s elections would likely be a more high-profile affair. Yoweri Museveni, who came to power as a rebel leader in 1986, is seeking a fourth presidential term, challenged by his former physician, Kizza Besigye. Polling went relatively smoothly today, though controversy is possible when the results are announced this weekend. (No one expects Museveni to lose – the question is whether protests about the fairness of the elections will erupt into a serious challenge to his re-appointment.)

Again, if we weren’t all watching North Africa and the Gulf, I suspect this story about Uganda blocking certain keywords in SMS messages would have gotten more attention:

The Uganda Communications Commission Friday released 18 words and names that it has instructed mobile phone short message service (SMS) to flag if they are contained in any text message. They are then supposed to read the rest of the content of the message and if it is deemed to be “controversial or advanced to incite the public”, will be blocked.

The words are ‘Tunisia’, ‘Egypt’, ‘Ben Ali’, ‘Mubarak’, ‘dictator’, ‘teargas’, ‘kafu’ (it is dead), ‘yakabbadda’ (he/she cried long time ago), ’emuudu/emundu’ (gun), ‘gasiya’ (rubbish), ‘army/ police/UPDF’, ‘people power’, and ‘gun/bullet’.
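
As a purely illustrative aside, the mechanism the Commission describes is a two-stage filter: flag on keywords, then hold the flagged message for a human judgment about whether it should be blocked. Here’s a minimal sketch of that logic, assuming a simple substring match; the keyword list is abridged from the report above and the function names are my own.

```python
# Minimal sketch of the two-stage filtering described above: flag any SMS
# containing a watched term, then route flagged messages to human review.
# Keyword list abridged from the quoted report; names are illustrative only.

FLAGGED_TERMS = {"tunisia", "egypt", "ben ali", "mubarak", "dictator",
                 "teargas", "people power", "bullet"}

def flag_sms(message: str) -> bool:
    """Stage one: does the message contain any watched term?"""
    text = message.lower()
    return any(term in text for term in FLAGGED_TERMS)

def route_sms(message: str) -> str:
    """Stage two: flagged messages are held so a reviewer can decide
    whether the content is 'controversial' and should be blocked."""
    return "hold for review" if flag_sms(message) else "deliver"

# route_sms("Meet at the market at noon")      -> "deliver"
# route_sms("They used teargas on the crowd")  -> "hold for review"
```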

I got a fascinating tweet from a Ugandan friend, who reported that SMS was also being used in a viral campaign to support the President. “Another Rap. Vote Museveni. Send this 2 7 pple 2 receive 7000 worth of airtime” If the “another rap” part of that message is obscure to you, I point you to this wonderfully absurd video:

Museveni is reciting a pair of traditional Kinyankole rhymes – between the two, he announces, “You want another rap?” It’s been remixed into a catchy song that now serves as his campaign anthem. I suspect that his “re-election” will have more to do with crackdowns on the press and intimidation of the opposition than his musical skills.


And, in matters of a world on fire, let’s not forget the Ivory Coast, still locked in a battle between an elected president and one who won’t let go. Desperate to continue paying the soldiers who are keeping him in power, Laurent Gbagbo has nationalized the banks, many of which were in the process of shutting down or pulling out of the country. Not a good sign, but it might point to the beginning of the end for a standoff that’s seemed intractable up until now.

December 23, 2010

Wikileaks, analysis and speculative fiction

Filed under: ideas,long bookmark,Media — Ethan @ 1:25 pm

It’s a few days before Christmas, ten days before the end of 2010, and there’s the wonderful sense of deceleration as I flip through my browser tabs. The blizzard of email has slowed to a few, scant flakes, the roaring river of Twitter updates is a trickle. There’s time to read, and evidently, time to reflect and write at more length.

In the past couple of days, a couple of excellent essays – and some flawed, but interesting ones – have been posted reflecting on Wikileaks, Anonymous and the philosophical motives behind these projects. For me, they’re a reminder that the opinions offered the most rapidly aren’t always (aren’t often?) the most insightful. Wikileaks’s release of diplomatic cables and the actions taken by individuals, organizations, corporations and governments in response have implications for dozens of ongoing debates, about transparency, privacy, internet architecture and ownership, free speech, human rights. It’s not a surprise to me that very smart people have needed a while to think through what’s happened before offering their analysis.

Much of the best writing I’ve read has been either published on or linked to via The Atlantic. Alexis Madrigal is maintaining a great collection of links to commentary on different facets of the case, and he’s also edited a few of the most interesting pieces I’ve recently had the time to read.

One – which I’d put in the “interesting but flawed” pile – is Jaron Lanier’s “The Hazards of Nerd Supremacy: The Case of WikiLeaks“. As one respondent to the piece notes, it’s not really an essay about Wikileaks. Instead, Lanier connects the case to some of his recent thinking on the internet as a threat to individual creativity, expressed at length in his recent book, “You Are Not A Gadget“. (This review is a sympathetic overview of the book.) Lanier sees a philosophical stance implicit in Wikileaks’s actions and Assange’s motives – the belief that a huge accumulation of data leads towards understanding or truth. Openness by itself isn’t necessarily productive, he argues – it’s possible that openness leads to the breakdown of trust, in each other and in institutions.

In the most interesting part of the essay, Lanier connects Wikileaks to the early days of the Electronic Frontier Foundation, where very smart cryptographers and digital pioneers explored the idea that hackers could change history, leveling the playing field with a superior understanding of technology. He sees this perspective as overly romantic and tells us he made the decision to step away at that point. In turn, he’s critiquing current Wikileaks supporters, and especially the Anonymous DDoSers, as ineffective and potentially dangerous romantics, a critique that might be better received had he not slammed them as “nerd supremacists” in his title.

Lanier asked Madrigal to disallow comments on his essay, as he wanted people to engage with the text and not skip ahead to refutations or responses. Madrigal agreed, but evidently didn’t understand how to actually shut off commenting within the Atlantic’s publishing system – the story began accumulating comments, and Madrigal felt compelled to step in and shut down the thread. This, in turn, led to tough questioning by smart folks like Jay Rosen about the wisdom of disallowing comments on a controversial essay. I found Madrigal’s post explaining what happened, why he acted as he did – and the open comment thread that followed his explanation – to be one of the best examples of an online community manager engaging with criticism and looking for a solution going forwards.

Madrigal also gets my respect for featuring an excellent essay from Zeynep Tufekci responding to Lanier’s missive. (Hers is the observation that Lanier isn’t writing about Wikileaks, but about his own framing of issues about technology, privacy and individuality.) She offers a thorough critique of Lanier, pivoting on the idea that Lanier errs in blurring the line between individuals and organizations, especially governments, and ends up trying to protect the privacy of powerful institutions that don’t have the same rights as individuals, no matter what the Supreme Court may have said in Citizens United.

In a neat rhetorical move, Tufekci accuses Lanier of using Wikileaks to promote his own agenda before explaining that Wikileaks really tells us something important about the tension between public and private spaces online (which happens to be her agenda… :-) I share her concerns, and though I don’t come to the same conclusion she does – don’t fear Anonymous; fear corporate control over the Internet – it’s an excellent essay and a great summary of important concerns about the challenges of public discourse in private spaces.

The essay I found most useful in thinking through Wikileaks early in Cablegate was Aaron Bady’s “Julian Assange and the Computer Conspiracy“, which took a close read of a 2006 essay by Assange to elucidate a possible set of motivations behind the release of diplomatic cables. Bruce Sterling takes a very different approach – he uses his knowledge of geek culture and his gift for speculative fiction to map Julian Assange and Bradley Manning onto hacker archetypes and declares the situation surrounding Wikileaks inevitable and melancholy. It’s far from fair – we’re dealing with an Assange who’s a projection of Sterling’s understanding of hacker culture rather than a real individual – but it offers insights that are often easier to deliver in fiction.

Specifically, Sterling does a beautiful job of unpacking the lure of encryption, the romance of the cypherpunks, the tension of “secrets” that aren’t especially secret or exciting, the difference between leaks and journalism. Some of the commenters on the essay challenge Sterling’s understanding of the facts – I think that misses the larger point, which is that Sterling offers a picture of Assange and the logic behind Wikileaks that falls short as a work of biography, but is extremely helpful in understanding why he and his project have captured the attention of so many geeks.

Looking forward to more reflections on Wikileaks and its implications, and to the best part of the year – some extended reading about topics that have nothing at all to do with the internet… Happy holidays, everyone.

April 16, 2010

Too complex to report: Magnetar, Goldman Sachs and burning down the financial house

Filed under: long bookmark,Media — Ethan @ 6:24 pm

Two weeks ago, On The Media (my very favorite NPR show, which is saying something, as it has stiff competition) interviewed NPR economics correspondent Adam Davidson about the challenges of reporting on financial reform. Davidson is part of the Planet Money team, which produced the amazing, indispensable “Giant Pool of Money” episode for This American Life (the major competitor for my NPR affections), which explained the causes of the housing and financial crisis in terms non-experts can understand. He’s a very smart guy with a great track record of explaining tough financial stories. And he thinks that financial reform might be beyond what financial journalists can explain to their audiences:

We have used songs. We’ve used theater sketches. We’ve interviewed over a hundred economists, trying to find the ones who can just really nail an explanation in a clear, concise way. And, honestly, like we’ve picked off pieces, we’ve told little elements of it, but the big “here is what regulation is all about and we’re going to tell you how it works” we have not been able to crack.

I feel like I have seen the edge of what journalism can accomplish, of what journalism is capable of, and the bulk of financial regulatory reform is on the other side of that edge.

It’s possible that the combination of a brilliant piece of financial reporting and an important breaking story might pull Davidson back from the edge.

The breaking story is the SEC’s decision to charge Goldman Sachs with defrauding investors by failing to disclose the influence a hedge fund had in assembling a collateralized debt obligation. (Eyes glazed over yet? Now you understand Davidson’s problem.) But before you try to unpack that story, allow me to recommend another one.

Jesse Eisinger and Jake Bernstein of ProPublica put together a fascinating story called “The Magnetar Trade” about the role of a savvy – and likely unethical – hedge fund manager in inflating and profiting from the real estate bubble. The story’s available on ProPublica’s website, and it’s been turned into a surprisingly compelling radio story – “Eat My Shorts” – by This American Life’s Alex Blumberg.

Here’s the heart of the story:

A hedge fund called Magnetar made a set of very strange investments in 2006 and 2007. They bought into collateralized debt obligations, pools of mortgage-backed bonds. They bought the riskiest pieces of these investments – the equity tranche – for quite small amounts of money. But their willingness to take the riskiest pieces of these investments made it far easier for investment banks to sell the rest of these CDOs for very large sums of money. So far this is just an odd, high-risk strategy – if these CDOs succeeded, Magnetar would get high returns on what had been very modest investments… but the bottom was starting to fall out of the housing market in 2006, and it seemed more likely that these CDOs would default.

Magnetar did something very clever and, in my opinion, somewhat unethical. They bought credit default swaps – insurance, essentially – on these CDOs. The insurance isn’t all that expensive to buy, and it pays the face value of the asset if the asset proves to be worthless. But Magnetar didn’t “insure” the equity tranche positions they took – they bought insurance on the much more expensive, investment grade tranches of the CDOs. In effect, they found a way to convince people to build mansions in a dangerous neighborhood by building a shack there, then took out fire insurance on those very expensive homes, which they didn’t own.

To make sure the fire started – i.e., the CDOs collapsed, making the equity tranche worthless, but allowing the credit default swaps to pay off – Magnetar allegedly pressured the bankers who assembled these CDOs to include very risky assets in them (bonds made up of bad mortgages.) This is something you’d never do if you wanted the CDOs to succeed… and it’s precisely what you’d do if you’d bet on them failing. In other words, Magnetar didn’t just buy fire insurance on someone else’s mansion – they filled their shack with oily rags, knowing it would increase the chance of the neighborhood catching fire. (In my opinion, we’re now deep into the realm of unethical behavior, but that’s my opinion… and may not be regulators’ opinions.)
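
To see why the asymmetry pays, here’s a toy payoff calculation. Every number below is invented for illustration – nothing here reflects Magnetar’s actual positions, premiums or returns – but it shows how a small equity stake plus credit default swaps on the senior tranches loses a little if the CDO performs and wins enormously if it collapses.

```python
# Toy payoff arithmetic with invented numbers (not Magnetar's actual positions):
# a small equity-tranche stake plus CDS protection on the senior tranches.

equity_stake     = 10_000_000    # cheap, riskiest slice of the CDO
cds_notional     = 200_000_000   # face value of senior tranches "insured"
cds_premium      = 0.02          # hypothetical annual premium: 2% of notional
years_of_premium = 2

cost_of_protection = cds_notional * cds_premium * years_of_premium   # 8,000,000

# If the CDO performs: the equity tranche pays a high yield, the premiums are lost.
equity_yield = 0.20              # hypothetical 20% annual return on the equity slice
payoff_if_cdo_succeeds = equity_stake * equity_yield * years_of_premium - cost_of_protection

# If the CDO collapses: the equity stake is wiped out, but the CDS pays face value.
payoff_if_cdo_fails = cds_notional - cost_of_protection - equity_stake

print(f"CDO succeeds: {payoff_if_cdo_succeeds:+,.0f}")   # -4,000,000
print(f"CDO fails:    {payoff_if_cdo_fails:+,.0f}")      # +182,000,000
```

A modest loss if the housing market holds, and a payoff many times the equity stake if it doesn’t – which is why pushing riskier assets into the CDO only made the bet better.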

Magnetar made enormous amounts of money. Investment banks that built the CDOs made lots in fees for putting together the deals… though they lost lots investing in their own bad deals. And pension funds and other large institutional investors often lost their shirts on CDOs that had been packed with toxic crap by Magnetar, which was betting against those very investments. Those losses affect real people – people with state pensions – and the investment bank losses are now being paid for by taxpayer bailouts.

The amazing thing is that it’s not clear Magnetar broke any laws. They were allowed to hedge their CDO positions with credit default swaps… even if the hedge position wasn’t an exact hedge. (Remember, they didn’t insure the crap they’d bought at the low end of a CDO – they bought insurance on the good stuff at the top of a CDO.) Eisinger and Bernstein suggest that what investment banks did, on the other hand, was questionable both in terms of ethics and securities law. They built new securities – CDOs – knowing that a hedge fund had pressured them to make these securities more risky… and that the hedge fund was making a bet that these securities would fail. They then sold the top tranches of these securities as triple-A investments to credulous investors.

And that’s what the SEC is charging Goldman Sachs with. The charges relate not to trades made by Magnetar but to trades made by Paulson &amp; Co., though the idea’s the same. The SEC alleges that Paulson told Goldman to build crappy CDOs and sell them to credulous investors, all while Goldman knew that Paulson was betting against the success of the CDOs using credit default swaps. Needless to say, Goldman denies this – and if it’s demonstrated that they did in fact do it, they’ll allege that it was the action of a single bad VP, Fabrice Tourre – just as Magnetar denies doing anything untoward.

So here’s my question: does this change the landscape around financial reform? Magnetar and Goldman/Paulson demonstrate that unregulated derivatives like CDOs and CDSs can be “financial weapons of mass destruction“, particularly when wielded by banks that are apparently willing to screw over one group of investors to benefit another client, a hedge fund.

How the @#*%^$&#! can the GOP and the financial industry argue against strengthened financial regulation? President Obama has said that he won’t sign legislation that doesn’t strengthen the rules around derivatives like CDSs. That sure sounds like a good idea, when regulators can’t currently agree whether screwing over investors, as Goldman is accused of doing, is actually illegal.

The answer, of course, is that the argument won’t be about this complex story – it will be about a straw man, the assertion that financial regulation will consist of a policy of government bailouts extending from here to the horizon. And if folks like Adam Davidson and his colleagues can’t explain these stories in language people understand, the narrative that Obama wants to bail out the banks may well be more compelling than the complex story of hedged bets and short trades.

In the meantime, everyone’s searching for metaphors. Mark Gimein, writing in Slate, explains the Goldman/Paulson deal in terms of a used car lot filled with hand-picked lemons. The ProPublica site uses a tower of champagne glasses to explain the different tranches of CDOs. You’ve read my lame arson shack metaphor. Thankfully, the This American Life folks are professionals. They’ve explained it in terms of a Broadway showtune:

Bet Against the American Dream from Alexander Hotz on Vimeo.

It may be as incomprehensible as anything written in any of the excellent stories referenced here… but damn, it’s funny and something of an earworm.

March 4, 2010

Jonathan Stray on original reporting: imaginary abundance

Filed under: long bookmark,Media — Ethan @ 12:49 am

This evening, Google News tells me that I have my choice of 5,053 articles on conflicts between Congressional Republicans and Democrats over healthcare reform. (Oh goody.) How many of those stories contain original reporting? In a world with thousands of professional media outlets at our fingertips – as well as hundreds of thousands of amateurs – how much original material do we really have access to?

Pew’s annual State of the News Media report made one pass at answering this question in their 2006 edition. They did an exhaustive study across media on May 11, 2005 and concluded that, of the 14,000 stories posted on Google News that day, only 24 unique “news events” were represented. Here’s the quote: “The level of repetition in the 24-hour news cycle is one of the most striking features one finds in examining a day of news. Google News, for instance, offers consumers access to some 14,000 stories from its front page, yet on this day they were actually accounts of the same 24 news events. On cable, just half of the stories monitored across the 12 hours were new. The concept of news cycle is not really obsolete, and the notion of news 24-7 is something of an exaggeration.”

It’s a striking pair of numbers – 14,000 stories, but only 24 actual news events? – but I suspect it’s a bit deceptive. Grab snapshots of an unmodified Google News page today and you’re only going to get a few dozen story-clusters, each containing hundreds or thousands of similar stories. There are many, many more story clusters accessible within Google News… you’re just not going to get them unless you read beyond the front page or customize your front page to track different topics. The 24 news events figure is an artifact of the front page model, the decision by Google News to present a certain subset of possible stories during the day. It’s a relevant number, because it means that the average user probably won’t see stories outside of that narrow set, but it’s not a fair commentary on the depth of coverage accessible through the site.

What’s interesting in those numbers is the 14,000/24 ratio, implying 583 versions of each story. (That ratio is probably much higher today, with Google News following more news sources.) Jonathan Stray did a very smart analysis for Nieman Journalism Lab, looking at a universe of 800 stories about the alleged involvement of two Chinese universities in hacking attacks on Google. His findings were striking:

800 stories = 121 non-identical stories = 13 stories with original quotes = 7 fully independent stories

Stray coded the 121 non-identical stories that had been clustered together by Google (the clustering algorithms are good, but not perfect – nine stories were unrelated to the specific case of these two universities) and looked for the appearance of novel quotes, which he considered the “bare minimum” standard for original reporting. (Interesting – it’s the same logic that led Jure Leskovec to track quotes as a way of tracing media flow in MemeTracker.) Only 13 of the stories contained quotes not taken from another media source’s report.
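
For what it’s worth, that “novel quote” test is easy to approximate in code. Here’s a minimal sketch of the idea – the regex, the length threshold and the exact-match comparison are my own simplifications, not Stray’s actual method, which involved hand-coding the stories.

```python
import re

# Quoted passages of at least 20 characters; real news copy often uses curly
# quotes, so this straight-quote pattern is a deliberate simplification.
QUOTE_RE = re.compile(r'"([^"]{20,})"')

def stories_with_novel_quotes(stories):
    """stories: list of article texts in publication order. A story counts as
    containing original reporting (by the 'bare minimum' standard) only if it
    includes at least one quote not seen in any earlier story."""
    seen_quotes = set()
    original = []
    for i, text in enumerate(stories):
        quotes = {q.strip().lower() for q in QUOTE_RE.findall(text)}
        if quotes - seen_quotes:      # at least one quote is new to the corpus
            original.append(i)
        seen_quotes |= quotes
    return original
```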

The essence of Stray’s piece is the question, “What were those other 100 reporters doing?” The answer, unfortunately, is that they were rewriting everyone else’s stories. Given the current shortfalls in American journalism, this seems like an almost criminal waste of time. Jeff Jarvis offered the advice, “Cover what you do best, link to the rest”, and Stray’s finding suggests that many outlets haven’t yet embraced this particular piece of wisdom.

I was more struck by Stray’s closing point – that even the mighty New York Times got the story wrong. Two schools are mentioned in the Times’s report, the well-known Shanghai Jiaotong University and the obscure Lanxiang Vocational School. As the Qilu Evening News reported, Lanxiang Vocational School primarily teaches motor vehicle repair and certifies operators of earth-moving equipment – it’s an extremely unlikely hotbed of hacking activity. (Though it’s possible that freelance, nationalist hackers were based out of Lanxiang’s computer lab, Qilu’s account casts serious doubt on reports that a Ukrainian professor was teaching specific hacking courses… in part because no serious computer training is offered at the school.) Perhaps it’s a bit much to expect the Times – though it does have a Shanghai bureau – to be reading a Chinese language newspaper… but as Stray points out, the story had been generously translated by the indispensable Roland Soong and was available on his prominent English-language site.

I’d love to see Google remove or deprioritize those big numbers that run under every story cluster. Yep, they’re useful for visualizing media attention – Newsmap does a beautiful job of portraying what stories Google knows about in visual form using these cluster numbers. But they give an illusion of abundance where there’s often scarcity. If we knew there were 13 stories, not 800, on the Chinese universities and the Google hacks, perhaps we’d be demanding more access to original reporting. Maybe we’d ask for translation and inclusion of journalism in other languages in these clusters. Maybe we’d become more acutely aware that – in the case of this particular story – the original reporting was done almost exclusively by large newspapers, entities whose ability to do this reporting is increasingly imperiled.

March 1, 2010

ChatRoulette survey (long bookmark)

Filed under: Geekery,long bookmark,Media — Ethan @ 3:46 pm

ChatRoulette: An Initial Survey

The fine folks at the Web Ecology Project pride themselves on researching web trends that are just starting to catch the attention of the media and other researchers. As such, we can count on them not only to offer insights into the online, randomized chat site ChatRoulette, but also into derivative works like CatRoulette. (Yes, I have considered surfing the site with Drew in front of the camera. Rachel told me not to.)

The survey – admittedly a first pass – has some big predictions from a fairly small data set. Alex Leavitt, Tim Hwang and friends sampled 201 sessions on the system, taking snapshots and logging off to see their potential correspondents. They also conducted 30 interviews, though they were only able to talk to users who didn’t immediately close the connection, which may have skewed their sample set away from people using the system to find explicit content. (Someday, we’ll see a methodology section in a paper that debates the merits of logging onto a system while naked to get a more representative sample…)

The big takeaways: Yes, the folks using CR are male, 18-24. While some of them are looking for online sexual encounters, lots more are simply curious about the system or looking to chat. The authors frame the space as a “probabilistic online community”, with radically different dynamics than a traditional social network as it “mediates the encounters between its users, specifically by eliminating lasting connections in the framework of the platform”. It’s impossible within this framework to maintain traditional “friend” relationships – instead, we’d expect to see people creating online personas by wearing creative costumes/masks and developing those identities outside of the system, on blogs/tumblr/message boards. They further suggest that the fact that the majority of people on the site don’t appear to be seeking sexual imagery will lead towards a decline in explicit content. (That’s unclear to me – it’s quite possible that people not overtly seeking sexual content – i.e., not pointing their webcams at their genitals – are still curious about what sort of explicit content they might come across as they switch cam partners.)

The paper includes my current candidate for “most enjoyable graph in a social science paper, 2010”:

[Graph from the Web Ecology Project’s ChatRoulette survey]
