Charles Mann offers a big story in the latest issue of the Atlantic. It’s 11,000 words, and it’s based on an audacious premise: the end of energy scarcity. The peg for the story is Japan’s ongoing research on methane hydrate, an amalgam of natural gas trapped in water ice that occurs in oceans around the world. If methane hydrate can be harvested, Mann tells us, the global supply of hydrocarbon fuels is virtually unlimited. This, he argues, would have massive geopolitical and strategic implications, as the history of the twentieth century can be read in part through the lens of wealthy nations without oil seeking the black stuff in less developed lands. New forms of power might center on who can extract ice that burns like natural gas.
The bulk of the Mann piece is a debate over “peak oil”, an idea put forward by M. King Hubbert in the 1950s, when he correctly predicted that US oil production would slow. Mann’s piece pits Hubbert against Vincent E. McKelvey, his boss at the US Geological Survey for years, who argued that energy supplies are virtually inexhaustible, though the costs to extract them increase as we use up the “easy” oil ready to burst above the surface. While Hubbert’s predictions about US oil production were initially right, Mann argues, the rise of techniques like horizontal drilling and hydrofracking means McKelvey is right in the long run. If we need methane hydrate – and Japan does, as it lacks other hydrocarbon resources – we’ll find a way to pay for it. The argument only looks like a contradiction, Mann argues, because it’s an argument between geologists on one side and social scientists on the other, and from the social scientists’ point of view, so long as there’s economic demand for hydrocarbons and the means to extract them, we should expect these fuels to keep flowing.
There’s something very attractive about Mann’s argument. He writes as an insider who’s going to let you in on what the smart guys know that poor, dumb saps like me would never imagine. It’s a tone you hear a lot in Washington policy circles, a realpolitik view of the world that suggests you can entertain yourself with solar panels as long as you’d like, but the adults in the room are deciding who gets invaded for their petrochemical wealth and whose civilizations will collapse into a new Medieval period.
Fortunately, there are some smart responses to Mann’s article, some vitriolic, some patient and thoughtful. (To the Atlantic’s credit, they published both Mann’s piece and Chris Nelder’s excellent response.) The essence of the responses is this: yes, there’s a whole lot of methane trapped in ice. Yes, if we could extract it, we’d have a whole lot of fuel that burns with half the carbon emissions of coal. But it’s unclear we can ever extract this at an affordable cost. (Canada just dropped out of the methane hydrate race, perhaps because they see extracting oil from tar sands as a more plausible source of hydrocarbons.)
Martin Feuz, Matthew Fuller and Felix Stalder have a very clever paper in a recent edition of First Monday, titled “Personal Web searching in the age of semantic capitalism: Diagnosing the mechanisms of personalization.” In their study, they create three artificial search profiles on Google based on the topics of interest to three different philosophers (Foucault, Nietzsche and Kant, using terms from the indices of their books) and compare the results these personalized profiles receive to the results an “anonymous” profile – i.e., one without Google’s Web History service turned on – receives.
They see a very high frequency of personalization – personalized results appear in response to 50% of search queries for some of their profiles – and a high intensity of personalization – in some cases, 64% of results differ in content or rank from those served to an anonymous profile. While there’s apparently a lot of personalization going on, and personalized results emerge early in the training process, the authors don’t see the search algorithms reaching deep into the “long tail” of content. When personalized results differ from anonymous search results, 37% of the novel results can be found on the second page of anonymous results, while only 7% of novel results come from between results 100 and 1000, and 13% from beyond result 1000. Finally, they are able to demonstrate that personalization is probably not based solely on the content an individual has searched for in the past – they see ample evidence that social networking content is being heavily personalized for Nietzsche based only on his searches for power, morality and will, for instance.
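The paper’s core measurements can be sketched in a few lines of code. This is my own hypothetical reconstruction, not the authors’ actual instrumentation – the result lists, the top-10 cutoff and the function name are all assumptions:

```python
def personalization_stats(personalized, anonymous_full, top_n=10):
    """Compare a personalized top-N result list against a deep 'anonymous'
    ranking. Returns (share of results that differ in content or rank,
    a map from each novel result to its depth in the anonymous ranking)."""
    personal_top = personalized[:top_n]
    anon_top = anonymous_full[:top_n]
    # A slot counts as "personalized" if it differs in content or rank.
    changed = sum(1 for i, url in enumerate(personal_top)
                  if i >= len(anon_top) or anon_top[i] != url)
    # For results absent from the anonymous top N, find how deep in the
    # anonymous ranking they came from (None = beyond the crawled depth).
    novel_depths = {}
    for url in personal_top:
        if url not in anon_top:
            depth = (anonymous_full.index(url) + 1
                     if url in anonymous_full else None)
            novel_depths[url] = depth
    return changed / top_n, novel_depths
```

Run over every query in a training session, the first number gives the share of personalized results; the depth map shows how far into the anonymous “long tail” each novel result was pulled from.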
That last example gives a bit of the flavor of the paper – it’s both a serious, methodologically defensible piece of research and a clever prank, demonstrating that Google will try to assign Immanuel Kant to a psycho/demographic group and target content based on those assumptions. This playful tone is accompanied by a willful naïvety that’s slightly frustrating – they start by taking Google’s descriptions of the effects of personalization at face value, then offer the surprise that the hypotheses they derive from Google’s PR are invalidated. It’s not especially surprising for the reader, however, to discover that Google’s personalization is at least as much about helping advertisers target audiences as it is about helping users find the best possible content. It’s not a surprise to the authors, either – the term “semantic capitalism”, credited to Christophe Bruno, implies that we’ve entered a world where words have market prices, with potentially different values to advertisers than to audiences.
While I find the levels of personalization the authors detect fascinating, I wonder whether their experiment correctly isolates the factors involved in personalization. Eli Pariser, in his talk last year at PDF and, presumably, in his forthcoming book on the power and dangers of personalization, refers to 57 factors that allow Google to personalize results for users who are not using Web History (the “anonymous” users in this experiment). The authors control for a key variable, conducting all searches from IP addresses in Central London. It’s unclear, though, whether Google is making other extrapolations – perhaps users who execute lots of searches at 3pm are more likely to be middle-aged businessmen than teenage girls, and results are targeted accordingly? I’d be very interested to see the authors check whether their anonymous search results are identical or nearly so – if not, there may be a great deal more personalization going on than they are accounting for, driven by variables outside the experiment’s parameters.
I was struck by the apparent discontinuities in how often personalized search results appeared for the three different profiles. In one training sequence, there’s a sharp spike in a single session, where personalized results appear three times as often as in other sessions. In another, there are two smaller spikes, and in the third, a spike lasting three sessions. With no easy way to explain what’s causing these spikes, it’s possible to speculate that Google’s algorithms for personalization are not just opaque and complex, but adaptive and changing. While the authors are experimenting with Google, it’s reasonable to assume that Google is experimenting with them, changing levels of personalization to see whether Google is able to achieve its desired result: clicks on ads.
I found the authors’ findings about the long tail particularly fascinating, though I’d frame them slightly differently than they do. They see the fact that most personalized results (results that differ between a query from a profiled and an anonymous user) that appear in the top 10 come from the top 100 results delivered to anonymous users as evidence that Google’s personalization is pretty shallow. I see the finding that 13% of personalized results in the top 10 come from outside of the top 1000 as downright remarkable – I’d thought that Google’s algorithm, both in terms of page rank and term relevancy, would resist such large reshufflings of the deck, bringing up pages considered irrelevant for an “anonymous” user to prominence for a profiled user. I see that finding as quite encouraging – even buried deep in the slag heap of low pagerank and low relevancy, personalization might occasionally bring a long-tail web page to the surface.
Of course, there’s another explanation: again, Google’s testing the experimenters as they’re testing the system. Google’s long said that they present different results to users as a way of testing result relevance – if a long-tail page appears in results and is widely clicked, perhaps it’s time to weight it more heavily or to tweak the algorithms that buried it in the first place.
This is the core problem of studying a system like Google. As the authors acknowledge, “How can we study a distributed machinery that is both wilfully opaque and highly dynamic? One which reacts to being studied and takes active steps to prevent such studies from being conducted on the automated, large-scale level required?” That second question is a reference to a methodological challenge the authors had – it’s deeply atypical behavior to click on every possible results page for a search query, which the authors needed to do, and Google periodically blocked their IPs for suspicion that they were bots attempting to scrape or game the search engine.
Google is not willfully opaque just out of spite or a desire to protect its secrets from Microsoft or other search engine builders. The sort of work the authors are conducting is exactly the sort of work search engine “optimizers” do in attempting to help their clients achieve a higher ranking in Google’s results. Were Google’s methods of personalization easy to understand, we would expect SEO folk to take advantage of their newfound knowledge, as we’d expect them to use any knowledge about Google’s ranking algorithms. The more transparent those algorithms are, generally speaking, the more likely they are to be gamed, and the more gaming occurs, the less useful Google is for most users.
I wonder if there’s a provocative hypothesis the authors haven’t considered in analyzing the behaviors they saw – Google offers different results with a high frequency, in part because they’re trying to obfuscate their algorithms. The faster you poll the engine, the more variability you get, making it harder to profile the engine’s behavior. We can discard this hypothesis if the authors checked results of their anonymous searches against one another and got highly similar results – if not, then it’s possible that some of the hidden variables Eli Pariser talks about are in play… or that there’s an inherent amount of noise in the system, either for purposes of obfuscation or for allowing Google to try A/B tests with live users.
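The check I’m suggesting is cheap to run. Here’s a sketch, assuming the authors keep several result lists from identical anonymous queries (the data and function name are hypothetical, not drawn from the paper):

```python
from itertools import combinations

def result_stability(runs):
    """Pairwise Jaccard similarity between result sets returned by
    repeated, identical 'anonymous' queries. Values well below 1.0
    would suggest noise, A/B testing or hidden personalization."""
    sims = []
    for a, b in combinations(runs, 2):
        sa, sb = set(a), set(b)
        sims.append(len(sa & sb) / len(sa | sb))
    return sims
```

If every pair comes back at or near 1.0, the anonymous baseline is solid; if not, some of Pariser’s 57 hidden factors – or deliberate obfuscation – may be in play.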
Researchers want to understand how Google works because it’s probably the most important node (at least at the moment) in our online information ecosystem. Whether we’re interested in driving attention or revenue, what Google points us towards becomes more powerful. But the better we understand Google, the more likely we are to break it. Security through obscurity is a dreadful strategy, but I’m hard-pressed to offer a better answer to Google for how they can prevent their engine from being gamed.
Deep in Feuz, Fuller and Stalder’s paper is the sense that there’s something unheimlich about the idea that an influencer as important as Google could be as mercurial as it is. Personalization is disturbing to the extent that it separates us from the real, true, stable search results, the ur-results Google is withholding from us in the hopes of selling us ads more effectively… but even more disturbing is the idea that there’s no solid ground, no single set of best results Google could deliver, even if it wanted to.
Some other stories I’m trying to follow, in addition to the news from Bahrain:
There’s very little news from Libya, as protesters take to the streets, especially in the eastern city of Benghazi. Libya tightly restricts press coverage, and the New York Times observes that while Libya hasn’t been able to prevent news from Tunisia and Egypt from inspiring protesters to take to the streets, it has been pretty effective at restricting news from Libya from reaching the global press. There are reports that Libya began blocking access to social media sites, and last evening, Libya disconnected from the internet.
This graphic from Arbor Networks shows two sharp drops in Libyan internet traffic during the day, and a thorough shutoff at night. Going forward, we’re likely to see reporting via landline phones, and perhaps some computer users dialing into modem banks in Jordan and elsewhere, but the shutdown is likely to make what little reporting from the ground we’ve had even harder to get.
I argued previously that there’s great danger for protesters who are inspired to take to the streets in countries where the media isn’t paying attention – Libya is a special case of this scenario, as it’s extremely difficult for anyone to report, via traditional or social media. As Twitter user @EnoughGaddafi puts it, “For all those frustrated by reporting on #libya understand this. There is Zero indpt media on the ground. Nothing at all.” In the absence of coverage, it sounds like suppression of the protests has been quite brutal, with a death toll of at least two dozen, perhaps as high as 70.
My friend and former colleague Dewitt Clinton offers a decidedly geeky perspective on the Libyan unrest – a reminder that the bit.ly URL shortener (which I’ve been trying out the past few weeks) is located on a Libyan domain name:
In case it isn’t obvious, I’m still not a fan of URL shorteners. They’re a bug, not a feature.
And then things like this happen: http://goo.gl/fx3iA. Bye bye bit.ly? That’d be a lot of dead links.
I felt a great disturbance in the Web, as if millions of URLs suddenly cried out in terror and were suddenly silenced.
As far as I can tell, Libya Telecom (http://goo.gl/SsMAi) runs .ly. Willing to bet that they’d shut it down plenty fast if Gaddafi said to.
He’s not the first to observe that bit.ly’s domain is connected to a country that’s not exactly amenable to free speech. is.gd advertises itself as an “ethical URL shortener”, in part because they’re not vulnerable to shutdown by the Libyan government, which has previously shut down vb.ly, a “sex-positive” URL shortener. I suspect that if bit.ly has trouble, they’ll rapidly move everyone over to j.mp, which uses a domain name from the Northern Mariana Islands, which as of yet don’t appear to be experiencing street protests.
Despite the Libyan internet shutdown, bit.ly is still working. The site’s not hosted in Libya, and according to the CEO of the company that runs bit.ly, only two of the five root servers that control .ly are in the country. So while we should worry about people being massacred outside of the eyes of the media, at least we don’t have to change URL shorteners.
Given the dramatic developments in Tunisia, Egypt and now throughout the Arab world, it can be hard to remember the extent to which Wikileaks dominated online conversation late last year. While there was an interesting conversation about whether Wikileaks could be blamed or credited for protests in Tunisia, Wikileaks appears to be releasing documents in reaction to protests these days. Today’s dump of cables includes a wealth of dispatches from the US Embassy in Manama. It’s helpful, as it gives reporters another possible angle in analyzing the situation on the ground, and an extremely media-savvy way to keep Wikileaks in the news, even if the cable releases are following, not driving, the news.
While Gabon and Sudan may be the first sub-Saharan African nations to hold protests inspired by events in Tunisia and Egypt, the implications of those successful revolts are being felt across the continent. Trevor Ncube, publisher of South Africa’s exemplary Mail and Guardian, and publisher of two opposition newspapers in his native Zimbabwe, has been reflecting on the possibility of a popular revolt against the Mugabe regime. In an interview two weeks back, Ncube argued that it was unlikely that Zimbabweans would follow in Egyptians’ footsteps, in part because the army was so closely identified with the ruling party, and not with the country as a whole. Today, Ncube continued along these lines, arguing that the long history of state-sanctioned violence against the general populace makes it harder for Zimbabweans to decide to take to the streets in protest. While he wasn’t directly addressing Bahrain or Libya, I can’t help but read these comments in that light – when does evidence that a government will use deadly force against dissent convince people to stay at home, rather than taking to the streets?
The Committee to Protect Journalists points out that Zimbabwe’s state-controlled media has been scrupulous about avoiding mention of protests in Egypt and Tunisia… except to criticize the US’s role in “interfering” with those protests. The protests are a sensitive matter in Ethiopia as well, where a prominent government critic was taken in for questioning after writing about events in Egypt and Tunisia.
If so much of the world weren’t on fire, Uganda’s elections would likely be a more high-profile affair. Yoweri Museveni, who came to power as a rebel leader in 1986, is seeking a fourth presidential term, challenged by his former physician, Kizza Besigye. Polling went relatively smoothly today, though controversy is possible when the results are announced this weekend. (No one expects Museveni to lose – the question is whether protests about the fairness of the elections will erupt into a serious challenge to his re-appointment.)
Again, if we weren’t all watching North Africa and the Gulf, I suspect this story about Uganda blocking certain keywords in SMS messages would have gotten more attention:
The Uganda Communications Commission Friday released 18 words and names that it has instructed mobile phone short message service (SMS) to flag if they are contained in any text message. They are then supposed to read the rest of the content of the message and if it is deemed to be “controversial or advanced to incite the public”, will be blocked.
The words are ‘Tunisia’, ‘Egypt’, ‘Ben Ali’, ‘Mubarak’, ‘dictator’, ‘teargas’, ‘kafu’ (it is dead), ‘yakabbadda’ (he/she cried long time ago), ‘emuudu/emundu’ (gun), ‘gasiya’ (rubbish), ‘army/ police/UPDF’, ‘people power’, and ‘gun/bullet’.
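Mechanically, a filter like the one described is trivial to build – which is part of what makes it chilling. Here’s a toy sketch using a subset of the reported terms; the matching logic is my guess, since the Commission hasn’t published its implementation:

```python
# A subset of the terms reported above; the real list has 18 entries.
FLAGGED_TERMS = {"tunisia", "egypt", "ben ali", "mubarak", "dictator",
                 "teargas", "people power", "bullet"}

def flag_sms(message):
    """True if the message contains a watched term and, per the policy
    described above, should be read by a human before delivery."""
    text = message.lower()
    return any(term in text for term in FLAGGED_TERMS)
```

A carrier can run a check like this on every message in real time; the hard (and labor-intensive) part is the human review step the policy calls for.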
I got a fascinating tweet from a Ugandan friend, who reported that SMS was also being used in a viral campaign to support the President. “Another Rap. Vote Museveni. Send this 2 7 pple 2 receive 7000 worth of airtime” If the “another rap” part of that message is obscure to you, I point you to this wonderfully absurd video:
Museveni is reciting a pair of traditional Kinyankole rhymes – between the two, he announces, “You want another rap?” It’s been remixed into a catchy song that now serves as his campaign anthem. I suspect that his “re-election” will have more to do with crackdowns on the press and intimidation of the opposition than his musical skills.
And, in matters of a world on fire, let’s not forget the Ivory Coast, still locked in a battle between an elected president and one who won’t let go. Desperate to continue paying the soldiers who are keeping him in power, Laurent Gbagbo has nationalized the banks, many of which were in the process of shutting down or pulling out of the country. Not a good sign, but it might point to the beginning of the end for a standoff that’s seemed intractable up until now.
It’s a few days before Christmas, ten days before the end of 2010, and there’s the wonderful sense of deceleration as I flip through my browser tabs. The blizzard of email has slowed to a few, scant flakes, the roaring river of Twitter updates is a trickle. There’s time to read, and evidently, time to reflect and write at more length.
In the past couple of days, a couple of excellent essays – and some flawed, but interesting ones – have been posted reflecting on Wikileaks, Anonymous and the philosophical motives behind these projects. For me, they’re a reminder that the opinions offered the most rapidly aren’t always (aren’t often?) the most insightful. Wikileaks’s release of diplomatic cables and the actions taken by individuals, organizations, corporations and governments in response have implications for dozens of ongoing debates, about transparency, privacy, internet architecture and ownership, free speech, human rights. It’s not a surprise to me that very smart people have needed a while to think through what’s happened before offering their analysis.
Much of the best writing I’ve read has been either published on or linked to via The Atlantic. Alexis Madrigal is maintaining a great collection of links to commentary on different facets of the case, and he’s also edited a few of the most interesting pieces I’ve recently had the time to read.
One – which I’d put in the “interesting but flawed” pile – is Jaron Lanier’s “The Hazards of Nerd Supremacy: The Case of WikiLeaks”. As one respondent to the piece notes, it’s not really an essay about Wikileaks. Instead, Lanier connects Wikileaks to some of his recent thinking on the internet as a threat to individual creativity, expressed at length in his recent book, “You Are Not A Gadget”. (This review is a sympathetic overview of the book.) Lanier sees a philosophical stance implicit in Wikileaks’s actions and Assange’s motives – the belief that a huge accumulation of data leads towards understanding or truth. Openness by itself isn’t necessarily productive, he argues – it’s possible that openness leads to the breakdown of trust, in each other and in institutions.
In the most interesting part of the essay, Lanier connects Wikileaks to the early days of the Electronic Frontier Foundation, where very smart cryptographers and digital pioneers explored the idea that hackers could change history, leveling the playing field with a superior understanding of technology. He sees this perspective as overly romantic and tells us he made the decision to step away at that point. In turn, he critiques current Wikileaks supporters, and especially the Anonymous DDoSers, as ineffective and potentially dangerous romantics – a critique that might be better received had he not slammed them as “nerd supremacists” in his title.
Lanier asked Madrigal to disallow comments on his essay, as he wanted people to engage with the text and not skip ahead to refutations or responses. Madrigal agreed, but evidently didn’t understand how to actually shut off commenting within the Atlantic’s publishing system – the story began accumulating comments, and Madrigal felt compelled to step in and shut down the thread. This, in turn, led to tough questioning by smart folks like Jay Rosen about the wisdom of disallowing comments on a controversial essay. I found Madrigal’s post explaining what happened, why he acted as he did – and the open comment thread that followed his explanation – to be one of the best examples of an online community manager engaging with criticism and looking for a solution going forwards.
Madrigal also gets my respect for featuring an excellent essay from Zeynep Tufekci responding to Lanier’s missive. (Hers is the observation that Lanier isn’t writing about Wikileaks, but about his own framing of issues about technology, privacy and individuality.) She offers a thorough critique of Lanier, pivoting on the idea that Lanier errs in blurring the line between individuals and organizations, especially governments, and ends up trying to protect the privacy of powerful institutions that don’t have the same rights as individuals, no matter what the Supreme Court may have said in Citizens United.
In a neat rhetorical move, Tufekci accuses Lanier of using Wikileaks to promote his own agenda before explaining that Wikileaks really tells us something important about the tension between public and private spaces online (which happens to be her agenda… :-) I share her concerns, and though I don’t come to the same conclusion she does – don’t fear Anonymous; fear corporate control over the Internet – it’s an excellent essay and a great summary of important concerns about the challenges of public discourse in private spaces.
The essay I found most useful in thinking through Wikileaks early in Cablegate was Aaron Bady’s “Julian Assange and the Computer Conspiracy“, which took a close read of a 2006 essay by Assange to elucidate a possible set of motivations behind the release of diplomatic cables. Bruce Sterling takes a very different approach – he uses his knowledge of geek culture and his gift for speculative fiction to map Julian Assange and Bradley Manning onto hacker archetypes and declares the situation surrounding Wikileaks inevitable and melancholy. It’s far from fair – we’re dealing with an Assange who’s a projection of Sterling’s understanding of hacker culture rather than a real individual – but it offers insights that are often easier to deliver in fiction.
Specifically, Sterling does a beautiful job of unpacking the lure of encryption, the romance of the cypherpunks, the tension of “secrets” that aren’t especially secret or exciting, the difference between leaks and journalism. Some of the commenters on the essay challenge Sterling’s understanding of the facts – I think that misses the larger point, which is that Sterling offers a picture of Assange and the logic behind Wikileaks that falls short as a work of biography, but is extremely helpful in understanding why he and his project have captured the attention of so many geeks.
Looking forward to more reflections on Wikileaks and its implications, and to the best part of the year – some extended reading about topics that have nothing at all to do with the internet… Happy holidays, everyone.
Two weeks ago, On The Media (my very favorite NPR show, which is saying something as it has stiff competition) interviewed NPR economics correspondent Adam Davidson about the challenges of reporting on financial reform. Davidson is part of the Planet Money team, which produced the amazing, indispensable “Giant Pool of Money” episode for This American Life (the major competitor for my NPR affections), which explained the causes of the housing and financial crisis in terms non-experts can understand. He’s a very smart guy with a great track record of explaining tough financial stories. And he thinks that financial reform might be beyond what financial journalists can explain to their audiences:
We have used songs. We’ve used theater sketches. We’ve interviewed over a hundred economists, trying to find the ones who can just really nail an explanation in a clear, concise way. And, honestly, like we’ve picked off pieces, we’ve told little elements of it, but the big “here is what regulation is all about and we’re going to tell you how it works” we have not been able to crack.
I feel like I have seen the edge of what journalism can accomplish, of what journalism is capable of, and the bulk of financial regulatory reform is on the other side of that edge.
It’s possible that the combination of a brilliant piece of financial reporting and an important breaking story might pull Davidson back from the edge.
The breaking story is the SEC’s decision to charge Goldman Sachs with defrauding investors in failing to disclose the influence a hedge fund had in assembling a collateralized debt obligation. (Eyes glazed over yet? Now you understand Davidson’s problem.) But before you try to unpack that story, allow me to recommend another one.
Jesse Eisinger and Jake Bernstein of ProPublica put together a fascinating story called “The Magnetar Trade” about the role of a savvy – and likely unethical – hedge fund manager in inflating and profiting from the real estate bubble. The story’s available on ProPublica’s website, and it’s been turned into a surprisingly compelling radio story – “Eat My Shorts” – by This American Life’s Alex Blumberg.
Here’s the heart of the story:
A hedge fund called Magnetar made a set of very strange investments in 2006 and 2007. They bought into collateralized debt obligations, pools of mortgage-backed bonds. They bought the riskiest pieces of these investments – the equity tranche – for quite small amounts of money. But their willingness to take the riskiest pieces of these investments made it far easier for investment banks to sell the rest of these CDOs for very large sums of money. So far this is just an odd, high-risk strategy – if these CDOs succeeded, Magnetar would get high returns on what had been very modest investments… but the bottom was starting to fall out of the housing market in 2006, and it seemed more likely that these CDOs would default.
Magnetar did something very clever and, in my opinion, somewhat unethical. They bought credit default swaps – insurance, essentially – on these CDOs. The insurance isn’t all that expensive to buy, and it pays the face value of the asset if the asset proves to be worthless. But Magnetar didn’t “insure” the equity tranche positions they took – they bought insurance on the much more expensive, investment grade tranches of the CDOs. In effect, they found a way to convince people to build mansions in a dangerous neighborhood by building a shack there, then took out fire insurance on those very expensive homes, which they didn’t own.
To make sure the fire started – i.e., the CDOs collapsed, making the equity tranche worthless, but allowing the credit default swaps to pay off – Magnetar allegedly pressured the bankers who assembled these CDOs to include very risky assets in them (bonds made up of bad mortgages.) This is something you’d never do if you wanted the CDOs to succeed… and it’s precisely what you’d do if you’d bet on them failing. In other words, Magnetar didn’t just buy fire insurance on someone else’s mansion – they filled their shack with oily rags, knowing it would increase the chance of the neighborhood catching fire. (In my opinion, we’re now deep into the realm of unethical behavior, but that’s my opinion… and may not be regulators’ opinions.)
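To see why the trade was so attractive, it helps to put numbers on it. These figures are invented purely for the arithmetic – they are not Magnetar’s actual positions or premium rates:

```python
# All figures in $ millions, invented for illustration.
equity_stake = 10          # buying the riskiest (equity) tranche -- the "shack"
cds_notional = 200         # face value of senior tranches "insured" via CDS
cds_annual_premium = 0.02  # assumed premium: 2% of notional per year
years_held = 2

# Total outlay: the equity stake plus the insurance premiums paid.
total_cost = equity_stake + cds_notional * cds_annual_premium * years_held

# If the CDO collapses: the equity stake is wiped out (already counted
# in the cost), but the credit default swaps pay the insured face value.
net_on_default = cds_notional - total_cost
```

With these made-up numbers, the trade costs $18M and nets $182M if the CDO fails – roughly a tenfold payoff on betting against the mansions you helped get built.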
Magnetar made enormous amounts of money. Investment banks that built the CDOs made lots in fees for putting together the deals… though they lost lots investing in their own bad deals. And pension funds and other large institutional investors often lost their shirts on CDOs that had been packed with toxic crap by Magnetar, which was betting against those very investments. Those losses affect real people – people with state pensions – and the investment bank losses are now being paid for by taxpayer bailouts.
The amazing thing – it’s not clear that Magnetar broke any laws. They were allowed to hedge their CDO positions with credit default swaps… even if the hedge position wasn’t an exact hedge. (Remember, they didn’t insure the crap they’d bought at the low end of a CDO – they bought insurance on the good stuff at the top of a CDO.) Eisinger and Bernstein suggest that what investment banks did, on the other hand, was questionable both in terms of ethics and securities law. They built new securities – CDOs – knowing that a hedge fund had pressured them to make these securities more risky… and that the hedge fund was making a bet that these securities would fail. They then sold the top tranches of these securities as triple-A investments to credulous investors.
And that’s what the SEC is charging Goldman Sachs with. The charges aren’t in relation to trades made by Magnetar, but by Paulson & Co., but the idea’s the same. The SEC alleges that Paulson told Goldman to build crappy CDOs, sell them to credulous investors while knowing that Paulson was betting against the success of the CDOs using credit default swaps. Needless to say, Goldman denies this – and if it’s demonstrated that they did in fact do it, they’ll allege that it was the action of a single bad VP, Fabrice Tourre – just as Magnetar denies doing anything untoward.
So here’s my question: does this change the landscape around financial reform? Magnetar and Goldman/Paulson demonstrate that unregulated derivatives like CDOs and CDSs can be “financial weapons of mass destruction“, particularly when wielded by banks that are apparently willing to screw over one group of investors to benefit another client, a hedge fund.
How the @#*%^$! can the GOP and the financial industry argue against strengthened financial regulation? President Obama has said that he won’t sign legislation that doesn’t strengthen the rules around derivatives like CDSs. That sure sounds like a good idea, when regulators can’t currently agree whether screwing over investors as Goldman is accused of is actually illegal.
The answer, of course, is that the argument won’t be about this complex story – it will be about a straw man: the assertion that financial regulation will consist of a policy of government bailouts extending from here to the horizon. And if folks like Adam Davidson and his colleagues can’t explain these stories in language people understand, the narrative that Obama wants to bail out the banks may well be more compelling than the complex story of hedged bets and short trades.
In the meantime, everyone’s searching for metaphors. Mark Gimein, writing in Slate, explains the Goldman/Paulson deal in terms of a used car lot filled with hand-picked lemons. The ProPublica site uses a tower of champagne glasses to explain the different tranches of CDOs. You’ve read my lame arson shack metaphor. Thankfully, the This American Life folks are professionals. They’ve explained it in terms of a Broadway showtune:
It may be as incomprehensible as anything written in any of the excellent stories referenced here… but damn, it’s funny and something of an earworm.