Crowdsourced Journalism

After reading the Mason & Suri piece and commenting on this week's discussion board thread about crowdsourcing, I got to thinking, and doing some research, about crowdsourcing from a more in-depth journalism perspective: how it can actually be used in the field.

One of the most interesting examples I found was the Guardian's crowdsourcing experiment in 2009. The Guardian is one of the United Kingdom's most respected and well-read national newspapers. It prints daily and has a website similar in scale to that of the New York Times. In 2009, the Guardian digitally published 700,000 expense receipts from Members of Parliament, many of which were evidence of wide-scale political corruption and expense fraud.

In an attempt to quickly gather information and determine which receipts and expenses held promise of scandal, the Guardian asked its readers to digitally comb through the 700,000 documents. Users were asked to comment on, highlight, and annotate any expenses of interest and to note why they were of interest. Professional journalists then went through and focused on the pinpointed documents. Guardian readers were not asleep at the wheel: they managed to review 170,000 documents in the first four days. Crowdsourcing at its finest.
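
To make the triage idea concrete, here is a minimal sketch of how reader flags might be tallied so journalists can work from the most-flagged documents down. Everything here is invented for illustration; the document IDs, flag names, and logging format are assumptions, not the Guardian's actual system.

```python
from collections import Counter

# Hypothetical log of reader actions as (document_id, flag) pairs.
# The flag categories are made up for this sketch; the Guardian's
# real interface may have used different ones.
reader_flags = [
    ("receipt-0001", "investigate"),
    ("receipt-0001", "interesting"),
    ("receipt-0002", "not interesting"),
    ("receipt-0003", "investigate"),
    ("receipt-0001", "investigate"),
]

# Tally how many readers marked each document as worth a closer look.
interest = Counter(
    doc_id
    for doc_id, flag in reader_flags
    if flag in ("interesting", "investigate")
)

# Journalists then work the queue from the most-flagged documents down.
for doc_id, votes in interest.most_common():
    print(f"{doc_id}: flagged by {votes} reader(s)")
```

The appeal of this kind of setup is that the crowd does the cheap, parallelizable filtering while the scarce resource, trained journalists, is spent only on the documents that bubble to the top.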

A great write-up of the process was published by Nieman Lab; it details how the Guardian achieved its success by making users feel like they were playing a game, drawing them into the action. The engagement the Guardian received is incredible and helped bring to light many important political issues in the UK at the time. But I believe there are some cons to this process as well.

For one, while crowdsourcing brought lightning-fast speed to the project, it also discounted the value of trained, professional journalists. Much like the quality problems one can run into on Mechanical Turk, I'm sure the Guardian ran the risk of users or trolls gaming the system for a laugh and making ridiculous or inappropriate comments. I'm sure, too, that had professional journalists had the time and resources to delve into the project themselves, they would have found more valuable information, or at least drawn more valuable conclusions from it, as they are trained in investigative reporting. But if speed is the desired goal, this was certainly an effective way to achieve it.
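
One standard defense against troll noise in the crowdsourcing literature is redundancy: have several readers judge the same document independently and only trust a label that wins a majority. I don't know whether the Guardian did this, so the sketch below is purely illustrative, with made-up labels and a hypothetical `majority_label` helper.

```python
from collections import Counter

def majority_label(labels, threshold=0.5):
    """Return the winning label if it gets more than `threshold`
    of the votes; otherwise return None (escalate to a journalist)."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count / len(labels) > threshold else None

# Hypothetical: three readers independently label each receipt.
# A lone troll's joke label is simply outvoted.
labels_per_doc = {
    "receipt-0001": ["investigate", "investigate", "not interesting"],
    "receipt-0002": ["not interesting", "lol fake", "not interesting"],
}

for doc_id, labels in labels_per_doc.items():
    print(f"{doc_id}: {majority_label(labels)}")
```

Redundancy costs speed, of course, since every document now needs multiple readers, which is exactly the trade-off between crowd quality and crowd velocity that the Mechanical Turk comparison raises.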

Because the Guardian instituted this project to seek a competitive advantage, I do think it can be ruled a large-scale success, and I wonder if, in the future, they will try this again should the need arise. If they do, I'm curious to see how five-plus years of web and technology development will influence the process and how credible the results will be.

One thing I wonder about is the ethics involved. If User X makes a big find in one of the documents that leads the Guardian to a major conclusion or piece of evidence, does User X receive credit? Should they? Can the Guardian argue that User X shouldn't receive credit because they used documents provided by the Guardian, granting the Guardian 'ownership' over any finds? With viral fame being what it is on the internet, I wonder if this will be an aspect of crowdsourced journalism that takes center stage at some point - if at some point User X will want their name associated with the breaking news, no longer content to just be a face in the crowd.

