Yesterday was the DARPA Network Challenge, a contest in which anyone (individuals or teams) in the USA could try to find 10 red balloons that DARPA placed anywhere in the continental United States for 9 hours.
This type of contest excited me at first, even though it was sponsored by DARPA, an organization under the Department of Defense.
As part of a team from Metafilter, what did I observe and learn?
– There was a lot of misinformation on Twitter, as I expected: incorrect reports of balloon sightings, speculation about whether reported sightings were true, and speculation about a balloon's exact location.
– Team Metafilter’s strategy included watching new tweets for certain keywords. Twitter was used by multiple teams because team members could promote the contest to their audience (anyone subscribing to their Twitter feed or searching for related information) on the chance that a follower would find a balloon.
– The level of misinformation and the number of fake balloons was not as coordinated or comprehensive an effort as I expected. Only 2 false balloons with pictures (one in Royal Oak, MI and one in the Boston area) gained any traction on the internet. Cynically, I had thought there would be many more false balloons coordinated through fark, 4chan, or somethingawful.
A mefi member had coded up a pretty spiffy page as our home base, which monitored Twitter feeds (I believe it searched for tweets containing DARPA, balloon, and a couple of other keywords).
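I don't have the member's actual code, but the core idea, keeping only the tweets that mention any of a tracked keyword list, can be sketched in a few lines of Python (the keywords and sample tweets here are my own illustration, not the real feed):

```python
# Hypothetical sketch of the keyword monitor (not the actual mefi code).
# A real version would poll Twitter's search API; here we just filter
# a list of tweet texts against a tracked keyword set.
KEYWORDS = {"darpa", "balloon"}   # assumed keyword list

def matches(tweet_text, keywords=KEYWORDS):
    """True if the tweet mentions any tracked keyword (case-insensitive)."""
    text = tweet_text.lower()
    return any(keyword in text for keyword in keywords)

feed = [
    "Just saw a red balloon near Union Square!",
    "lunch was great today",
    "DARPA challenge sighting in Scottsdale?",
]
hits = [tweet for tweet in feed if matches(tweet)]
print(hits)   # keeps the first and third tweets
```

Substring matching this loose would also catch noise ("balloons", jokes, retweets of false sightings), which is part of why the feed needed human eyes on it.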
Reflections on trust and incentives
I wonder how much trust you should put in other users’ information on the internet, and DARPA wanted to see how much people did.
Before Saturday, I thought a team built on an existing social community (cough, like Metafilter) would win the DARPA Challenge, because its members’ balloon-sighting reports would be more accurate: submitting wrong information would erode their standing and trust in that community, or even get them outcast from it.
In the absence of an existing social community, the MIT team offered any individual (and those who referred that person to MIT) monetary rewards for correct coordinates. Given the monetary incentive (and no disincentive to send MIT false submissions), I am very curious to know: how did MIT quickly sift through all of their submissions to determine which were correct?
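As I understand MIT's publicly described scheme, each balloon paid $2,000 to the finder, $1,000 to whoever invited the finder, $500 to that person's inviter, and so on, halving at each level. A quick sketch shows why this is affordable no matter how deep the referral chain gets:

```python
# MIT's recursive incentive, as I understand it (halving rewards up the
# referral chain); the chain length here is just an example.
def referral_payouts(chain_length, finder_reward=2000.0):
    """Payouts from the finder up a referral chain of the given length."""
    return [finder_reward / (2 ** level) for level in range(chain_length)]

print(referral_payouts(4))   # [2000.0, 1000.0, 500.0, 250.0]

# Geometric series: even an arbitrarily long chain totals under $4,000,
# so the payout per balloon is bounded regardless of who referred whom.
print(sum(referral_payouts(50)) < 4000)   # True
```

The bounded total is the clever part: it let MIT recruit a huge referral tree without the prize money ever exceeding the budget per balloon.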
Metafilter received only 3 submissions through our form, which asked Metafilter members to report any balloon sightings (and only 1 of them was actually right).
Most of our location information ended up coming from Twitter. At our online home base, possible locations were ranked by the number of tweets associated with a given geographic area (I think the area was inferred from a combination of factors, such as place names mentioned in the tweet and any attached coordinates). This sort of sampling is based on a ‘wisdom of the crowds’ philosophy.
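The ranking step can be sketched like this; it is a toy version with made-up tweets and a fixed place list, whereas the real page presumably combined place names, profile locations, and coordinates:

```python
from collections import Counter

# Assumed, simplified location extraction: match tweet text against a
# fixed list of candidate place names. Purely illustrative.
PLACES = ["seattle", "arlington", "albany", "scottsdale"]

def extract_place(tweet_text):
    """Return the first known place name mentioned in the tweet, if any."""
    text = tweet_text.lower()
    for place in PLACES:
        if place in text:
            return place
    return None

def rank_locations(tweets):
    """Rank candidate balloon locations by tweet volume, most-tweeted first."""
    counts = Counter(place for place in map(extract_place, tweets) if place)
    return counts.most_common()

tweets = [
    "balloon over Seattle!",
    "seeing a red balloon in Seattle right now",
    "possible balloon in Scottsdale",
]
print(rank_locations(tweets))   # [('seattle', 2), ('scottsdale', 1)]
```

The weakness is obvious in hindsight: tweet volume measures chatter, not truth, so a convincing rumor ranks exactly like a real sighting.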
Interestingly, locations that did not have any balloons (Seattle; Arlington, VA; Albany, NY; Royal Oak, MI; Iowa) generated the most tweets. At least with our configuration of the Twitter filter, the wisdom of the crowds failed.
Congratulations to the MIT team.