The Ansari X Prize, Kay says, built on research already conducted in supersonic and exoatmospheric flight, most notably by the North American X-15. Similarly, for the Lunar Lander challenge, “something already existed, from Apollo and the McDonnell Douglas DC-X. It was a low-budget competition to increase the efficiency of something that was already in place.”
The Google Lunar X Prize, meanwhile, has been designed “to set project management conditions rather than targets for technology,” Kay says. “The constraints of budget, source of funding and time set new conditions for managing a project that are in line with an open-source, low-budget approach to spaceflight.” This is encouraging competitors to “use open-source technology, things they can find, and not try to do something fancy.”
Technology-demonstration competitions like the U.S. Defense Advanced Research Projects Agency's (Darpa) Grand and Urban Challenges for autonomous ground vehicles, and its latest Robotics Challenge for disaster-response robots, are so complex that competing requires large, multidisciplinary teams. “Our Urban Challenge team involved a very large group of people, the largest we have assembled to work on one particular project because we had to tap so many types of expertise,” says Tony Stentz, director of the National Robotics Engineering Center at Carnegie Mellon University (CMU).
“The key to a successful challenge is to establish a lofty yet still achievable goal—a high bar to aim for, but not impossibly high,” he says. Advancing robotic technology is suited to a prize approach. “At CMU, robotics projects are fast-paced and challenging to begin with, and the challenges breed some good practices in getting down to the nuts and bolts,” Stentz says. “Competition really breeds a culture of risk-taking. It's more acceptable to try and fail than in the normal day-to-day business of research.”
Setting the bar high “encourages thinking outside the box,” he says. “Schedules tend to be accelerated, which forces participants to have a very clear focus, to cut through red tape and get over the usual petty impediments to progress and cut to the chase.” He also cites the team-building and “excitement factor” in a challenge—“everyone loves a competition with lofty objectives.”
CMU won the Urban Challenge in 2007, and has two teams competing in the Darpa Robotics Challenge with “many faculty involved, with all types of expertise.” The cost of such technology-demonstration competitions makes finding the resources “a challenge in itself,” says Stentz. While Darpa provided some funding for the Urban Challenge, “we had to raise substantial money to round out the work,” he says, adding that publicity, hands-on experience and intellectual property (IP) were reasons sponsors came on board.
“Darpa challenges are structurally different,” Kay says. “They have an initial round to identify qualified participants, which increases the chances of finding a better solution.” The agency then provides funding to continue development of technologies for a second round of competition—an approach used with both the Urban and Robotics Challenges.
“It is important for this kind of organization to have a set of people with different technologies able to compose different approaches to the same problem, even if they don't get any particular technology or IP at the end,” he says. “It's a more expensive approach. With a second round, and judges, the cost can be twice the prize money.”
Darpa runs some of the bigger government challenges. The America COMPETES authorization limits the size of individual prizes to $50 million, but most government challenges offer prizes of $10 million or less. The authority also limits participation to U.S.-based entities. To compete in NASA's Green Flight Challenge, Slovakia's Pipistrel and Germany's University of Stuttgart had to find U.S. partners.