Pittsburgh-based Carnegie Mellon University (CMU) has teams on both the hardware and software tracks of the Darpa Robotics Challenge (DRC). “We are not in the business of taking on every challenge. They have to match with our expertise,” says Tony Stentz, director of the National Robotics Engineering Center at CMU.
The university competed in both Grand Challenges and won the Urban Challenge, and has worked in vehicle autonomy since the early 1980s. “We're no stranger to mobile machines able to manipulate things, so the Robotics Challenge is a match, but more of a stretch because we don't have direct experience in humanoid robots,” he says.
For CMU, the Grand and Urban challenges were valuable exercises, both for published papers and intellectual property (IP). “The two major teams, Carnegie Mellon and Stanford, have seen members hired by Google, which has continued the work,” says Stentz.
Darpa's Robotics Challenge “is achievable, but very challenging,” he says. “It's hard enough to build a robot and a car that can drive itself, but really hard to develop a robot that can drive a car.” But the challenge is on the right technology path, he thinks.
“When robotics got started, there was an opinion that in unstructured environments the machines would be tele-operated. Then the pendulum swung the other way, to fully autonomous systems with no human intervention,” Stentz says. “That's a tough problem, when the robot can't turn to a human for help. There is a more recent trend towards mixed systems, with some level of human control and some level of autonomy, and all the research is into what is the right mix.”
Not all Darpa challenges have succeeded. UAVForge, a contest to crowd-design a small perch-and-stare unmanned aircraft, did not produce a winner, but generated useful lessons, the agency says. The challenge involved submitting designs for online voting, with the highest-ranked UAVs going forward to a fly-off. Northwest UAV (NWUAV) was to build a batch of the winning design for use in a military exercise.
“UAVForge demonstrated the willingness of a global community of non-traditional developers to participate in a compelling challenge,” says program manager Jim McCormick. “The ability to build a community, foster collaboration and overcome inhibitions such as IP concerns proved valuable. The ability to recognize and filter sources of bias in crowd voting was surprisingly important and well-received.”
Of the finalists in the May 2012 fly-off, Team Halo from the U.K.'s Middlesex University scored highest, but none could complete the mission. “Elements of UAVForge were always going to be very difficult, if not impossible: for example, a 2-mi. ingress, observation for up to 3 hr., then a 2-mi. egress, all non-line-of-sight,” says Stephen Prior, who led the team.
“One rule was you could not score any points for advanced behaviors if the baseline objectives weren't met. However, these baseline objectives were already difficult to complete,” says Ruud Knoops, with another finalist, TeamAtmos from Delft University of Technology in the Netherlands. “The fact that eight of the 12 teams weren't even able to reach the observation site says a lot about the overall difficulty.”