
Alchemical Musings

This blog has moved to a new location - http://alchemicalmusings.org

Monday, November 21, 2005

"Michael, are you sure you want to do that?"

Pull over, KITT - you've just been lapped.

On Monday November 14th I attended a presentation by Sebastian Thrun, an AI researcher at Stanford U. whose team recently won the Darpa Grand Challenge.

The idea behind the Grand Challenge is to accomplish something that seems impossible, along the lines of crossing the Atlantic, the X-Prize, etc. Darpa had previously funded cars that drive themselves, but after numerous failures decided to turn the task into a contest and see how far teams would get in a competitive setting. Last year none of the entrants managed to finish the course, but this year five finished, four within the allotted time.

The difference between last year and this year was primarily improvements in software, not hardware. In fact, once the software has been developed, outfitting a car with the necessary equipment to drive itself (the perceptual apparatus - laser, radar, and video guidance - plus the GPS, the inertial motion systems, the general-purpose computing servers, and the fly-by-wire control systems) was estimated by Sebastian to cost under $5k once the components are mass produced.

Sebastian spent a long time conveying how hard it is to teach a computer to answer the question "What is a road?" The entire time the audience was left wondering - how the @(*^*#@ do we do that?

One of the issues that tripped up the lasers is the systematic pitching forward and backward of the car as it bumps over the terrain - this causes the perceptual systems to jerk back and forth, perceiving parts of the ground over again and mistaking these discrepancies for obstacles. No inertial guidance system is precise enough to correct for these errors, and their team won by discovering a systematic regularity in the errors themselves, and building a probabilistic model to capture it. Instead of finding theoretically precise values for the constants in their equations, the Stanford team tuned these parameters by actually driving the vehicle and "tagging" safe terrain. During the training, they also took calculated risks, let bad things happen, and looked back at the earlier optical flow.
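The tuning idea above can be sketched in a few lines. This is a minimal, hypothetical illustration - not Stanford's actual code or model - assuming obstacle detection reduces to thresholding the height differences among nearby laser returns, with the threshold chosen so that terrain a human driver tagged as safe is never flagged:

```python
# Hypothetical sketch of tuning an obstacle threshold from human-"tagged"
# safe driving data. All names and numbers are illustrative assumptions.

def height_delta(readings):
    """Max height difference (meters) among nearby laser returns."""
    return max(readings) - min(readings)

def tune_threshold(safe_patches, candidate_thresholds):
    """Pick the smallest threshold that still accepts every terrain patch
    a human driver tagged as safe; smaller thresholds flag obstacles
    more aggressively, so this trades sensitivity against false alarms."""
    for t in sorted(candidate_thresholds):
        if all(height_delta(patch) <= t for patch in safe_patches):
            return t
    return max(candidate_thresholds)

def is_obstacle(readings, threshold):
    """Flag a patch whose height variation exceeds the tuned threshold."""
    return height_delta(readings) > threshold

# Patches of laser returns tagged "safe" during a training drive:
safe = [[0.00, 0.02, 0.05], [0.10, 0.13, 0.18]]
t = tune_threshold(safe, [0.05, 0.10, 0.15, 0.20])
print(t)                            # smallest threshold passing all safe patches
print(is_obstacle([0.0, 0.45], t))  # a 45 cm step reads as an obstacle
```

The real system reportedly modeled the pitch-induced errors probabilistically rather than with a single fixed threshold, but the training loop - drive, tag safe terrain, fit the parameters to the tags - follows the same shape.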

Sebastian is aware of the military applications driving (sic) the development of this technology, but is personally motivated by the lives he thinks can be saved in civilian applications. In fact, he boasted that he was now planning on having a vehicle drive itself from San Francisco to Los Angeles by 2007! I am beginning to wonder how much longer it will be legal for humans to operate motor vehicles.

When asked if any of the team's research findings would be applicable elsewhere in CS, Sebastian replied that he had no idea, yet. His philosophy is to first build the system, and then afterwards spend years poring over the data to figure out what happened. In case you hadn't realized it yet, the robots are already here (some of them killer)!


Blogger akhi003 said...

I wonder how much credit he gave to Red Whittaker from the CMU Red Team in his talk. In last year's race, CMU was the team to go the farthest, although it was only 7 miles.

I spoke with some partners on the team during my time at CMU and on the Red Team, and they said that a lot of the technologies are in the works to be implemented shortly. One thing that comes to mind is a coating device that was used to keep the sensors clean in the desert. This device would automatically spray a special coating substance on the lenses of the sensors, and the substance would evaporate without leaving a trace.

I know that Caterpillar and GM are already using some of the technologies developed on the Red Team Hummers in their vehicle lineups next year.

It's good to see that a great focus is made on civilian safety as a result of the race.


11/21/2005 11:33:00 AM  
