"Michael, are you sure you want to do that?"
Pull over, KITT - you've just been lapped.
On Monday, November 14th, I attended a presentation by Sebastian Thrun, an AI researcher at Stanford U. whose team recently won the DARPA Grand Challenge.
The idea behind the Grand Challenge is to accomplish something that seems impossible, along the lines of crossing the Atlantic or the X Prize. DARPA had previously funded cars that drive themselves, but after numerous failures it decided to turn the task into a contest and see how far teams would get in a competitive setting. Last year none of the entrants managed to finish the course, but this year five finished, four within the allotted time.
The difference between last year and this year was primarily improvements in software, not hardware. In fact, once the software has been developed, outfitting a car with the necessary equipment to drive itself (the perceptual apparatus of laser, radar, and video guidance, plus GPS, inertial motion systems, general-purpose computing servers, and fly-by-wire controls) was estimated by Sebastian to cost less than $5k once mass-produced.
Sebastian spent a long time conveying how hard it is to teach a computer to answer the question "What is a road?" The entire time the audience was left wondering - how the @(*^*#@ do we do that?
One of the issues that tripped up the lasers was the systematic pitching forward and backward of the car as it bumped over the terrain. This caused the perceptual systems to jerk back and forth, scanning parts of the ground more than once and mistaking the discrepancies between passes for obstacles. No inertial guidance system is precise enough to correct for these errors, and the team won by discovering a systematic regularity in the errors themselves (and building a probabilistic model to capture them). Instead of deriving theoretically precise values for the constants in their equations, the Stanford team tuned these parameters by actually driving the vehicle and "tagging" safe terrain. During training, they also took calculated risks, let bad things happen, and looked back at the earlier optical flow.
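To make the tuning idea concrete, here is a minimal sketch of how one might calibrate an obstacle threshold against terrain the vehicle has tagged as safe. Everything here is my own illustration: the function names, the linear time-drift error model, and the numeric defaults are assumptions, not the Stanford team's actual code, whose probabilistic model was surely more sophisticated.

```python
import numpy as np

# Sketch: two laser readings of the same ground cell disagree more the
# further apart in time they were taken, because pose error accumulates
# as the car pitches over bumps. So instead of a fixed height threshold,
# let the allowed discrepancy grow with the time gap between readings,
# and tune the growth rate on cells the vehicle actually drove over
# ("tagged" safe terrain), where any apparent obstacle must be a phantom.

def is_obstacle(z1, t1, z2, t2, base=0.15, drift=0.05):
    """Flag a cell as an obstacle if two height readings (meters) taken
    at times t1, t2 (seconds) disagree by more than pose error can
    plausibly explain. base and drift are illustrative defaults."""
    allowed = base + drift * abs(t1 - t2)  # error budget grows with time gap
    return abs(z1 - z2) > allowed

def tune_drift(safe_readings, base=0.15, target_false_rate=0.001):
    """Pick the smallest drift rate such that terrain known to be drivable
    is almost never flagged. safe_readings: an (N, 4) array of
    (z1, t1, z2, t2) rows collected while driving over tagged-safe ground."""
    z1, t1, z2, t2 = safe_readings.T
    gaps = np.abs(t1 - t2)
    # drift rate needed to explain away each safe-terrain discrepancy
    needed = np.maximum(np.abs(z1 - z2) - base, 0.0) / np.maximum(gaps, 1e-6)
    # choose a rate that keeps false alarms on safe terrain below the target
    return float(np.quantile(needed, 1.0 - target_false_rate))
```

The point worth noticing is where the labels come from: the terrain under the vehicle's own wheels supplies the "safe" tags, so the parameters are tuned by driving itself rather than by hand-labeling data.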
Sebastian is aware of the military applications driving (sic) the development of this technology, but is personally motivated by the lives he thinks can be saved in civilian applications. In fact, he boasted that he was now planning to have a vehicle drive itself from San Francisco to Los Angeles by 2007! I am beginning to wonder how much longer it will be legal for humans to operate motor vehicles.
When asked if any of the team's research findings would be applicable elsewhere in CS, Sebastian replied that he had no idea, yet. His philosophy is to first build the system, and then afterwards spend years poring over the data to figure out what happened. In case you hadn't realized it yet, the robots are already here (some of them killer)!