Friday, April 09, 2010


And who's driving?

I’ve often written about technology that helps us do everyday things, and I almost always advocate technology that assists while leaving the choices with us, the control in our hands. Mostly, I think that’s what works best.

But what about when the technology is meant to improve safety in cases where we, ourselves, fail? The very point, there, is that our own choices are faulty, and the technology must fill in for us. Where’s the line between “manual override” and preventing us from casually defeating important safety protections?

We got one version of that with anti-lock braking systems. The system “knows” that people are not good at emergency braking, and that when we jam hard on the brake pedal we’re likely to throw the vehicle into a dangerous skid. Further, the ABS has information that we don’t, about exactly how the wheels are rotating and how the car is moving — at a millisecond-by-millisecond level that we can’t hope to match. And so, while we control whether the brakes are on, the ABS controls how they’re applied. There is no manual override.
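
To make that concrete, here’s a rough sketch, in Python, of the slip-based decision an ABS makes on each cycle. The threshold, names, and structure are illustrative assumptions, not any manufacturer’s actual control algorithm.

    # A minimal sketch of the per-cycle decision an anti-lock braking system makes.
    # The slip threshold and structure here are illustrative assumptions only.

    SLIP_THRESHOLD = 0.2  # assumed: beyond roughly 20% slip, a wheel is starting to lock

    def slip_ratio(vehicle_speed, wheel_speed):
        """Fraction by which a wheel lags the vehicle; 1.0 means fully locked."""
        if vehicle_speed <= 0:
            return 0.0
        return max(0.0, (vehicle_speed - wheel_speed) / vehicle_speed)

    def brake_commands(vehicle_speed, wheel_speeds):
        """Decide, wheel by wheel, whether to keep applying the driver's requested
        pressure or momentarily release it so the wheel can spin back up.
        A real controller re-runs a decision like this every few milliseconds."""
        commands = {}
        for wheel, speed in wheel_speeds.items():
            if slip_ratio(vehicle_speed, speed) > SLIP_THRESHOLD:
                commands[wheel] = "release pressure"   # the wheel is locking up
            else:
                commands[wheel] = "apply pressure"     # honor the driver's request
        return commands

    # Example: the car is doing 20 m/s but the front-left wheel has nearly stopped.
    print(brake_commands(20.0, {"FL": 2.0, "FR": 19.0, "RL": 18.5, "RR": 19.0}))
    # -> {'FL': 'release pressure', 'FR': 'apply pressure', ...}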

But what happens if the system malfunctions? What happens when it takes over the task of applying the brakes, and it doesn’t do it correctly? We’ve recently seen issues with both the electronic throttle and braking systems in Toyotas, and there have been problems with fuel injectors, cruise controls and other computerized car systems. High-end cars have detection mechanisms that augment the cruise control to keep you from getting too close to the car in front, and that warn you if there’s something behind you when you’re backing up, lest you run over something unseen. What happens when we rely on those systems and they fail?
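
The gap-keeping half of that amounts to something like the toy rule below; the two-second headway and the names are invented for illustration, not taken from any real product. The worry here is what happens when some piece of this loop (the sensor reading, the computation, the actuation) quietly gets it wrong.

    # Toy version of the "don't get too close" logic layered on cruise control.
    # The two-second headway rule and the numbers are illustrative assumptions.

    def adjust_cruise_speed(set_speed, current_speed, gap_to_lead_car,
                            lead_car_speed, min_headway_seconds=2.0):
        """Return a target speed: hold the driver's set speed when the road ahead
        is clear, otherwise slow toward the lead car's speed to restore the gap."""
        if current_speed <= 0:
            return set_speed
        headway = gap_to_lead_car / current_speed   # seconds until we reach the lead car
        if headway >= min_headway_seconds:
            return set_speed                        # plenty of room: hold the set speed
        return min(set_speed, lead_car_speed)       # too close: match the slower car

    # Example: cruise set to 30 m/s, but a car 40 m ahead is only doing 25 m/s.
    print(adjust_cruise_speed(30.0, 30.0, 40.0, 25.0))   # -> 25.0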

Does that keep us from relying on such systems? Should it? If something makes us safer 99.99% of the time, does that net out better, despite what happens in the one time in ten thousand when it doesn’t work? That depends, of course, upon what it’s saving us from, and how catastrophic the failure is.
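
One way to see why “that depends” is to put made-up numbers on it. The figures below are purely illustrative, not real accident statistics; the point is only that the rare failure can dominate the arithmetic if it’s bad enough.

    # Purely illustrative arithmetic: whether a 99.99%-reliable safety system nets
    # out positive depends on what it prevents versus what its rare failure costs.

    trips = 10_000                    # suppose the system acts once per trip
    harm_prevented_per_save = 1.0     # arbitrary units of harm avoided when it works
    harm_caused_per_failure = 500.0   # harm done in the one trip where it misfires

    expected_benefit = 0.9999 * trips * harm_prevented_per_save   # 9999.0
    expected_cost = 0.0001 * trips * harm_caused_per_failure      # 500.0

    print(expected_benefit, expected_cost)   # still a big net win with these numbers...
    # ...but set harm_caused_per_failure to 20000.0 and the failure term dominates.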

For some time, researchers have been experimenting with cars with more and more computer control — even cars that drive themselves, for long periods. That research is becoming quite mature now, and looks ready to deploy in the real world soon.

What fully autonomous vehicles will be like is hinted at by an experimental car called Boss. Built by a team of engineering students at Carnegie Mellon University in Pittsburgh, Pennsylvania, and backed by General Motors, this robotic car scooped a $2 million prize by outperforming 10 other autonomous vehicles in a simulated urban environment created for the DARPA Urban Challenge in 2007. To win, Boss had to execute complex manoeuvres such as merging into flowing traffic, overtaking, parking and negotiating intersections, while interacting with other autonomous vehicles and 30 human-driven ones.

Boss’s computer builds a model of the immediate environment by processing data from radar, laser sensors, cameras and GPS. It then uses this model, along with information such as local traffic rules, to plan the best route and provide the situational awareness the vehicle needs for manoeuvres such as changing lanes safely, or to determine whether it has priority at an intersection.

[...]

At Stanford University in California, the Volkswagen Automotive Innovation Lab has shown what might be possible. VAIL engineers have fitted a VW Passat with cameras, cruise control radar and laser sensors, allowing it to navigate a parking lot, spot an empty space and park perfectly, with or without a driver.
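
The excerpt doesn’t describe the software, but the general shape it suggests (fuse the sensor data into a model of the surroundings, then plan maneuvers against that model and the rules of the road) looks something like the toy version below. Every name and number here is an invented placeholder, not Boss’s or the VAIL Passat’s actual code.

    # Toy sketch of a perception-and-planning cycle in the spirit of the excerpt above.
    # All of the names, lanes, and thresholds are invented placeholders.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TrackedObject:
        x: float       # distance ahead of our car, meters (fused from radar/laser/camera)
        lane: int      # which lane the object occupies
        speed: float   # meters/second

    @dataclass
    class WorldModel:
        """The fused picture of the surroundings that planning works against."""
        ego_lane: int
        ego_speed: float
        objects: List[TrackedObject]

    def gap_ahead(world: WorldModel, lane: int) -> float:
        """Distance to the nearest tracked object ahead of us in the given lane."""
        ahead = [o.x for o in world.objects if o.lane == lane and o.x > 0]
        return min(ahead) if ahead else float("inf")

    def choose_maneuver(world: WorldModel, min_gap: float = 30.0) -> str:
        """Toy decision rule: keep the lane if it's open, change lanes only when an
        adjacent lane is clearly open, and otherwise slow down and follow."""
        if gap_ahead(world, world.ego_lane) > min_gap:
            return "keep lane"
        for lane in (world.ego_lane - 1, world.ego_lane + 1):
            if gap_ahead(world, lane) > min_gap:
                return f"change to lane {lane}"
        return "slow down and follow"

    # Example: a slower car 20 m ahead in our lane, with the adjacent lanes empty.
    world = WorldModel(ego_lane=1, ego_speed=15.0,
                       objects=[TrackedObject(x=20.0, lane=1, speed=10.0)])
    print(choose_maneuver(world))   # -> "change to lane 0"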

The claims for such technology include not only greater safety — fewer accidents, fewer deaths — but also better throughput, better fuel efficiency, lower stress (at best, human “drivers” will be able to read, work, or even sleep, as the car takes over the controls), and other such benefits. Cars will network cooperatively to share information, creating their own infrastructure.
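
What “networking cooperatively” might mean in practice is something like cars broadcasting short status messages to their neighbors. The message format and receiver logic below are invented for illustration and don’t correspond to any actual vehicle-to-vehicle standard; notice, too, that nothing in the sketch checks that a message is genuine, which is where the questions that follow come in.

    # A hypothetical cooperative-awareness message of the sort networked cars might
    # broadcast; the fields and the receiver logic are assumptions for illustration.

    from dataclasses import dataclass

    @dataclass
    class CarStatusMessage:
        sender_id: str
        latitude: float
        longitude: float
        speed: float          # meters/second
        braking_hard: bool
        timestamp: float      # seconds since the epoch

    def react_to_message(msg: CarStatusMessage, distance_to_sender: float) -> str:
        """Toy receiver logic: prepare to slow down if a nearby car reports hard
        braking. Nothing here verifies that the sender is real or truthful."""
        if msg.braking_hard and distance_to_sender < 100.0:
            return "pre-charge brakes and alert driver"
        return "no action"

    # Example: a car 60 m ahead broadcasts that it is braking hard.
    msg = CarStatusMessage("car-42", 40.44, -79.94, 3.0, True, 1270800000.0)
    print(react_to_message(msg, distance_to_sender=60.0))
    # -> "pre-charge brakes and alert driver"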

Will we trust all that? Should we?

Could a malfunctioning — or maliciously programmed — car send false information that causes tie-ups or collisions? Could, perhaps, a malicious base station mimic dozens of cars to create a massive problem? Could radio interference isolate a vehicle that’s relying on contact with others for its information?

On the other side, though, such cars could help us navigate safely during storms and fog that confound human drivers. They could get a sleepy driver home safely. They would avoid the normal, everyday mistakes we make on the road, mistakes that cause some 50,000 deaths and two and a half million injuries each year, in addition to the property damage.

How do we balance the risks and concerns against the huge benefit we could get from “smarter” cars?
