Before cars start driving themselves completely, they'll most likely start helping humans behave better on the road, politely ignoring instructions to run a red light or noticing traffic cones or other obstacles a driver might not see. A new system developed at MIT could help cars have our backs, letting them serve as semi-autonomous co-pilots.
The safety system includes a laser rangefinder and a camera to spot obstacles, and an algorithm that computes these items' locations and identifies "safe zones." It allows a car to share control with the driver, says MIT student Sterling Anderson, who has been testing the system.
It's simpler than the suite of sensors and programs involved in Google's driverless car fleet, which could make this one cheaper and easier to actually implement, according to MIT News. The researchers have been testing it along with a company called Quantum Signal, LLC.
Several automakers and research groups are trying to bring computer intelligence into the auto cabin, with varying levels of complexity and success. Volkswagen proposed its own temporary autopilot system, which can assume control of the car in certain situations. General Motors is building a system that can detect other cars and automatically apply the brakes in crowded highway traffic. The European road train program connects cars wirelessly and allows them to follow each other. But these response systems follow a specific set of guidelines for certain tasks.
This system tries to make the car think more like a human — it can make a series of decisions based on several factors, rather than following a preprogrammed set of instructions or robot rules.
"Typically you and I see a lane or a parking lot, and we say, 'Here is the field of safe travel, here's the entire region of the roadway I can use, and I'm not going to worry about remaining on a specific line, as long as I'm safely on the roadway and I avoid collisions,'" as Anderson puts it to MIT News. The vehicle's entire environment is divided into triangles, which define safe and unsafe areas according to what the car sees through the camera and laser rangefinder. In the safe triangle zones, a human can control the car and make his or her own choices; in areas where the triangles mark obstacles, the system takes over and keeps the car in the safe zone.

Anderson and MIT research scientist Karl Iagnemma have run more than 1,200 tests of the system, and for the most part, it works (with the exception of a few collisions caused by quirks in the experimental hardware).
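To make the idea concrete, here is a minimal sketch of how a triangulated "field of safe travel" might work in principle. The triangle test, the safe-zone lookup, and the simple override rule are illustrative assumptions on my part, not the MIT team's actual algorithm: the car's predicted position is checked against a set of safe triangles, and the driver's command passes through untouched unless that position falls outside all of them.

```python
def _edge_sign(p, a, b):
    """Signed area test: which side of edge a->b the point p lies on."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def point_in_triangle(p, tri):
    """True if point p lies inside (or on an edge of) triangle tri."""
    a, b, c = tri
    d1 = _edge_sign(p, a, b)
    d2 = _edge_sign(p, b, c)
    d3 = _edge_sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    # Inside only if all three signs agree (zeros count as "on the edge").
    return not (has_neg and has_pos)

def in_safe_zone(p, safe_triangles):
    """The car is safe if its position lies in any safe triangle."""
    return any(point_in_triangle(p, t) for t in safe_triangles)

def arbitrate(predicted_pos, driver_cmd, safe_cmd, safe_triangles):
    """Shared control: pass the driver's command through while the
    predicted position stays in a safe triangle; otherwise substitute
    a corrective command."""
    if in_safe_zone(predicted_pos, safe_triangles):
        return driver_cmd
    return safe_cmd
```

The appeal of this kind of arbitration is exactly what the article describes: within the safe region the driver feels no intervention at all, and the system only acts at the boundary.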
In tests, drivers remotely steer a small utility vehicle through an obstacle course, watching the car's field of view through computer monitors. People who trust it tend to perform better. If it were installed in a regular car, drivers may not even notice it, Anderson claims: "You would likely just think you're a talented driver," he says. "You'd say, 'Hey, I pulled this off,' and you wouldn't know that the car is changing things behind the scenes to make sure the vehicle remains safe, even if your inputs are not."
But this may actually be detrimental to a driver's performance, he adds — drivers learn from their mistakes, and a system that corrects them preemptively wouldn't teach those lessons, allowing humans to become too dependent on automation. Training humans to work alongside robotic help, and to become better overall drivers, is also important.
I'm not so sure that we need more people who THINK they're good drivers. You might end up creating more actual good drivers if they were reprimanded whenever the system corrected their unsafe behavior.
This reminds me of the cars that have indicator lights to give feedback on fuel-efficient driving, i.e. when to shift gears or how gently to accelerate. As great as these concepts are, I think the general market frowns on them.
Just what we need, a vehicle that MIGHT ignore driver input.
They've run 1,200 tests in a highly simplified environment and they're predicting world domination. Good grief.
I vote for automated vehicles -- to be implemented in controlled areas and gradually brought into 90% of the roadways. And to be really successful, the areas in which they're used should have 99% automated vehicles.
One of the ways that the Mark One eyeball and the biological computer excel is in the ability to recognize information when it is presented in new and 'noisy' environments.
I really like these guys' idea. Humans have proven they're good at decision making, yet bad under pressure. Computers are bad at decision making, yet good under pressure. The nexus of the two ought to hold some serious synergies.
For @ford2go's benefit:
1) Anti-lock brakes and stability controllers *already* selectively ignore or modify driver input, and have been so effective (http://www.iihs.org/research/topics/pdf/r1139.pdf) that all vehicles sold in the US as of this year are required to have them. If such systems have proven effective in practice, why not continue to develop them?
2) I just Googled these guys and see no reference to world domination. In fact, seems like a pretty stand-up move to admit that not all of their *over 1,200* tests have been successful. Such is the nature of experimentation. Given the accolades they've received and the places they've published, I'd be careful before I haphazardly criticized their work from the peanut gallery.