MIT’s Smart Automotive Co-Pilot Secretly Helps You Drive Better

Before cars start driving themselves completely, they’ll most likely start helping humans behave better on the road, politely ignoring instructions to run a red light or noticing traffic cones or other obstacles a driver might not see. A new system developed at MIT could help cars have our backs, letting them serve as semi-autonomous co-pilots.

The safety system combines a laser rangefinder and a camera to spot obstacles with an algorithm that computes those obstacles’ locations and identifies “safe zones.” It allows a car to share control with the driver, says MIT student Sterling Anderson, who has been testing the system.

The setup is simpler than the suite of sensors and software in Google’s driverless car fleet, which could make it cheaper and easier to put into practice, according to MIT News. The researchers have been testing it in collaboration with Quantum Signal, LLC.

Several automakers and research groups are trying to bring computer intelligence into the auto cabin, with varying levels of complexity and success. Volkswagen has proposed its own temporary autopilot system, which can assume control of the car in certain situations. General Motors is building a system that can detect other cars and automatically apply the brakes in crowded highway traffic. The European road train program connects cars wirelessly and allows them to follow one another. But each of these systems follows a fixed set of rules for specific tasks.

MIT’s system instead tries to make the car reason more like a human driver: it weighs several factors and makes a series of decisions, rather than following a preprogrammed set of instructions.

“Typically you and I see a lane or a parking lot, and we say, ‘Here is the field of safe travel, here’s the entire region of the roadway I can use, and I’m not going to worry about remaining on a specific line, as long as I’m safely on the roadway and I avoid collisions,’” Anderson tells MIT News.

The system divides the vehicle’s surroundings into triangles, which it labels safe or unsafe according to what the camera and laser rangefinder detect. Inside safe triangles, the human driver stays in control and makes his or her own choices; when the car approaches triangles that contain obstacles, the system takes over and keeps the car in the safe zone. Anderson and MIT research scientist Karl Iagnemma have run more than 1,200 tests of the system, and for the most part it works, apart from a few collisions caused by quirks in the experimental hardware.
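To make the idea concrete, here is a minimal, hypothetical sketch in Python of how a safe-zone check like the one Anderson describes might work. The triangle decomposition, the point-in-triangle test, and the hand-over rule (driver input passes through in obstacle-free triangles, the co-pilot’s command wins otherwise) are illustrative assumptions for this article, not MIT’s actual implementation, which plans ahead rather than reacting only to the car’s current position.

```python
# Hypothetical sketch of a triangle-based "safe zone" check.
# Not MIT's code; the data structures and hand-over rule are assumptions.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Triangle:
    a: Point
    b: Point
    c: Point
    contains_obstacle: bool  # would be set from camera / laser-rangefinder detections

    def contains(self, p: Point) -> bool:
        """Point-in-triangle test using signed areas (barycentric signs)."""
        def sign(p1: Point, p2: Point, p3: Point) -> float:
            return (p1[0] - p3[0]) * (p2[1] - p3[1]) - (p2[0] - p3[0]) * (p1[1] - p3[1])
        d1 = sign(p, self.a, self.b)
        d2 = sign(p, self.b, self.c)
        d3 = sign(p, self.c, self.a)
        has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
        has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
        return not (has_neg and has_pos)

def choose_steering(position: Point, triangles: List[Triangle],
                    driver_cmd: float, safe_cmd: float) -> float:
    """Pass the driver's command through while the car sits in an
    obstacle-free triangle; otherwise use the co-pilot's safe command."""
    for tri in triangles:
        if tri.contains(position):
            return safe_cmd if tri.contains_obstacle else driver_cmd
    # Outside the mapped field of safe travel: be conservative.
    return safe_cmd

# Example: a car at (1.0, 1.0) inside an obstacle-free triangle keeps the
# driver's steering input (0.2); an unmapped or unsafe position would not.
tris = [Triangle((0, 0), (4, 0), (0, 4), contains_obstacle=False)]
print(choose_steering((1.0, 1.0), tris, driver_cmd=0.2, safe_cmd=0.0))  # -> 0.2
```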

In tests, drivers remotely steer a small utility vehicle through an obstacle course, watching the car’s field of view on computer monitors. Drivers who trust the system tend to perform better. If it were installed in a regular car, drivers might not even notice it, Anderson claims: “You would likely just think you’re a talented driver,” he says. “You’d say, ‘Hey, I pulled this off,’ and you wouldn’t know that the car is changing things behind the scenes to make sure the vehicle remains safe, even if your inputs are not.”

But that invisibility may actually hurt a driver’s performance, he adds: drivers learn from their mistakes, and a system that quietly corrects them before they happen would deny drivers those lessons and could leave them overly dependent on automation. Training humans to work alongside robotic helpers, and to become better drivers overall, is an important part of the research.

Co-Pilot Setup

This setup tests human responses to the semi-autonomous co-pilot system.

MIT News