Everyone is talking about self-driving cars these days, with Ford claiming that it will have 100 autonomous vehicles on the road by the end of 2019. When we say that a car is self-driving, though, what do we really mean? What many people don’t realize is that engineers have a six-level scale for ranking vehicle autonomy, and we’re still much farther from the top level than the news would lead you to believe.
Autonomous Vehicle Levels
The six-level autonomous vehicle scale was developed by SAE International, a professional association of automotive engineers. And though the levels are currently assigned by the vehicle manufacturer rather than by any formal body, it’s fairly easy to understand where a car falls in the rankings.
At the bottom of the scale – level 0 – are standard cars. They may have anti-lock brakes and cruise control, but the car can’t actually do anything on its own. However, once you move up to adaptive cruise control and lane assist, in which the car can make limited acceleration and braking decisions on its own, then the car is considered to be at level 1.
Right now, most autonomous car developers aren’t even aiming for the top of the scale; their hope is to reach level 4, which means fully autonomous under limited circumstances. In other words, the car will still need a driver on board and may only operate within a geofenced area.
The Legal Problem of Vehicle Autonomy
One of the reasons that car manufacturers aren’t aiming for level 5 autonomy, beyond the technological challenges, is that too many legal issues arise as vehicles become more fully self-driving. In particular, if a self-driving car is involved in an accident, who is legally at fault? In a recent lawsuit, California judges were forced to grapple with exactly this question. In that case, a self-driving Chevrolet Bolt collided with a motorcycle, injuring its rider. So is the backup driver at fault, or is the manufacturer to blame?
As more level 3 driverless cars hit the road, this question is appearing before the courts again and again. Some lawyers argue that human drivers are still in charge of the vehicle, even when it’s running an automated system. On the other side, though, is a group that says this is fundamentally a product liability issue. Whichever side the courts come down on now, the fact is that as more advanced cars come to market, the argument for product liability will likely get stronger.
Improved Tech and The Liability Question
Beyond questions of vehicle autonomy itself are concerns about what kind of technology undergirds these systems. Right now, self-driving cars rely on LIDAR, cameras, and other sensors to identify pedestrians and street signs and to navigate the roads safely. Tesla, however, is committed to developing self-driving cars without LIDAR, and its research seems to support this approach. By changing the technology that allows cars to navigate independently, these cars could become safer – though still not fully independent.
Consumers are increasingly excited about self-driving cars, but we’re still a long way from full autonomy – and even today’s “full autonomy” isn’t what it seems. Drivers are still in charge, even when cars seem able to drive without any outside input, and we can’t afford to forget that.
Article Submitted By Community Writer