Self-driving, driverless, autonomous, or robot cars have been appearing more and more in the news lately, and not always in a positive light. Some consider their widespread implementation the next logical step for transportation and artificial intelligence technology, while surveys suggest as many as 60% of drivers remain hesitant and would not trust such a system. So naturally one might find themselves wondering: what exactly does it mean for a car to be self-driving, how does it work, and are there different types? In this blog, we will take a look at the technology behind self-driving cars.
Making travelling more accessible
Self-driving cars are seen by many as a means to make travelling more accessible: from making it possible for people who don’t have a driver’s licence to use a car, to offering the only chance at independent travel for people with disabilities or for elderly people who are no longer fit to drive.
Furthermore, a fully autonomous car would remove human error, effectively reducing the number of car accidents. Unlike humans, when presented with the same environment and circumstances, an automated car reacts the same way every time, without getting tired, distracted, or unsure of its actions. It also eliminates unpredictable driving patterns and differences in skill level.
Let’s discuss the drawbacks
On the other hand, all of these benefits would only be realized with the introduction of a fully automated car, and until we achieve this, as with any disruptive innovation, there will be some drawbacks. Self-driving cars are based on machine learning technology and thus evolve over time. This means that there must be a starting point, and naturally, this starting point is not immediately safer, or even just as safe, as a human driver. As different manufacturers release their own versions of a self-driving vehicle based on similar but distinct technologies, which are then put through different methods of testing and improvement, the results of the various models are also extremely varied. And yet, they are all based on the same principles and guided by largely the same goals.
Some milestones and history
- 1920s: Mentions of experiments involving automated cars, such as Achen Motor’s “Phantom Auto”.
- 1980s: The world’s first autonomous cars appeared, for example, Carnegie Mellon University’s Navlab.
- 1990s: ParkShuttle, the first driverless vehicle that could be “aware” of its own position through magnets embedded in the road surface. It is still operational today, transporting people in the city of Capelle aan den IJssel.
- 2013: Testing self-driving cars on public roads became possible in some US states as well as in Germany, the Netherlands, and Spain. Since then, various manufacturers, some with backing from research institutes or universities, have begun testing driverless car systems.
- 2015: Alphabet’s Waymo (previously part of Google) completed the world’s first fully self-driving trip.
- 2016: Singapore launched the first self-driving taxi service.
- Since 2016: All Tesla models have the hardware necessary for full self-driving abilities. However, these capabilities cannot be engaged and are instead used to send data back to Tesla for further research.
- 2017: The Audi A8 became the first level 3 autonomous car, which doesn’t require the driver to pay attention to the environment while the car’s system is driving.
- 2018: The first human fatality occurred in an accident involving a self-driving car.
How does it work?
A question that appears rather often from the general public regarding the AI mechanism is “How can we be sure that the car will react appropriately to every single possible scenario it will be put in?”. And the answer, although not entirely comforting, is that we certainly can’t. That would go against the very nature – and purpose – of machine learning. Knowing for certain would require us to have the right answer to every potential driving scenario.
Take your normal commute to work
Take your normal commute to work. To know every possible outcome for certain would involve calculating alternative routes in case, at any point along the way, a traffic light breaks, an accident blocks a road, any of the trees you pass falls during a storm, a squirrel decides to cross the road in front of you, or an ambassadorial motorcade needs to be given priority during an unexpected and sudden peace summit. Firstly, it is impossible to ever say with 100% confidence that every possible scenario like this has been thought of. Going a step further, to compute a contingency plan for every single one of them and then either program it into a machine or hand it as a list of instructions to a human driver would be hopeless and a little bit absurd. Imagine receiving 200 volumes of booklets with the purchase of your car, containing instructions such as:
“It’s a Thursday evening, the sunlight is at 65% intensity, there is light wind, and there are between 7 and 8 cars in front of you, one of them red: before taking any left turn, decrease your speed by exactly 13.4% and pay attention to any little puddles of water you registered during your previous commute.”
Now multiply this sentence to cover every possible scenario. We would never have to think about all this before leaving our house. Should a car be expected to?
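To make the absurdity concrete, here is a tongue-in-cheek sketch of what one page of that rulebook would look like as code. Every name and number below is invented to mirror the example above; the point is that each hand-written rule covers exactly one combination of conditions:

```python
# One hard-coded rule from the imaginary 200-volume booklet. All the
# conditions and numbers are made up for illustration.

def rulebook_speed_adjustment(day, sunlight, wind, cars_ahead, red_car_ahead):
    """Look up the one rule that matches this exact scenario."""
    if (day == "Thursday" and sunlight == 0.65 and wind == "light"
            and 7 <= cars_ahead <= 8 and red_car_ahead):
        return -0.134  # reduce speed by exactly 13.4% before left turns
    # ...plus thousands of further branches for every other combination.
    raise LookupError("Scenario not covered by the rulebook!")
```

With just these five coarse variables the combinations already run into the thousands; add weather, road works, wildlife, and motorcades, and no rulebook can keep up. This is exactly the gap machine learning fills.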
Common sense guidelines
Enter machine learning. Much like humans, ML allows a system to learn based on its experiences and an initial set of “common sense” guidelines. By driving, or even simulating driving, in different conditions, the system observes the effect its actions have in the moment and on the overall journey, and it corrects itself by continuously creating new rules or, as humans would call them, “habits”. Currently, the progress of autonomous vehicles, or how much “they learn”, is measured by average miles between disengagements – or how far they can travel before human intervention is needed.
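As a minimal sketch of how that metric works, assuming we have a simple log of test drives (the figures below are invented for illustration):

```python
# Compute "average miles between disengagements" from a hypothetical
# log of test drives. The data here is made up for illustration.

test_drives = [
    {"miles": 420.0, "disengagements": 3},
    {"miles": 1150.5, "disengagements": 2},
    {"miles": 310.2, "disengagements": 5},
]

total_miles = sum(d["miles"] for d in test_drives)
total_disengagements = sum(d["disengagements"] for d in test_drives)

# Higher is better: the car drove further, on average, before a human stepped in.
miles_per_disengagement = total_miles / total_disengagements
print(f"Average miles between disengagements: {miles_per_disengagement:.1f}")
```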
Reverse engineer the human approach
In conclusion, in order to explain how autonomous cars use machine learning, we need to reverse engineer the human approach to driving and reproduce it. This can be summed up in three general steps, expanded below and illustrated in the code sketch that follows them: observe the environment; make a decision based on what you register, the rules of the game, and your own experience; and lastly, execute an action.
- Receive sensory data of the environment
  Instead of seeing and hearing, a digital system can make use of radar technologies, GPS, motion sensors, laser light, or computer vision (as in Facebook’s face recognition software). For example: how many “objects” are in the vicinity of the car and what is their position? Or what are the weather conditions?
- Process information and make sense of it
  This sensory data is usually registered in a non-human-comprehensible way, such as large-scale numeric matrices or binary strings. The system makes sense of this and identifies the components of its environment. Now it knows the objects it “saw” are roads, trees, other cars, or pedestrians. It can also calculate the probabilities of different events (e.g. is it likely that the car it saw will switch lanes, or are my chances of catching the green light higher if I switch lanes?).
- Act upon information
  Finally, combining the received information, taking into account past experiences and the rules that were programmed in, like the meaning of road signs and traffic law, the car can steer, regulate speed, or perform any other function that would also be available to a human driver.
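Putting the three steps together, here is a highly simplified sense-decide-act loop. Everything in it (the sensor readings, object classes, and actions) is a made-up placeholder; a real system fuses radar, lidar, GPS, and camera data through large learned models rather than a handful of if-statements:

```python
# A toy sense-decide-act loop mirroring the three steps above.
# All values and class names are placeholders for illustration.

def sense():
    """Step 1: gather raw sensory data (stubbed out here)."""
    return {"objects": [{"kind": "car", "distance_m": 12.0, "lane": "same"}],
            "traffic_light": "green", "weather": "clear"}

def decide(observation):
    """Step 2: interpret the data and pick a manoeuvre. Simple rules
    stand in here for what would really be a learned model."""
    for obj in observation["objects"]:
        if obj["kind"] == "car" and obj["lane"] == "same" and obj["distance_m"] < 15:
            return "slow_down"
    if observation["traffic_light"] == "red":
        return "brake"
    return "maintain_speed"

def act(action):
    """Step 3: execute the chosen action through the car's controls."""
    print(f"Executing: {action}")

# A driving system runs this loop continuously, many times per second.
for _ in range(3):
    act(decide(sense()))
```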
What are the levels of autonomy?
Now that we have established a level of familiarity with machine learning, it is time to consider that not all machine learning models are created equal. While the previous section outlined the potential of ML and its approach to autonomous driving, it is important to acknowledge that, in the current state of the art, it is usually implemented on a much smaller scale.
An all or nothing super system
Too often when talking about autonomous cars, they are regarded as an all or nothing super system that, at the flip of a switch, can have a robot take control of your car in a split second. However, it is important to be aware that driverless cars fall on a rather extensive spectrum. There are no fewer than six levels of automation, numbered 0 through 5. This classification was introduced in 2014 by SAE International. What this means is that there are different expectations, advantages, and risks for each of these levels. The levels can be simply described as “no automation” (level 0), “hands on” (level 1), “hands off” (level 2), “eyes off” (level 3), “mind off” (level 4), and finally level 5 would be “steering wheel optional”. In the state of the art today, no car has achieved the last two levels, and only a few are even considered level 3.
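For quick reference, the classification can be captured in a simple lookup table. The level numbers and nicknames come from the description above; the one-line summaries are a loose paraphrase of the common descriptions, not SAE’s official wording:

```python
# The SAE J3016 levels as a simple lookup table. Nicknames follow the
# article; the summaries are informal paraphrases, not official text.

SAE_LEVELS = {
    0: ("no automation", "the human does all the driving"),
    1: ("hands on", "the system assists with steering or speed, not both"),
    2: ("hands off", "the system steers and controls speed; the driver must stay attentive"),
    3: ("eyes off", "the driver may look away but must take over when asked"),
    4: ("mind off", "no driver attention needed within limited conditions"),
    5: ("steering wheel optional", "full automation everywhere, no human needed"),
}

for level, (nickname, summary) in SAE_LEVELS.items():
    print(f"Level {level} ({nickname}): {summary}")
```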
The handoff moment
And yet, most discussions of the morality, regulations, risks, and potential of autonomous driving happen on a conceptual level and are based on the idea of a level 4 or 5 car. A more current problem, one that self-driving car manufacturers are actively addressing, is the “handoff” moment. This refers to the moment a human needs to intervene by taking control of a car that was previously driving autonomously. As most available systems are level 2, it is necessary that the driver remain engaged and attentive during travel, even if the car is “driving itself”. However, it has been observed that the longer a person is exposed to such a car, the more comfortable they become with the machine, and they tend to behave as if in a fully automated vehicle.
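As a toy sketch of the handoff mechanics, imagine a system that tracks its own confidence and alerts the driver the moment it drops too low. The threshold, the confidence signal, and the alert are all invented placeholders here, not any manufacturer’s actual logic:

```python
# A toy model of the "handoff" moment: the system monitors a hypothetical
# confidence signal and requests human takeover when it falls too low.

HANDOFF_THRESHOLD = 0.7  # invented threshold for illustration

def monitor(confidence_readings):
    """Alert the driver as soon as system confidence falls too low."""
    for t, confidence in enumerate(confidence_readings):
        if confidence < HANDOFF_THRESHOLD:
            print(f"t={t}s: HANDOFF requested, driver must take control now!")
            return t
        print(f"t={t}s: autonomous mode OK (confidence {confidence:.2f})")
    return None

# Simulated readings: heavy rain starts and the sensors grow less certain.
monitor([0.95, 0.91, 0.82, 0.64])
```

The hard part, as noted above, is the human side: the more a driver trusts the system, the slower they may be to respond when that alert finally fires.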
Welcome to the future
So welcome to the future… or actually, a few years from now! Autonomous driving is still a few updates, paradigm shifts, outraged news articles, and groundbreaking discoveries away; but every day we are getting closer.