The production of electric vehicles is on the rise and while many may be led to believe that their invention was the result of emerging environmentally responsible practices, the truth is that their conception dates back to the 19th century.
The first electric carriage was developed in 1891. Development stalled, however, because batteries of the era could not supply enough power for long-distance driving. Internal combustion engines displaced electric vehicles because they were more efficient and allowed drivers to reach higher speeds.
Unfortunately, higher speeds are often linked to higher accident rates. Sebastian Thrun, a former Google VP, was inspired to found Google X and lead the development of Google’s self-driving car after losing his best friend in a car accident; he committed himself to saving the roughly one million people who die in car wrecks each year. His goal was to invent a car capable of driving itself better than a human could.
Aside from making driving safer, self-driving vehicles aim to make commutes more enjoyable by allowing passengers to shift their attention to other activities. Although it is not their primary goal, they could also provide private transportation for the elderly or disabled.
For many years, self-driving cars appeared to be an unattainable technology. In fact, related technology has been introduced gradually since the 1970s, beginning with cruise control, which Ford implemented in its new models. Cruise control grew smoother and smarter over the years until engineers at Google were able to deploy far more advanced features on their pod-like vehicle.

So how does it work? The car may not be visually appealing, but all the technology mounted on it serves a single purpose: data collection.
- Radar sensors:
  - Recognize fast-moving highway traffic
  - Can “see” in all directions, surpassing a human’s capacity for situational awareness
- Video cameras:
  - Interpret traffic signals
  - Generate a three-dimensional map of the surrounding environment
  - Contrast collected information with high-resolution maps
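The division of labor between the sensors above can be sketched as a simple fusion step. Everything in this sketch (the detection classes, field names, and thresholds) is hypothetical and purely illustrative, not taken from any real autonomous-driving stack:

```python
from dataclasses import dataclass

# Hypothetical sensor readings; real stacks fuse far richer data streams.
@dataclass
class RadarDetection:
    distance_m: float   # range to the nearest object
    speed_mps: float    # relative speed (negative = closing); radar excels here

@dataclass
class CameraDetection:
    label: str          # e.g. "traffic_light_red", from image recognition
    confidence: float   # classifier confidence, 0..1

def should_brake(radar: RadarDetection, camera: CameraDetection) -> bool:
    """Toy fusion rule: brake if radar sees a fast-closing object,
    or the camera is confident it sees a red light."""
    closing_fast = radar.speed_mps < -10 and radar.distance_m < 50
    red_light = camera.label == "traffic_light_red" and camera.confidence > 0.9
    return closing_fast or red_light

print(should_brake(RadarDetection(30, -15), CameraDetection("pedestrian", 0.5)))  # True
```

The point of combining sensors is redundancy: radar still reports range and closing speed when a camera is blinded, which is exactly the weather limitation discussed below.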
Despite a US$150,000 investment in equipment, these devices are still limited in many respects. For instance, cars that rely on cameras to acquire information about the environment may malfunction if bad weather blurs the camera’s vision. Since these cars are also programmed to follow standardized traffic signals, such as traffic lights or road markings, they cannot operate in countries where infrastructure is not optimal or in situations where traffic is directed by humans.
Regarding public policy, data gathered by these vehicles could be provided to governments to improve road infrastructure. The technology still faces many hurdles, however: many governments do not allow testing on public roads, and insurance companies are still unsure how liability would be assigned in case of an accident.
THREE LAWS OF ROBOTICS
Given the recent Tesla and Uber accidents that have made headlines, fear is rising among consumers. Perhaps artificial intelligence has not yet advanced as far as car manufacturers would like. If the line between reality and science fiction were blurred, one of the first people to turn to would undoubtedly be Isaac Asimov.
Isaac Asimov was convinced that the Three Laws of Robotics should be the cornerstone of Artificial Intelligence, and when applied to the decisions a car should make when experiencing an accident, they make perfect sense.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
And a lesser-known Zeroth Law, which takes precedence over the other three:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
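The priority ordering of the laws can be made concrete in code. The following is a toy sketch of my own devising (the action flags and evaluation scheme are illustrative, not any real vehicle logic): each law is a predicate, and among candidate actions the car prefers the one whose worst violation is the lowest-priority law, or none at all.

```python
# Toy encoding of Asimov's laws as predicates in priority order.
# An "action" is a dict of hypothetical flags; defaults mean "no violation".
LAWS = [
    ("0: protect humanity",  lambda a: not a.get("harms_humanity", False)),
    ("1: protect humans",    lambda a: not a.get("harms_human", False)),
    ("2: obey orders",       lambda a: a.get("obeys_order", True)),
    ("3: self-preservation", lambda a: not a.get("destroys_self", False)),
]

def first_violation(action):
    """Index of the highest-priority law the action violates (len(LAWS) if none)."""
    for i, (_name, ok) in enumerate(LAWS):
        if not ok(action):
            return i
    return len(LAWS)

def choose(actions):
    """Prefer the action whose worst violation is the lowest-priority one."""
    return max(actions, key=first_violation)

# Swerving disobeys the driver (law 2); staying the course hits a pedestrian (law 1).
swerve = {"obeys_order": False}
stay = {"harms_human": True}
print(choose([swerve, stay]))  # the car swerves: law 1 outranks law 2
```

Even this tiny model shows the fragility discussed below: the outcome depends entirely on flags the robot may not be able to determine from limited information.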
The processes a car’s software follows during a crash are unclear to many consumers. Artificial intelligence can make biased decisions if it is trained on biased data, and this is where the Trolley Dilemma comes in. From a utilitarian perspective, would a car choose to save the greater number of people or its driver? And would differently programmed AI affect a car’s marketing and consumers’ purchase decisions?
In practice, the Three Laws are not applied to artificial-intelligence systems, since they fail even in fiction. Asimov himself crafted stories that exposed the loopholes in these laws. For instance, a robot, or a car in this case, that does not possess enough information may be unable to make a “judgement” that complies with them.
Alan Turing, considered by many the father of AI and theoretical computer science, designed a test to evaluate whether a machine exhibits intelligent behavior: a machine passes when its responses are indistinguishable from a human’s. Applied to the road, this relates to the fact that, unlike humans, driverless cars do not have the option of ignoring traffic laws, which were supposedly designed to make roads safer.
Nevertheless, according to a multinational study performed by Cisco Systems, people are still reluctant to place their lives in the hands of machines.
This trend is closely linked to media coverage of autonomous vehicles involved in accidents. Tesla came under public scrutiny after a man was killed in an accident in March. A later inspection of the vehicle revealed that the driver had disregarded the car’s warnings to place his hands on the wheel. In a blog post, Tesla expressed its concern that the public never hears about the accidents that did not occur because Autopilot stepped in; media warnings against using Autopilot may therefore prove counterproductive and even dangerous to consumers. According to Tesla, if Autopilot’s level of safety were applied to the 1.25 million yearly automotive deaths, 900,000 lives would be saved. Even though Autopilot cannot prevent all accidents, it makes them less likely to occur.
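Tesla’s figures imply a roughly 72% reduction in fatalities. The back-of-the-envelope arithmetic, using only the two numbers reported above, is:

```python
# Back-of-the-envelope check of Tesla's claim as reported above.
yearly_deaths = 1_250_000  # worldwide automotive fatalities per year
lives_saved = 900_000      # deaths Tesla claims Autopilot-level safety would prevent

reduction = lives_saved / yearly_deaths
remaining = yearly_deaths - lives_saved
print(f"implied fatality reduction: {reduction:.0%}")  # 72%
print(f"remaining deaths per year:  {remaining:,}")    # 350,000
```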
With current technology, human drivers still need to be able to step in. The autonomous Uber accident reported in March, in which a pedestrian was killed, happened because the backup driver grew overconfident in the technology and chose to watch America’s Got Talent behind the wheel rather than pay attention to the road.
Engineers believe that once vehicles stop relying on humans as backup, the accident rate will decrease, but the technology has not yet reached that point. Ethical dilemmas aside, driverless vehicles still need to sort out many issues before being accepted by governments and consumers. Today there is less skepticism about the limits of the technology, but machine learning still needs to be able to respond to all kinds of road infrastructure and situations. Machines will still kill people, but the toll will be a fraction of the deaths caused by human error.