Autonomous Vehicle Crashes: Who Is Accountable?

Parnika Sharma is a final-year B.A. LL.B. student at Jindal Global Law School. She has developed a keen interest in exploring the areas of ADR, Environmental Law, Human Rights Law, and Family Law.
- Monday, September 20, 2021

Introduction 
A hundred years ago, autonomous or driverless vehicles seemed like a mind-boggling invention to many, but numerous companies are now testing and developing autonomous cars. Designed with the objectives of reducing accidents caused by human error, saving money spent on expensive insurance schemes, expediting commutes, reducing emissions, and increasing fuel efficiency, autonomous vehicles are purported to be better alternatives to conventional human-driven vehicles. These vehicles, powered and trained by artificial intelligence (hereinafter, “AI”), rely on a mix of cameras, sensors, image recognition systems, neural networks and machine learning algorithms, wherein camera images and sensor data regarding the car’s position, traffic lights, sidewalks, trees, other obstacles etc. serve as data sets. Neural networks use this data to identify patterns, which serve as input for machine learning algorithms that calculate and learn what action needs to be taken while driving in different situations.
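To make this perception-then-decision flow concrete, the following is a minimal Python sketch of the pipeline described above. Every element of it, including the labels, thresholds, class names and the simple rule standing in for a learned driving policy, is an illustrative assumption of this article and not any manufacturer’s actual code.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    CONTINUE = "continue"
    SLOW_DOWN = "slow_down"
    EMERGENCY_BRAKE = "emergency_brake"

@dataclass
class SensorFrame:
    # One fused snapshot of the environment (camera + lidar/radar), simplified.
    detected_object: str    # label a perception neural network would produce
    distance_m: float       # distance to the object, in metres
    speed_mps: float        # current vehicle speed, in metres per second

def perceive(raw_label: str, distance_m: float, speed_mps: float) -> SensorFrame:
    """Stand-in for the perception stage: in a real system a neural network
    classifies camera/sensor input; here the label is simply supplied."""
    return SensorFrame(raw_label, distance_m, speed_mps)

def decide(frame: SensorFrame) -> Action:
    """Stand-in for the learned driving policy, mapping a perceived scene
    to an action. The thresholds are arbitrary illustrations."""
    time_to_object_s = frame.distance_m / max(frame.speed_mps, 0.1)
    if frame.detected_object in {"pedestrian", "bicycle"} and time_to_object_s < 2.0:
        return Action.EMERGENCY_BRAKE
    if time_to_object_s < 5.0:
        return Action.SLOW_DOWN
    return Action.CONTINUE

frame = perceive("pedestrian", distance_m=20.0, speed_mps=17.0)  # roughly 61 km/h
print(decide(frame))  # Action.EMERGENCY_BRAKE
```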

Though this seemingly impossible advancement in automotive technology is becoming a reality, newer risks and challenges are also cropping up. Besides the numerous practical concerns, including the infrastructural challenges posed by obstacles like stray animals, tree branches, lane diversions due to construction work or calamities, interference with the GPS, and cybersecurity risks, several crashes have been reported owing to the vulnerability of autonomous vehicles to technical errors. Though invented to ensure safety and reduce accidents caused by human drivers, autonomous vehicles remain a risky proposition for society as a whole with regard to safety and accountability. The author contends that even if there are state-of-the-art advances to the level of inventing and operating fully autonomous vehicles, the risks would persist, requiring a case-by-case determination of fault and liability.
It is to this effect that this article makes a preliminary attempt at understanding how liability can be ascertained in cases of crashes involving autonomous vehicles.

Typologies 
While some assert that the primary source of danger lies in situations that necessitate a transition from autopilot mode to human driving, where the human driver negligently fails to take over, other instances highlight dangers resulting from the malfunctioning of the technology itself. Since present-day autonomous vehicles have achieved at most the third level of automation, as opposed to the fourth or fifth, i.e. high or full automation (wherein the vehicle can perform all driving functions), a human driver continues to be a necessity: they may not be required to monitor the environment at all times, but they must take control when prompted. As a result, the first typology of accidents involves incidents where negligence has been witnessed on the part of the backup human driver. For instance, a Tesla in autopilot mode crashed into a lane divider due to an apparent lack of care on the part of the human driver. The second typology comprises accidents wherein the automated system is at fault. For example, in another instance, a Tesla crashed into a truck due to the AI’s identification mistake. It is also possible for both typologies to be present in one incident, as in the Uber collision discussed below.
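The levels invoked above follow the SAE J3016 taxonomy of driving automation. The minimal sketch below encodes them; the one-line glosses are this article’s simplified summaries rather than the standard’s exact wording.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, with simplified glosses."""
    NO_AUTOMATION = 0           # human performs all driving tasks
    DRIVER_ASSISTANCE = 1       # a single assist feature, e.g. cruise control
    PARTIAL_AUTOMATION = 2      # combined steering and speed; human always monitors
    CONDITIONAL_AUTOMATION = 3  # system drives; human must take over when prompted
    HIGH_AUTOMATION = 4         # no human needed within a defined operating domain
    FULL_AUTOMATION = 5         # no human needed anywhere

def human_fallback_required(level: SAELevel) -> bool:
    """At Level 3 and below, a human remains the fallback driver, which is
    why the first accident typology (backup-driver negligence) arises."""
    return level <= SAELevel.CONDITIONAL_AUTOMATION

print(human_fallback_required(SAELevel.CONDITIONAL_AUTOMATION))  # True
print(human_fallback_required(SAELevel.HIGH_AUTOMATION))         # False
```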

Analyzing Self-Driving Car Collisions for Socio-legal Facets
A recent accident in Arizona involved Uber’s self-driving car, which killed a pedestrian who was wheeling her bicycle across the road in the dark. The human operator was found to have been distracted, watching a reality talent show. According to investigative assessments, the incident could have been avoided had she taken control of the car during the six seconds she had to react after the car detected an unknown object, and later a bicycle. However, it is argued that since the vehicle failed to accurately identify the “pedestrian” as an imminent obstacle until the crash, the incident should fall within typology two, whereby technological malfunction was an equal contributor alongside the negligence. While the automated system had determined the necessity of applying emergency brakes 1.3 seconds before the crash, it failed to stop the car in time because, per a statement of the corporation, that function by design could not be executed under computer control. Here, it is imperative to ask: even if the technology could not apply the brake due to its design, would it have been possible for a human to apply the brake within 1.3 seconds? It is believed that it generally takes 3.5 seconds for non-alert drivers to react, and here, since the backup driver’s attention was focused on her gadget, she could not have applied the brakes within the 1.3-second bracket detected by the self-driving system.
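The timing argument here reduces to simple arithmetic, which the sketch below makes explicit. The six-second, 1.3-second and 3.5-second figures are taken from the paragraph above; the attentive-driver figure is an assumed illustrative value, not a figure from the investigation.

```python
def can_react_in_time(window_s: float, reaction_s: float) -> bool:
    """True if the driver's reaction time fits within the available window."""
    return reaction_s <= window_s

# Figures from the article: the system flagged an object about 6 s before
# impact and determined emergency braking was needed 1.3 s before impact;
# a non-alert driver is said to need roughly 3.5 s to react.
DETECTION_WINDOW_S = 6.0
BRAKING_WINDOW_S = 1.3
NON_ALERT_REACTION_S = 3.5
ALERT_REACTION_S = 1.5  # assumed illustrative value for an attentive driver

# A distracted driver, alerted only at the braking decision, cannot act in time:
print(can_react_in_time(BRAKING_WINDOW_S, NON_ALERT_REACTION_S))  # False
# An attentive driver watching the road had the full six-second window:
print(can_react_in_time(DETECTION_WINDOW_S, ALERT_REACTION_S))    # True
```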

While the company was not charged with criminal liability for the pedestrian’s death, the human backup operator was charged with negligent homicide. Given that this was an ongoing test whose conduct and supervision lay in the hands of the corporation, shifting the whole onus to the backup driver solely on account of distraction, whilst disregarding other contributing factors like the automated driving system’s failure to detect the pedestrian, human brake reaction time, the darkness of night, etc., seems unreasonable. Besides human negligence, technological malfunction in the corporation’s automated system seemed to be an equal contributor to the crash. Observably, the factor of power hierarchy comes to the fore, wherein a corporation, being a capitalistically organized entity, holds more real power than an individual, thereby steering major decisions regarding accountability in its favour. While corporations brush off their responsibility by mandating in their user manuals that safety drivers should pay full attention during test activities, it is argued that corporations could still be held accountable for negligent homicide through vicarious liability, as they have a non-delegable duty and should thereby assume responsibility. Given their failure to maintain and inspect the self-driving system for defects, they should perhaps assume the resultant strict or product liability when the AI commits a mistake. The backup driver’s liability could also suitably be lightened, since the pedestrian was herself at fault for not using the crosswalk, besides being under the influence of cannabis and methamphetamine.

Two male passengers died when their 2019 Tesla Model S crashed into a tree and caught fire. Upon preliminary investigation, the crash was believed to have occurred with no driver behind the wheel. According to a statement by the corporation’s CEO, neither had the autopilot feature been enabled, nor had the users purchased the FSD (Full Self-Driving) feature. While this incident shows how the company impliedly passes responsibility to the purchaser-passenger, it also highlights the glaring gap between the way the technology is advertised and its actual capabilities. In contrast to the way self-driving cars are marketed as offering users an experience of futuristic autonomous driving, these corporations expect users to view the self-driving features as mere assistance rather than full autonomy. From the standpoint of liability, this alludes to the company’s intention of passing responsibility to the users themselves; from the user’s perspective, investing a substantial sum in buying a vehicle without being fully equipped to control its technological capabilities seems puzzling. Moreover, contracts and user manuals tend to aid this process of transferring potential liability from the developers to the users, with developers claiming that users had failed to adhere to certain guidelines or to purchase additional technical features required for preventing crashes. With such a transfer of liability, there is an exchange of (un)informed consent, and the onus regarding the ramifications of the dynamically functioning algorithms and automated features of the technology is transferred to the owners. It is imperative to ponder whether the law, or the corporation as part of its due diligence, should envisage a specific test for autonomous vehicle purchasers, as an equivalent of the driver’s licence test conducted for conventional vehicles, to gauge whether users fully understand the operative technology and are up to date with its constant upgradations.
Besides regulating technology users, there is also a need to regulate the technology itself, as some additional concerns persist. According to a study, autonomous vehicles have been marred by algorithmic bias, with detection systems more prone to missing dark-skinned pedestrians than light-skinned ones. Though it may seem a mere ethical challenge of prejudice, such bias ultimately ends up determining someone’s life or death, and due attention therefore needs to be paid to the colossal effect it can have on pedestrian mobility. At the core of this issue lie several dilemmas. First, who should be held accountable for potential damages if the effective logics of algorithms are unpredictable due to their black-boxed nature? Second, should such vehicles even be allowed to operate on public roads when no safety standard or legal framework will be able to plug every loophole, at least for the time being? One thing is certain: such incidents have amplified skepticism about going fully autonomous.
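To see what such a disparity means in measurable terms, the sketch below computes the gap in miss rates between two groups of pedestrians. The counts used are hypothetical, chosen purely to illustrate how such a disparity would be quantified; they are not figures from the cited study.

```python
def miss_rate(missed: int, total: int) -> float:
    """Fraction of pedestrians the detection system failed to flag."""
    return missed / total

# Hypothetical counts, for illustration only (not from the cited study).
dark_skin_miss = miss_rate(missed=12, total=100)
light_skin_miss = miss_rate(missed=7, total=100)

print(f"disparity in miss rates: {dark_skin_miss - light_skin_miss:.2%}")  # 5.00%
```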

Way Forward
There is a need to regulate autonomous vehicles. Some apprehend that any legal framework laid down will always remain a risky proposition, given the undefined outlines of this emerging and dynamic domain of technology. It is nevertheless vital to begin conceptualizing some uniform standards and regulations for autonomous vehicles, machine learning, algorithms and AI, and to keep evolving them with time to take into account dynamic technological developments. Further, though technologies like AI and machine learning would be the ones driving the vehicle, it is important to discern that humans are the ones driving these technologies. As a result, it is essential to regulate technology-making and usage, as there would be many possibilities that the developers themselves would not have considered. For now, there seems to be a global void in the legal regulation of the development and implementation of AI and ML models. In India, there exist merely a few “guiding” policy documents, like the National Strategy for Artificial Intelligence 2021, which emphasizes self-regulation of AI, whereby developers are required to self-assess, audit and regulate primary concerns like safety, reliability and accountability, amongst many others. Further, in the Indian scenario, neither is testing permitted by law, nor does autonomous driving fall under the purview of the Motor Vehicles Act, as human drivers are considered prerequisites for transportation.

Further, the government apprehends the loss of livelihood of 1 crore (10 million) people if autonomous vehicles are allowed in India. Yet several startups are working to bring autonomous vehicles to Indian roads. Therefore, it is imperative to start thinking of a legal framework that uniformly facilitates liability determinations, besides regulating the specifics of autonomous vehicles, as is done for motor vehicles through special legislation.

 

Views expressed above are solely those of the author.