
Car Minus Driver: Autonomous vehicle regulation, liability, and policy, part II


The advent of autonomous vehicles points to a seemingly inescapable shift in historical standards for auto crashes—from driver/owner liability to a product-liability regime. The emerging technology may change how traffic laws are enforced, and it will also raise privacy, criminal, insurance, and ethical quandaries.

SUMMARY: SAE-defined Levels of Automation

L0 – No automation. Human drivers are fully responsible, even if enhanced by warning (e.g., check engine) or intervention systems.

L1 – Driver assistance. System can sometimes assist with discrete tasks (e.g., steering, speed); humans are otherwise fully responsible.

L2 – Partial automation. System can perform discrete tasks (e.g., steering, speed); humans monitor and are otherwise fully responsible.

L3 – Conditional automation. System drives and monitors in some instances; humans intervene when the system requests assistance.

L4 – High automation. System drives and monitors, even absent human response—though only under certain environments and conditions.

L5 – Full automation. System does everything a human driver can do—in all conditions.


Part One of this article discussed the state and federal governments’ legislative and regulatory approaches to automated driving. But given the federal government’s apparent appetite to permit the technology’s quick deployment, before exhaustive regulations could even be crafted—and because state regulations are either non-uniform or non-existent—litigators and courts will likely play a large role in shaping the common law’s approach to automated driving.

Part Two, below, will discuss potential liability shifts, as well as automated driving’s potential effects on insurance, criminal law, privacy law, and ethics.

From human negligence to product liability

Automated-driving crashes are inevitable, so courts will likely need to set the legal and factual standards for determining liability. Depending on the level of automation, those analyses will likely differ substantially from today’s.

Fully manual Level 0 is the status today, and Level 5 will be our autos’ self-driving future. But in the interim, much of the litigation will be focused on Levels 1–4. As long as humans and computers are co-pilots, determining “cause” will become increasingly complex.

During this time of transitional human-machine cooperation, insurance and litigation will likely develop criteria for determining a crash’s cause(s). Did the human override the computer? Was the human paying attention—and to a sufficient degree? Did the computer properly alert the human? Did the computer notify the human early enough? Did the computer misinterpret data? The fact-specific queries and aggregate test cases to determine liability will likely be crafted by insurers, litigators, and regulators.

Today, one factor that insurers and litigators consider when determining potential liability is a vehicle’s make and model. As automated systems assume more control, an increasingly important factor will be the automated vehicles’ software—in terms of both the version’s components and the timeliness of its updates. Did the software company sufficiently update the algorithms to reflect evolving best practices? If so, did the manufacturer push the algorithm updates to all vehicles? If so, did the vehicle owner accept the update? Should the onus be on the owner to update the algorithms—or should the manufacturers design the systems to push updates automatically? (Modern software, operating systems, and browsers increasingly push updates without user intervention, ensuring that users integrate the most-recent versions, and their security updates, automatically.)

One insurance industry group contemplates a future where burdens of proof might shift from drivers to manufacturers, who may be required to prove that their automation did not cause a crash. The website Law of the Newly Possible provides a helpful graph mapping potential responsibility for automated-vehicle crashes, noting that a significant actor in the civil liability analysis will be insurance companies.

Steering toward liable parties

As with any emerging technology, the list of potential defendants may initially be broad. But as the law develops, liability may coalesce around certain categories of defendants. Professor Bryant Walker Smith assesses that in a single crash, potential defendants might include vehicle owners, drivers, manufacturers, sensor suppliers, and data providers.1

Vehicle owners: Override, failure to maintain

Of course, a vehicle owner might bear liability in several contexts. In Levels 1–4, owners might override the automation negligently, purposefully disable all automation, disable certain features, or fail to maintain the system (for example, by failing to accept software updates). Because owners are ostensibly the closest to the vehicle post-sale, courts and juries may find that they share potential liability.

Drivers: Disabling features, improper use

If a human “driver” is not the owner, that driver may bear distinct liability. For example, even if the owner has maintained and enabled all safety features, the driver might disable them. Or the driver may use the automation features improperly. Or the driver might override the automation, contributing to a crash. Indeed, Google reports that nearly all of that company’s automated-driving crashes have been caused by human error. So in Levels 1 through 4, the potential liability of drivers and owners can diverge.

Manufacturers and software creators: Failure to warn

Some vehicle manufacturers, like Tesla, are developing their automated-driving technologies themselves; other manufacturers are working with an outside software vendor (Fiat and Google, for example, announced a partnership in May 2016). So potential liability for an automated vehicle’s design may be split.

Today’s software disputes often hinge upon language in end-user license agreements (EULAs). For automated-driving disputes, liability may well be tied to what the manufacturers’ EULAs claim and disclaim. Do automated-vehicle purchasers own the entire vehicle—hardware and software? Or does the software remain the sole property of manufacturers? The likelier scenario is one where manufacturers own the vehicle’s software (General Motors and John Deere presently claim ownership) and customer owners merely license it.2 Despite manufacturers’ claims of ownership, the U.S. Copyright Office currently allows customer owners to break software encryption to repair vehicles3—potentially indicating that, even absent software ownership, consumers might be able to exert some control.

The interaction between software and hardware in automated vehicles raises interesting questions. Does a manufacturer that claims continued ownership of the software adopt a heightened duty to update, as well as potential liability from failure to warn?

The issue of software ownership also relates to future auto sales. Could manufacturers who claim ownership over the software prevent consumers from selling automated vehicles—as Google’s license might have prohibited sales of Google Glass on eBay?4 Would a manufacturer’s attempted prohibition of automated-vehicle sales conform to copyright’s first-sale doctrine, which permits a purchaser of a copyrighted work such as a DVD to sell it? Does it matter that without the automated-driving software, the hardware (the vehicle itself) would be rendered useless?

Data providers

In some instances, software designers (like Google) and other data providers might provide data independently, potentially resulting in a liability split. For example, automated-driving software might use a third party’s map data. If the automated vehicle follows incorrect map data—if, say, that data reflects the wrong direction on a one-way street—then the map-data provider might be liable, at least in part.

Autonomous vehicles will likely be programmed to follow the laws for any given GPS position, and they might also use third-party data to determine applicable laws. This is a complicated proposition, since any GPS point is usually governed by myriad, often-overlapping laws and regulations—local and municipal laws, county laws, state laws, federal statutes, federal regulations, and potentially others. If a data provider gets any one of those legal concentric circles wrong, or fails to timely reflect changes to legislation or case law, then the data providers could also bear some liability.
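
To make the concentric-circles problem concrete, consider a minimal sketch, assuming a purely hypothetical data schema (none of these names reflect any actual provider’s design), of how an automated-driving system might resolve the governing speed limit for one GPS point from layered jurisdiction data, and how a single bad layer poisons the result:

    # A minimal, hypothetical sketch (no actual provider's schema): resolving
    # the governing speed limit for one GPS point from layered jurisdiction
    # data. An error in any one layer flows directly into vehicle behavior.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class LegalLayer:
        jurisdiction: str               # "municipal", "county", "state", or "federal"
        speed_limit_mph: Optional[int]  # None if this layer sets no limit here
        last_verified: str              # date the provider last confirmed the rule

    def effective_speed_limit(layers: List[LegalLayer]) -> int:
        """Apply the most local layer that sets a limit. This precedence rule
        is an illustrative assumption; real conflicts among overlapping laws
        may be resolved very differently."""
        for level in ("municipal", "county", "state", "federal"):
            for layer in layers:
                if layer.jurisdiction == level and layer.speed_limit_mph is not None:
                    return layer.speed_limit_mph
        raise LookupError("No speed limit known for this location")

    # If the municipal layer wrongly reports 45 mph on a 30 mph street, the
    # error wins -- a fact pattern that could pull the data provider into the
    # liability chain.
    layers = [
        LegalLayer("municipal", 45, "2016-01-04"),  # stale or incorrect entry
        LegalLayer("state", 55, "2016-09-01"),
    ]
    print(effective_speed_limit(layers))  # prints 45: the bad datum governs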

Potential claims

During the transition to a driverless future, the legal standards associated with auto crashes—which have remained largely stable for decades—will likely evolve more quickly. As automated driving becomes more prevalent, improved safety may decrease bodily injury claims, while sensor damage and AI failures will likely increase property damage claims and product liability claims.5

Lawsuits about automated driving might involve several types of traditional claims: negligence, strict liability, failure to warn, misrepresentation, and breach of warranty. But injured plaintiffs’ claims will likely differ depending upon who they are: Automated vehicles’ owners, “drivers,” passengers, and pedestrians will likely have distinct claims.

Negligence

Negligence claims in product liability cases often hinge upon whether a defendant used reasonable care to design products so they can be used safely in reasonably foreseeable ways. For example, Minnesota law requires the following elements of negligence: (1) the existence of a duty of care, (2) breach of that duty, (3) injury, and (4) proximate cause.6 For manufacturers, “a supplier has a duty to warn end users of a dangerous product if it is reasonably foreseeable that an injury could occur in its use.”7

If an automated vehicle crashes on a wet road, plaintiffs will likely argue that manufacturers either foresaw or should have reasonably foreseen that automated vehicles would navigate wet roads, and the vehicle’s design caused injury. But in such a case, one salient question might be, “To what extent was the manufacturer’s design of the automated-driving system ‘reasonable’?” When encountering wet roads, should the system have been designed to automatically reduce vehicle speed? Prevent driving altogether?

Strict liability

Even a manufacturer that is not negligent, having exercised all requisite care, might still ship vehicles with defects that trigger strict liability. Strict liability cases generally fall into three categories: manufacturing defects, design defects, and failure to warn. The Second Restatement of Torts holds a manufacturer liable for “unreasonably dangerous” defects even if it “exercised all possible care” to prepare and sell the product—and even without a contractual or purchasing relationship with the user.8 For automated-driving cases, the vehicles’ “users” (including owners, passengers, and potentially third parties) may make similar claims. The Third Restatement9 shifts the analysis slightly toward failure to warn of “foreseeable risks”—and “foreseeability” in the fast-moving area of automated driving will likely be a moving target.

Manufacturing defects

In automated vehicles, manufacturing defects might involve either the software or the hardware. To prove a manufacturing defect under Minnesota law, plaintiffs must establish: (1) the product’s defective condition was unreasonably dangerous for its intended use, (2) the defect’s existence when the product left the defendant’s control, and (3) proximate cause.10 Hardware manufacturing defects are more familiar to product liability attorneys today, but in a world where algorithmic tweaks can make the difference between life and death, software manufacturing defects may play a larger role. What if a beta algorithm ships with a flaw that interprets a white semi-trailer as the sky? Or a typo-induced bug in the governing law (25 mph vs. 255 mph)? Are those manufacturing defects or design defects?
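
A sketch can make the 255-mph hypothetical concrete. One plausible safeguard, offered purely as an assumption about how such a system could be built, is a range check that rejects imported speed-limit data outside any plausible legal range; whether omitting such a check would be a manufacturing defect or a design defect is precisely the open question:

    # Hypothetical sanity check that might have caught the typo-induced bug
    # (25 mph entered as 255 mph). The bounds are assumptions; 85 mph is the
    # highest posted speed limit in the U.S. today.
    MIN_PLAUSIBLE_LIMIT_MPH = 5
    MAX_PLAUSIBLE_LIMIT_MPH = 85

    def validate_speed_limit(limit_mph: int) -> int:
        """Reject imported speed-limit data outside any plausible legal range."""
        if not (MIN_PLAUSIBLE_LIMIT_MPH <= limit_mph <= MAX_PLAUSIBLE_LIMIT_MPH):
            raise ValueError(f"Implausible speed limit: {limit_mph} mph")
        return limit_mph

    validate_speed_limit(25)   # passes
    # validate_speed_limit(255) would raise ValueError rather than letting
    # the typo govern the vehicle's behavior.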

Design defects

A more common claim might be that an automated vehicle has a defective design. In Minnesota, a manufacturer must take “reasonable care” when designing a product to “avoid any unreasonable risk of harm to anyone who is likely to be exposed to the danger when the product is used in the manner for which the product was intended, as well as an unintended yet reasonably foreseeable use.”11 Courts understand that the “reasonable care” analysis balances likelihood and gravity of harm with the burden of effective precaution.12

Design, security, and safety updates. Because automated-driving algorithms are improving daily, plaintiffs will likely argue that yesterday’s state-of-the-art algorithm is today’s defective design. If today’s state-of-the-art algorithms can make more subtle distinctions—but a manufacturer does not push the algorithmic improvements to a 10-year-old car’s system—is that a design “defect”? What is the burden of pushing the updated algorithm to an obsolete system? Does it matter that manufacturers could push improvements quickly and inexpensively through over-the-air updates?

Even today, this question is not hypothetical. In September 2016, Chinese researchers discovered security vulnerabilities in Tesla vehicles, permitting unauthorized remote activation of moving vehicles’ brakes. Tesla reportedly pushed a fix to vehicles within 10 days.13

Sunsetting and planned obsolescence? Consumer technology manufacturers face a similar situation with software and operating system (OS) updates. As Microsoft, Apple, and Google improve their OSes, they “sunset” the operating systems—often leaving devices inoperable or vulnerable to malicious hacking. Microsoft, for example, stopped supporting Windows XP in 2014, leaving users of that ancient, 12-year-old OS vulnerable to security attacks. Smartphones and tablets have even shorter supported-OS lifespans. But while consumers might accept that a $300 tablet they bought in 2012 reached its end-of-supported-life in 2016, they will likely demand a higher standard from a $100,000 automated vehicle. Even with today’s smartphones, the FCC has pressured device makers to provide details about the frequency and timeliness of their security updates.14 Manufacturers, regulators, and the courts will likely make similar determinations for automated vehicles. What security-update and algorithm-update frequency is “reasonable”? And can manufacturers permissibly choose to “brick” obsolete vehicles?

Failure to warn

While automated vehicles promise great safety improvements, plaintiffs will surely argue that they remain dangerous products that require warnings. In Minnesota, suppliers have a duty to warn end users of a dangerous product if “it is reasonably foreseeable that an injury could occur in its use.”15 If that duty is triggered, the supplier has two duties:

(1) the duty to give adequate instructions for safe use; and

(2) the duty to warn of dangers inherent in improper usage.16

Legally adequate warnings should:

(1) attract the attention of those that the product could harm;

(2) explain the mechanism and mode of injury; and

(3) provide instructions on ways to safely use the product to avoid injury.17

When determining whether the duty to warn exists, the “linchpin” is foreseeability. Minnesota courts analyze an allegedly negligent act, as well as the event causing the damage, by determining the following:

Court determines no duty: Connection is too remote to impose liability as a matter of public policy.

Court determines duty exists: Consequence was direct, the type of occurrence was or should have been reasonably foreseeable.

Jury considerations: Adequacy of the warning, breach of duty, causation.18

For Minnesota cases involving automated driving’s Levels 1–4, parties will likely cite the Minnesota Supreme Court’s decision in Glorvigen, which involved an airplane’s autopilot mode and the manufacturer’s alleged failure to warn. That court held that providing the pilot with written instructions was sufficient: “[T]here is no duty for suppliers or manufacturers to train users in the safe use of their product.”19

Automated-driving vehicles will likely raise distinct legal questions. For partial automation, where NHTSA Level 2 requires operator intervention on “short notice,” what type of warning sufficiently defines “short”? May the manufacturers engage in a cost-benefit analysis, opting not to spend millions of dollars to provide faster notice where any user performance improvement would be negligible?

Full automation like Level 5 will likely raise even more interesting questions. In contrast to “autopiloted” airplanes, which still require training, expertise, and shared control, what standard would apply to fully autonomous vehicles, which might require no training? What type of written warning might be sufficient? Given automated driving’s potential benefits to the disabled, must manufacturers provide warning sufficient to accommodate “drivers” of fully automated vehicles who are deaf, blind, or both?

Misrepresentation

Injured plaintiffs will likely argue that automated-vehicle manufacturers’ advertising and statements reflect fraudulent or negligent misrepresentations. For example, for human-machine cooperation in Levels 1–4, to what extent can a manufacturer permissibly assert that human control is “rare”? What if the manufacturer cites testing data—regarding automation/human handoffs, crash data, injuries, or other parameters—and real-world data varies dramatically? Can a manufacturer tout the benefits of reading email or watching movies? (In a 2016 fatal crash involving Tesla’s Autopilot, the driver was reportedly watching a Harry Potter movie.20) What evidentiary support must manufacturers compile to support claims that their automation is “safe”?

Breach of warranty

Plaintiffs will also likely argue that automated-vehicle manufacturers have breached their warranties of quality that were created through sales and marketing. The UCC’s applicable provisions include those on express warranties and implied warranties.21

Express warranties. What if a manufacturer expressly advertises that its fully autonomous vehicle is “as safe as human drivers”? Under UCC section 2–313, does the statement set some standard of warranted performance? As safe as an average driver? The best drivers? Distracted drivers? The worst? Or is the claim of “safety” so ambiguous that it’s mere advertising puffery? What if that statement comes not from a manufacturer, but from a dealer who accompanies an oral statement with a written agreement’s merger clause?22

Implied warranties. Under UCC section 2–314(2)(a)-(f), what are potential implied warranties? In this new product category, what attributes make automated vehicles “fit for the ordinary purpose for which such goods are used”? And what of other categories, which require products to “conform to the promise or affirmations of fact” and “pass without objection in the trade”?

Third-party beneficiaries? Who beyond the immediate buyer does the warranty benefit? Many jurisdictions will extend warranties to third-party beneficiaries, potentially including family, household members, guests, or any other person, though the scope of that protection varies by jurisdiction.

Who sold to whom? Another consideration is privity: What if buyers purchase vehicles directly from manufacturers? Tesla, for example, eschews dealerships for “stores,” establishing a direct relationship with buyers. In those direct purchases, third-party and pass-through warranties under UCC § 2-318 may be inapplicable. But section 2–318 would likely apply to more traditional dealership models, where warranties would probably pass through to third-party beneficiaries.

Disclaimers beware. What if the manufacturer includes a disclaimer? Indeed, most implied warranties can be disclaimed unless barred by state statute or by the Magnuson-Moss Warranty Act. Express warranties cannot be disclaimed, but manufacturers can include well-drafted merger clauses to limit potential liability to written warranties. As such, manufacturers’ and dealers’ disclaimers could well extinguish many potential warranty claims.

Contracts: effectiveness of EULAs, clickwrap, and browsewrap agreements

Agreements constitute another means for manufacturers to potentially limit or eliminate liability. Automated-driving cases might be complicated by issues that have long plagued the tech-law world: end-user license agreements (EULAs), licensing vs. ownership, and copyright.

Browsewrap and clickwrap

No doubt, some manufacturers and software developers will seek to avoid liability by requiring vehicle users to view a long license agreement (EULA) and click “I Agree.” While most software EULAs are enforceable, even if ordinary users do not actually read or understand the text, it remains to be seen whether courts will enforce them for automated vehicles. Courts may view enforcing a smartphone app’s privacy policy as wholly different from disclaiming damages in an auto crash. Could a manufacturer avoid liability by simply requiring a user, upon first “driving” an automated vehicle, to click “I Agree” as a prerequisite to movement?

Unconscionability

If EULA terms fully disclaim property damage, personal injury, or death, it’s unclear whether courts would enforce those terms, or instead strike them as unconscionable or against public policy.

Non-privity

Another issue is whether manufacturers could impose the EULA terms on non-signatories. That might include subsequent owners, non-owner “drivers,” passengers, or pedestrians.

Copyright’s first-sale doctrine

Beyond the contractual issues raised by preventing sales, another open question is whether a manufacturer’s disabling of vehicle software—effectively preventing hardware (vehicle) sales—would violate copyright’s first-sale doctrine. That common-law-turned-statutory doctrine allows people to resell their purchased physical books, physical music (e.g., CDs), and physical videos (e.g., DVDs, Blu-ray). Courts may also look to the first-sale doctrine when assessing manufacturers’ ability to prohibit resale of automated vehicles’ hardware/software combination.

Licensing cars?

Some courts have upheld software makers’ ability to use EULAs to prevent end users from reselling software discs, since the original purchasers did not own the software (which is protected by copyright), but merely licensed it.23 But those cases relate to pure software (a copyright-protected intangible product), and an open question is whether courts would similarly prevent automated-car owners from reselling their vehicles (very expensive tangible products).

EULAs prevent used car sales?

Courts’ enforcement (or non-enforcement) of EULAs might affect consumers’ ability to sell automated vehicles. Removing a Level 5 self-driving car’s software would render the car useless. So if an automated-driving manufacturer or software developer disables the software—perhaps because of a EULA issue—subsequent sales would be dramatically affected, as the car would be practically worthless. A similar situation occurred in 2014, when Google Glass Explorers, a beta-test group, began selling the $1,500 gadgets on eBay—despite Google’s terms of service, which stated that the software license was nontransferable, tied solely to the first purchaser. Many wondered whether Google would remotely disable Glass devices sold to third parties, but Google later clarified that it would not “brick” them.24 Because the issue was never litigated, it’s unclear whether the courts would have agreed with Google’s initial position. A manufacturer’s ability to pull a software “kill switch” thus raises questions of device and vehicle “ownership”: When software and hardware cooperatively create a product, remotely disabling the software (because of privity problems, non-agreement, or other EULA issues) effectively renders the hardware useless.

Global no-fault compensation act?

To address the thorny liability issues, the policy think tank RAND has suggested implementing, for automated driving, a no-fault insurance system—similar to the 1986 National Childhood Vaccine Injury Act (a no-fault system to compensate vaccine recipients with serious adverse reactions).25 Like the vaccine act, which sought to reduce the possibility of lawsuit-besieged manufacturers scaling back vaccine production, a no-fault auto-insurance act could strike a similar balance. Such a policy might encourage the development of life-saving technology, while minimizing market forces that might encourage technological stagnation. Existing no-fault laws like Minnesota’s might serve as a model.

Insurance implications: Autonomy discounts? Manual-driving penalties?

Few businesses have a greater interest in regulating automated driving than insurers. Large insurance groups have opined that, while traditional underwriting criteria (such as the driver’s number of accidents, miles driven, and parking location) will probably still apply, automated driving might lead to greater emphasis on the car’s make, model, and style.26 Also important will be the use of telematics devices (“black boxes”) that monitor driver and vehicle behavior, leading to potential premium discounts and increases.

Currently, the insurance industry offers discounts for cars with black boxes, but adoption has been lukewarm—likely because of the privacy implications. The National Association of Insurance Commissioners believes that in the next five years, usage of black boxes will increase to 20 percent of drivers.27 If the automated-driving industry’s forecasts are accurate, automated driving’s prevalence will explode. If automated vehicles achieve higher safety than today’s vehicles, insurers may well provide discounts for automation.

On the other side of the coin, Tesla CEO Elon Musk has opined that manual driving—which he views as objectively unsafe—might eventually be outlawed.28 Of course, the current vehicle fleet’s enormous economic value will make outlawing manual vehicles unlikely in the short term. But if that vision eventually proves true, we may see a world where only the wealthy can drive, and grandchildren beg to hear stories about “when you used to drive yourself!”

Criminal implications: Algorithmic law enforcement?

Observance of today’s traffic laws is encouraged through fines and potential imprisonment; tomorrow’s laws may simply be programmed. Today’s enforcement of driving laws depends upon several factors: (1) observing the behavior, (2) determining that a law was broken, and (3) gathering evidence. As such, today’s human enforcement system has limited success, if gauged by the percentage of drivers who abide by posted speed limits. With automated vehicles, by contrast, laws might be enforced universally, automatically, and algorithmically. Any potential “recklessness” can be virtually eliminated by algorithm.

Self-driving cars could be programmed to obey the law. Automated vehicles could recognize laws of overlapping jurisdictions (city, county, state, federal), import traffic rules (e.g., speed limits, turn on red), and implement those laws through the automated-driving system. In our fully automated (Level 5) future, the number of traffic violations could theoretically be reduced to nearly zero.

Of course, the systems could also be programmed to recognize legal exceptions currently permitted by human judgment. For example, ambulances would likely be permitted to speed, and vehicles might be permitted to cross a double center line to avoid stalled cars or fallen trees.
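
The paragraph above is straightforward to express in code. The following sketch, whose predicates and structure are invented for illustration and reflect no manufacturer’s actual logic, encodes one traffic rule together with its programmed exceptions:

    # Illustrative only: one traffic rule with programmed exceptions standing
    # in for human judgment. All predicates are assumptions.
    def may_cross_double_center_line(is_emergency_vehicle: bool,
                                     lane_blocked: bool,
                                     oncoming_lane_clear: bool) -> bool:
        """Default rule: never cross. Exceptions: emergency operation, or a
        blocked lane when the oncoming lane is verifiably clear."""
        if is_emergency_vehicle:
            return True
        if lane_blocked and oncoming_lane_clear:
            return True  # e.g., routing around a stalled car or fallen tree
        return False

    print(may_cross_double_center_line(False, True, True))   # True: proceed
    print(may_cross_double_center_line(False, True, False))  # False: wait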

But in partially automated Levels 1–4, where the human and computer jointly operate the vehicle, to what extent will (or should) manufacturers be required to implement and enforce driving laws? For example, should a Level 4 or Level 5 system permit a human driver to speed in order to take an injured child to the emergency room? To avoid an attacker? To make a flight? To meet a client?

In addition, if humans override the laws and systems’ algorithms, will those deviations be logged as violations? And for whom? Should manufacturers be compelled to compile and report infractions to requesting governmental entities (police, federal agencies), corporations (insurance companies, employers), or individuals (spouses, parents of teens)? Of course, these questions raise significant privacy implications.

No doubt, implementing legal code through algorithmic code is tempting, and could well bring significant benefits. But if regulators choose to implement legal codes through computer code, the questions about potential effects of those programmed laws—and their permitted exceptions and privacy implications—will be many.

Requiem for pretextual stops?

Today, many criminal charges stem from police stopping a vehicle for an alleged infraction (such as speeding, or an incomplete stop)—and then discovering evidence of a more serious crime. In 1996, a unanimous Supreme Court held that traffic stops do not violate the 4th Amendment “even if a reasonable officer would not have stopped the motorist absent some additional law enforcement objective.”29 But public defender and technologist David Colarusso correctly questions whether pretextual stops will be rendered extinct in a world where automated vehicles obey every traffic law.30 Since automated vehicles’ routes will likely be logged, will courts instead permit police to use data analytics (based on driving patterns from “flight plans”) to justify an automated vehicle’s detention? (“You probably realize that I pulled you over because you made nine five-minute stops in a neighborhood with a statistically high rate of drug crimes.”)
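
A brief sketch, with every field and threshold invented for illustration, shows how simple such “flight plan” analytics could be:

    # Hypothetical trip-log analytics: flagging a vehicle whose logged stops
    # fit a pattern. Whether such a flag could justify a detention is the
    # open Fourth Amendment question.
    from datetime import datetime, timedelta

    stops = [  # (arrival, departure, zone) drawn from an imagined trip log
        (datetime(2016, 10, 1, 21, 0), datetime(2016, 10, 1, 21, 4), "zone_17"),
        (datetime(2016, 10, 1, 21, 10), datetime(2016, 10, 1, 21, 15), "zone_17"),
    ]

    def brief_stops_in_zone(stops, zone, max_minutes=5):
        """Count stops in `zone` lasting no longer than `max_minutes`."""
        return sum(1 for arrive, depart, z in stops
                   if z == zone and depart - arrive <= timedelta(minutes=max_minutes))

    n = brief_stops_in_zone(stops, "zone_17")
    print(f"{n} brief stops logged in zone_17")  # nine such stops might be "flagged"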

Privacy implications: Why did my car drive me to McDonald’s?

The promise of automated driving is accompanied by potential privacy concerns. Automation’s benefits include a vehicle’s ability to take users on the most-efficient routes, as well as the vehicles’ communication with other vehicles to both expedite trips and increase safety. But those activities all involve data, and that data could be tracked. As such, automated driving implicates several privacy concerns.

If automated vehicles log trip routes (as Google Maps and Apple Maps currently log users’ routes, depending upon privacy settings), and if those routes are readily shared with manufacturers, will riders still have a “reasonable expectation of privacy”? Would a governmental subpoena of Google’s location logs differ from the warrantless GPS tracking of a vehicle, which the Supreme Court in 2012 held unconstitutional?31 The following year, police officers used the Stored Communications Act to request mobile-phone carriers’ location data for three suspects—without a warrant or probable cause—and the 5th Circuit held that the request was not a per se violation of the 4th Amendment.32 The 7th Circuit has held similarly.33

Would the warrantless subpoena of automated-driving logs be different? The Supreme Court has held that smartphones are “a digital record of nearly every aspect of [Americans’] lives—from the mundane to the intimate,” but would the same hold true for a utilitarian, single-purpose vehicle? Also, as the public becomes increasingly aware of the ability of software companies and governmental entities to track them, does the “reasonable expectation of privacy” change over time on a sliding scale?

Advertising may also play a role in any privacy debate. One company that has invested heavily in automated driving is Google, which derives significant revenue from advertising. Any advertising company’s business plan might include leveraging automated driving by advertising nearby services. Could an advertising company permissibly extend advertising benefits to include unplanned stops at advertisers?34 Encourage fuel-ups at advertisers? Track users’ trips to sensitive destinations (e.g., psychiatrist, mosque, gay bar, abortion clinic)? To some extent, the ability to track is already present in today’s smartphones. But automated driving—and the ability to physically change users’ locations—may heighten any perceived privacy implications.

Ethical implications: Algorithms as God

As automated vehicles make more driving decisions, developers will program algorithms to decrease the probability of injury or death. But to avoid injury, the algorithms might well cause other injuries. Consider this “trolley problem” variation:35 A single-passenger fully automated car on an icy mountain road encounters 10 pedestrians. The car calculates two options:

Option 1: Stay on the icy road, save its passenger, and kill the 10 pedestrians.

Option 2: Swerve off the mountain road, kill its passenger, but save the 10 pedestrians.

Option 2 is the more utilitarian: It results in one death, not 10. But the car’s driver and purchaser would obviously prefer Option 1, which kills more people but saves the purchaser. In this way, algorithms can and will resolve ethical quandaries in advance—something the law has considered in matters of human liability (e.g., premeditation), but rarely with machines.
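
Reduced to code, the quandary is stark. In the deliberately simplified sketch below (the weights and structure are invented for illustration, not drawn from any real system), a single tunable parameter, set by a developer long before any crash, decides who lives:

    # A deliberately simplified pre-crash chooser. Selecting the weights is
    # itself the ethical -- and likely the legal -- decision.
    OPTIONS = {
        "stay_on_road":    {"passenger_deaths": 0, "pedestrian_deaths": 10},
        "swerve_off_road": {"passenger_deaths": 1, "pedestrian_deaths": 0},
    }

    def choose(passenger_weight=1.0, pedestrian_weight=1.0):
        """Pick the option with the lowest weighted death toll."""
        def cost(option):
            tally = OPTIONS[option]
            return (tally["passenger_deaths"] * passenger_weight
                    + tally["pedestrian_deaths"] * pedestrian_weight)
        return min(OPTIONS, key=cost)

    print(choose())                       # swerve_off_road: pure utilitarianism
    print(choose(passenger_weight=20.0))  # stay_on_road: owner-protective weighting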

If a manufacturer chooses Option 1, might lawyers representing the 10 dead pedestrians claim that the fault lies with:

(1) the owner for enabling Option 1 (if it was optional)?

(2) the vehicle manufacturer for implementing Option 1 (as a default, or even as an option)?

(3) the software developer for even considering Option 1 as a factor in the first place?

Even if the manufacturer chooses to implement Option 2, lawyers will likely make similar arguments. Of course, one could argue against programming any algorithm—omitting Option 1 and Option 2. But since the vehicle would continue on the road, killing the pedestrians, that non-choice is really a choice of Option 1.

Professor Bryant Walker Smith believes that under either Option 1 or Option 2, plaintiffs will likely sue—and under both scenarios, they will likely succeed. It’s easy to understand the view that for automated-vehicle manufacturers, liability will be an inevitability.

As such, it’s not difficult to consider that manufacturers may take that liability into account when creating algorithms. Besides calculating the number of injuries or deaths (utilitarianism), one can easily imagine the algorithm considering other potential factors:36

  • Save the youngest (i.e., children)?
  • Save the middle-aged (i.e., those with the most earning power)?
  • Save the healthiest? (Or forego the best organ donors?)
  • Save advertisers?
  • Save U.S. citizens?

These questions are, no doubt, uncomfortable, perhaps even ghoulish. But with our automated-driving future careening toward us, questions about algorithmic factors to consider (and exclude) will eventually need answering—by software designers, manufacturers, regulators, legislators, or courts.

Focus on utilitarian good—or on the means to the end?

Beyond an individual case, as discussed above, experts have had fascinating discussions37 about the murkiness of the ethics of autonomous vehicles generally—including their societal effect. For example, if vehicles currently cause 32,000 deaths per year, and autonomous vehicles cut that number to a net 16,000 deaths, then society would have “saved” 16,000 lives. Which is good, right? But that is just the net rate. What if all 16,000 of the new deaths were computer-caused—and wouldn’t have happened pre-autonomy? If autonomy saves 16,000 lives, in other words, but the algorithms choose—Skynet-style—who lives and who dies, is that okay? Or does society (as reflected in our laws and regulations) prefer our pre-automation randomness and its 16,000 additional deaths? These are not easy questions.

Bringing it home

Automated driving is already here, and its development is progressing steadily. Gauging by the NHTSA’s enthusiasm for potential safety benefits, as well as manufacturers’ space race to deliver, automated driving may arrive sooner than many of us expect. If so, the question of how our laws and regulations should apply or adjust to automated driving will be firmly within the province of our legislators, regulators, insurers, litigators, and judges. It’s certain to be a bumpy ride.

DAMIEN A. RIEHL is a technology lawyer with a background in legal software design. After clerking for the chief judges of the Minnesota Court of Appeals and U.S. District Court in Minnesota, he litigated for a decade with Robins Kaplan. Damien practices in tech law, data privacy (CIPP/US), copyright, trademarks, business torts, breaches of contract, antitrust, financial litigation, and appeals.

The author is grateful for the insights and assistance of Professors Christina Kunz, Michael Steenson, and David Prince—who provided helpful guidance through the myriad contract, UCC, warranty, tort, and product liability issues.

Notes

1 http://cyberlaw.stanford.edu/blog/2015/05/tesla-and-liability

2 http://www.autoblog.com/2015/05/20/general-motors-says-owns-your-car-software/ 

3 https://www.eff.org/de/deeplinks/2015/04/help-eff-defend-right-tinker-your-car 

4 https://www.wired.com/2013/04/google-glass-resales/

5 http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2336234

6 Lubbers v. Anderson, 539 N.W.2d 398, 401 (Minn. 1995).

7 Gray v. Badger Min. Corp., 676 N.W.2d 268, 274 (Minn. 2004) (quotation omitted).

8 Restatement (Second) of Torts §402A (1965).

9 Restatement (Third) of Torts: Prod. Liab. §2 (1998).

10 E.g., Lauzon v. Senco Prods., Inc., 123 F. Supp. 2d 510, 513 (D. Minn. 2000), rev’d on other grounds, 270 F.3d 681 (8th Cir. 2001).

11 Mack v. Stryker Corp., 748 F.3d 845, 849 (8th Cir. 2014) (quoting Bilotta v. Kelley Co., Inc., 346 N.W.2d 616, 621 (Minn. 1984)).

12 Id.

13 https://www.wired.com/2016/09/tesla-responds-chinese-hack-major-security-upgrade/

14 http://www.recode.net/2016/5/9/11642640/ftc-fcc-mobile-device-security 

15 Glorvigen v. Cirrus Design Corp., 816 N.W.2d 572, 582 (Minn. 2012) (quotation omitted).

16 Glorvigen, 816 N.W.2d at 582 (quotation omitted, emphasis added).

17 Glorvigen, 816 N.W.2d at 582 (emphasis added).

18 Glorvigen, 816 N.W.2d at 582 (paraphrase).

19 Glorvigen, 816 N.W.2d at 582 (emphasis in original).

20 https://www.theguardian.com/technology/2016/jul/01/tesla-driver-killed-self-driving-car-harry-potter

21 See U.C.C. §§ 2–313 (express warranty), 2–314 (implied warranty of merchantability), 2–316 (warranty disclaimers and limitations), 2–318 (third-party beneficiaries of warranties).

22 UCC §2–313.

23 Vernor v. Autodesk, Inc., 621 F.3d 1102, 1111 (9th Cir. 2010).

24 Matt McGee, Google: We Don’t Plan To Brick Google Glass If Bought On eBay, Glass Almanac, http://glassalmanac.com/google-dont-plan-brick-google-glass-bought-ebay/1221/

25 42 U.S.C. §§300aa-1 to 300aa-34.

26 http://www.iii.org/issue-update/self-driving-cars-and-insurance 

27 Id.

28 http://www.forbes.com/sites/roberthof/2015/03/17/elon-musk-eventually-cars-you-can-actually-drive-may-be-outlawed/ 

29 Whren v. United States, 517 U.S. 806 (1996).

30 https://lawyerist.com/119062/driverless-cars-undermine-war-drugs-dispatch-future/

31 United States v. Jones, 132 S. Ct. 945 (2012).

32 In re U.S. for Historical Cell Site Data, 724 F.3d 600, 602 (5th Cir. 2013).

33 United States v. Thousand, 558 Fed. Appx. 666, 670 (7th Cir. 2014).

34 http://cyberlaw.stanford.edu/publications/what-if-your-autonomous-car-keeps-routing-you-past-krispy-kreme

35 Azim Shariff & Iyad Rahwan, Autonomous Vehicles Need Experimental Ethics (10/13/2015) (unpublished manuscript), available at http://arxiv.org/pdf/1510.03346v1.pdf; see also https://en.wikipedia.org/wiki/Trolley_problem.

36 This article discusses some of these ethical quandaries: http://www.businessinsider.com/the-ethical-questions-facing-self-driving-cars-2015-10

37 http://cyberlaw.stanford.edu/blog/2013/07/ethics-saving-lives-autonomous-cars-are-far-murkier-you-think
