
Friday, June 21, 2024

Time to Formally Define Level 2+ Vehicle Automation

We should formally define SAE Level 2+ to be a feature that includes not only Level 2 abilities but also the ability to change its travel path via intersections and/or interchanges. Level 2+ should be regulated in the same bin as SAE Level 3 systems.

There is a lot to unpack here, but ultimately doing this matters for road safety, with much higher stakes over the next 10 years than regulating completely driverless (Level 4/5) robotaxi and robotruck safety. Because Level 2+ is already on the roads, doing real harm to real people today.

First, to address the definition folks who are losing it over me uttering the term "2+" right now: I am very well aware that SAE J3016 outlaws notation like "Level 2+". My suggestion is to change things to make it a defined term, since it is happening with or without SAE's blessing, and we urgently need a consistently defined term for the things that everyone else calls Level 2+ or Level 2++. (Description and analysis of SAE Levels here. Myth 5 talks about Level 2+ in particular.)

From a safety point of view, we've known for decades that when you take away steering responsibility, the human driver will drop out, suffering from automation complacency. There have been enough fatalities from plain features said to be Level 2 (automated lane keeping + automated speed), such as cars under-running crossing big rigs, that we know this is an issue. But we also have ways of trying to address this by requiring a combination of operational design domain enforcement and camera-based driver monitoring. This will take a while to play out, but the process has started. Maybe regulatory intervention will eventually resolve the worst of those issues. Maybe not -- but let's leave that for another day.

What's left is the middle ground between next-gen-cruise-control features (lane centering + automated speed) and vehicles that aspire to be robotaxis or robotrucks but aren't quite there. That middle ground includes a human driver so the designers can keep the driver in the loop to avoid crashes and/or to take the blame for them. If you thought plain Level 2 had problems with automation complacency, Level 2+ says “hold my beer.” (Have a look at the concept of the moral crumple zone. And do not involve beer in driving in any way whatsoever.)

Expecting a normal human being to pay continuous hawk-like attention for hours while a car drives itself almost perfectly is beyond credibility. And dangerous, because things might seem fine for lots and lots of miles — until the crash comes out of the blue and the driver is blamed for not preventing it. Telling people to pay attention isn’t going to cut it. And I really have my doubts that driver monitoring will work well enough to ensure quick reaction time after hours of monotony.

People just suck at paying attention to boring tasks and reacting quickly to sudden life-threatening failures. And blaming them for sucking won’t stop the next crash. I think the car is going to have to be able to actively manage the human rather than the human managing the car, and the car will have to ensure safety until the human driver has time to re-engage with the driving task (10 seconds, 30 seconds, maybe longer sometimes). That sounds more like a Level 3 feature than a Level 2 feature from a regulatory point of view.
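
To make the "car actively manages the human" idea concrete, here is a minimal sketch of what that hand-off logic might look like. This is my own illustration under stated assumptions, not any company's design; the mode names, the 30 second grace period, and the driver_engaged signal are all hypothetical.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()           # computer driver is responsible
    TAKEOVER_REQUESTED = auto()  # human asked to re-engage; computer still responsible
    HUMAN_DRIVING = auto()       # human has confirmed they are back in the loop
    MINIMAL_RISK = auto()        # vehicle executes a minimal risk maneuver on its own

class TakeoverManager:
    """Illustrative sketch only: the vehicle, not the human, owns safety
    until the driver has demonstrably re-engaged or the vehicle has
    reached a minimal risk condition."""

    GRACE_PERIOD_S = 30.0  # assumed re-engagement budget; might need to be longer

    def __init__(self) -> None:
        self.mode = Mode.AUTOMATED
        self.request_time = None

    def request_takeover(self, now: float) -> None:
        if self.mode is Mode.AUTOMATED:
            self.mode = Mode.TAKEOVER_REQUESTED
            self.request_time = now

    def update(self, now: float, driver_engaged: bool) -> Mode:
        if self.mode is Mode.TAKEOVER_REQUESTED:
            if driver_engaged:
                # Hand over only after driver monitoring confirms engagement.
                self.mode = Mode.HUMAN_DRIVING
            elif now - self.request_time > self.GRACE_PERIOD_S:
                # No timely takeover: the vehicle stays responsible and
                # brings itself to a minimal risk condition.
                self.mode = Mode.MINIMAL_RISK
        return self.mode

mgr = TakeoverManager()
mgr.request_takeover(now=0.0)
print(mgr.update(now=35.0, driver_engaged=False))  # Mode.MINIMAL_RISK
```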

Tesla FSD is the poster child for Level 2+, but over the next 5 years we will see a lot more companies testing these waters as they give up on their robotaxi dreams and settle for something that almost drives itself -- but not quite.

The definition I propose: Level 2+ is a feature that meets the requirements for Level 2 but is also capable of changing roadways at an intersection and/or interchange.

Put simply, if it drives you down a single road, it's Level 2. But if it can make turns or use an exit/entrance ramp it is Level 2+.

One might pick different criteria, but this has the advantage of being simple and relatively unambiguous. Lane changing on the same roadway is still Level 2. But you are at Level 2+ once you start doing intersections, or go down the road (ha!) of recognizing traffic lights, looking at traffic for unprotected left turns, and so on. In other words, almost a robotaxi -- but with a human trying to guess when the computer driver will make a mistake and then potentially getting blamed for a crash.
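
To make the litmus test concrete, here is a minimal sketch of how the bucketing could be expressed. This is illustrative only, not SAE wording, and the capability names are hypothetical.

```python
def classify(capabilities: set[str]) -> str:
    """Bucket a driving automation feature per the proposed litmus test:
    changing roadways at an intersection or interchange promotes a
    feature from Level 2 to Level 2+."""
    LEVEL_2_PLUS_TRIGGERS = {
        "turn_at_intersection",
        "use_entrance_or_exit_ramp",
    }
    if capabilities & LEVEL_2_PLUS_TRIGGERS:
        return "Level 2+"
    # Lane centering, speed control, and same-roadway lane changes
    # all stay plain Level 2 under this definition.
    return "Level 2"

print(classify({"lane_centering", "lane_change_same_roadway"}))  # Level 2
print(classify({"lane_centering", "turn_at_intersection"}))      # Level 2+
```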

No doubt there will be minor edge cases to be clarified, probably having to do with the exact definition of “roadway”. Or someone can propose a good definition for that word that takes care of the edge cases. The point here is not to write detailed legal wording, but rather to get across the idea that making turns at an intersection is the litmus test for Level 2+.

From a regulatory point of view, Level 2+ vehicles should be regulated the same as Level 3 vehicles. I realize Level 2+ is not necessarily a strict subset of Level 3, but the levels were never intended to be a deployment path, despite the use of a numbering system. I think both share the same core concern: ensuring adequate driver engagement when needed in a system that is essentially guaranteed to create driver complacency and slow reaction times due to loss of situational awareness.

How does this look in practice? The various bills floating around federal and state legislatures right now should include a definition of Level 2+ (Level 2 plus intersection/interchange capability) and group it with Level 3 for whatever regulatory strategy they propose. Simple as that.

If SAE ORAD wants to take up this proposal for SAE J3016 that's fine too. (Bet some committee members are reading this — happy to discuss at the next meeting if you’re willing to entertain it.) But that document disclaims safety as being out of its scope, so what I care about a lot more are the regulatory frameworks that are currently near-toothless for the not-quite-robotaxi Level 2+ features already being driven on public roads.

Note: Based on proposed legislation I've seen, pulling Level 2+ into the Level 3 bin is the most urgent and viable path to improve regulatory oversight of this technology in the near to mid term. If you really want to do away with the levels, I have a detailed way to do that, which puts the cut-line for Supervisory at Level 2 rather than Level 2+ but is otherwise compatible with this essay. If you want to use the modes but change the cut line, let's talk about how to do that without breaking anything.

Note: Tesla fans can react unfavorably to my essays and social media posts. To head off some of the “debate” — yes, navigate-on-autopilot counts as Level 2+ in my view. And we have the crashes to prove it. And no, Teslas are not dramatically safer than other cars by any credible analysis I’ve ever seen.

Wednesday, September 27, 2023

Five questions cities should ask when robotaxis come to town

If your city is getting robotaxis, here are five questions you should be asking yourself if you are in local government or an advocate for local stakeholders. Some of the answers to these questions might not be easy if you are operating under a state-imposed municipal preemption clause, which is quite common. But you should at least think through the situations up front instead of having to react later in crisis mode.

(There might well be benefits to your city. But the robotaxi companies will be plastering them over as much media as they can, so there is no real need to repeat them here. Instead, we're going to consider things that will matter if the rollout does not go ideally, informed by lessons San Francisco has learned the hard way.)

Local citizens have questions for the robotaxi / Dall-E 2

(1) How will you know that robotaxis are causing a disruption?

In San Francisco there has been a lot of concern about disruption of fire trucks, emergency response scenes, blocked traffic, and so on. The traffic blockages show up pretty quickly in other cities as well. While companies say "stopping the robotaxi is for safety," that is only half the story. A robotaxi that stays stopped for tens of minutes causes other safety issues, such as blocking emergency responders, as well as disrupting traffic flow.

How will you know this is happening? Do you plan to proactively collect data from emergency responders and traffic monitoring? Or wait for twitter blow-ups and irate citizens to start coning cars? What is your plan if companies cause excessive disruption?

(2) How will you share data with companies that they should be using to limit testing?

For example, you might wish to ask companies not to test near parades, first amendment events, active school zones, or construction areas. Is it easy for companies to access this information, preferably electronically? Are they interested in doing that? Will they follow your requested testing/operational exclusion areas? What do you plan to do if they ignore your requests to restrict testing in sensitive areas?

(3) How will you ensure testing and operational equity?

What if disruption, testing incidents, and other issues are concentrated in historically disadvantaged areas? (This might happen due to company policy, but might instead be an unintended emergent result due to higher-than-average population density and emergency response activity in such areas.)

How will you know whether exposure to testing risk is being imposed in an equitable manner, especially if robotaxi companies claim that where they test is a trade secret?

If robotaxis are being sold based on service to the disabled and for other social goods, how will you be able to measure whether companies are living up to their promises?

(4) How will you issue traffic tickets to a robotaxi?

Some states require a moving violation citation to be issued to a natural person, but robotaxis don't have a driver. Consider proactively moving to get state-level regulations fixed sooner rather than later to correct this. Without the ability to ticket robotaxis you might find yourself without viable enforcement tools for the worst robotaxi behaviors that might occur.

(5) How can you productively engage with companies despite municipal preemption laws?

Apparently in some cities there is a good working relationship between city government and robotaxi operators. In San Francisco the city and the companies are practically at war. There are no magic solutions, but trying hard up front to build bridges before things get tense is better than reacting to bad news.

-----------------------------

Saturday, August 26, 2023

Autonomous Vehicle State Policy Issues (Talk Video)

The commercial deployment of robotaxis in San Francisco has made it apparent that many issues remain to be resolved regarding the regulation and governance of autonomous vehicle technology at the state and local levels. This talk is directed at state and local stakeholders who are considering how to set policies and regulations governing this technology.

Topics:

  • Getting past Automated Vehicle (AV) safety rhetoric
  • AV safety in a nutshell
    • Safe as a human driver on average
    • Avoiding risk transfer to vulnerable populations
    • Avoiding negligent computer driving
    • Conforming to industry consensus safety standards
    • Addressing other ethical & equity concerns
  • Policy points:
    • Societal benefits
    • Public road testing
    • Municipal preemption
    • SAE Level 2/2+/3 issues
    • Federal vs. state regulation
    • Other policy issues
  • Revisiting common myths


Washington State policy meeting in which I give this talk and answer questions: https://avworkgroupwa.org/committee-meeting/executive-committee-meeting-15




Friday, May 12, 2023

A Liability (Duty of Care) Approach for Automated Vehicles in Three Parts

I'm delighted that months of collaboration with co-author and law professor William Widen have resulted in a trio of papers that together provide a framework for resolving the vast majority of automated vehicle legal questions. Product liability will still be a thing, but that should be reserved for its more usual role, and not be the sole means of recourse for everyday Computer Driver road mishaps that will displace the everyday Human Driver road mishaps. A tort law approach based on assigning a duty of care (negligence) is a far better fit and will require far less disruption to existing legal and regulatory systems while providing a fair basis for compensation for anyone harmed by this novel technology.

The three parts are in three separate SSRN papers intended to be used as a set, although each paper is self-contained. Below are very simplified summaries to give an overview.

25 minute video with overview of the concepts:  https://youtu.be/i0ZGSEFHwE8 or https://archive.org/details/l-139-computer-driver

Podcast discussion and summary to warm up with:  https://ojoyoshidareport.com/podcast-lets-talk-about-av-liability/

(1) Computer Driver: Define the concept of synthetic negligence for a Computer Driver. A Computer Driver should be held to the same standards of negligence for harm it causes as a Human Driver. The manufacturer should be the responsible party for any negligent behavior on the part of a Computer Driver because they are the ones who should be incentivized to produce safe automated driving systems. The behavioral standard is not an "average driver" but rather a "reasonable driver."

Winning the Imitation Game: Setting Safety Expectations for Automated Vehicles, 25 Minn. J.L. Sci. & Tech. 113 (2023)  https://scholarship.law.umn.edu/mjlst/vol25/iss1/5/

Also see this shorter summary paper of the same material from WAISE 2023 for a more general and technical audience: Koopman, P. & Widen, W., "A Reasonable Driver Standard for Automated Vehicle Safety," Safecomp WAISE workshop, Sept. 2023

Also see this Jurist piece on how this might work with criminal law, especially for a Level 3 vehicle in which the driver has been told it is OK not to watch the road: Widen, W. & Koopman, P., Level 3 Automated Vehicles and Criminal Law, Jurist, Aug. 2023

Video about why using product liability for computer drivers is likely to break the court system: https://www.youtube.com/watch?v=WhtxTDRvTOE

(2) Liability Transfer Rules: Define the rules of transfer of liability between the Human Driver and the Computer Driver depending on the operational mode per the summary figure below. Shared responsibility (Human Driver supervises safety of Computer Driver) requires special attention to avoid the person being used as a moral crumple zone. Two key rules come into play: the need for effective driver monitoring, and the obligation of a person to intervene when it is reasonable to expect that they would know to do so.
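
As a rough illustration of the flavor of these transfer rules (a simplified sketch in my own words, not the paper's actual text), the attribution logic might look something like this; the mode names and parameters are assumptions for illustration.

```python
from enum import Enum, auto

class DrivingMode(Enum):
    CONVENTIONAL = auto()  # human drives; no automation engaged
    SUPERVISED = auto()    # computer drives; human supervises (Level 2/2+ style)
    AUTONOMOUS = auto()    # computer drives; no human supervision expected

def presumptively_responsible(mode: DrivingMode,
                              monitoring_effective: bool,
                              intervention_was_reasonable: bool) -> str:
    """Simplified sketch of liability attribution by operational mode.
    The actual framework in the papers has many more conditions."""
    if mode is DrivingMode.CONVENTIONAL:
        return "human driver"
    if mode is DrivingMode.AUTONOMOUS:
        # The manufacturer answers for negligent computer driving.
        return "manufacturer (computer driver)"
    # Shared responsibility: the human is on the hook only if effective
    # driver monitoring was in place AND a reasonable person in their
    # position would have known to intervene.
    if monitoring_effective and intervention_was_reasonable:
        return "human driver"
    return "manufacturer (computer driver)"

print(presumptively_responsible(DrivingMode.SUPERVISED, False, True))
# -> manufacturer (computer driver): without effective monitoring the
#    human supervisor is not used as a moral crumple zone.
```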

The Awkward Middle for Automated Vehicles: Liability Attribution Rules When Humans and Computers Share Driving Responsibilities
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4444854
https://www.americanbar.org/groups/science_technology/publications/jurimetrics/2024/jurimetrics-fall-2023/

(3) Definitions and Statute Outline: Create a set of definitions and statute-oriented rules to make the first two papers more actionable. We envision this as a robust starting point for state legislatures that find this approach useful.

Liability Rules for Automated Vehicles: Definitions & Detail
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4444848
https://scholar.smu.edu/scitech/vol27/iss1/5/





Friday, December 2, 2022

Blaming the autonomous vehicle computer as a regulatory strategy

The AV industry has been successfully pursuing state regulations to blame the computer for any crashes by saying that the Automated Driving System (the computer) is considered to be the driver of any AV operating on public roads. That way there is no person at fault for any harm to road users. Yes, really, that is what is going on.[1]

Person pointing a finger at a computer

The general AV industry tactic when lobbying for such rules is to argue that when fully automated driving is engaged the “driver” is the driving computer (the ADS). Any remote safety supervisor is just there to lend a hand. In some states a remote human support team member need not have an appropriate driver license, because it is said that the ADS is the driver. Superficially this seems to make sense. After all, if you are a passenger who has paid for a retail robotaxi ride and the AV breaks a traffic law due to some flaw in the design, you as the passenger should not be the one to receive a ticket or go to jail.

But the tricky bit is that ADS computers are not afforded the legal status of being a “person” – nor should they be.[2] Corporations are held to be fictitious people in some legal circumstances, but a piece of equipment itself is not even a fictitious person.[3]

If a software defect or improper machine learning training procedures result in AV behavior that would count as criminally reckless driving if a human were driving, what happens for an AV? Perhaps nothing. If the ADS is the “driver” then there is nobody to put on trial or throw into jail. If you take away the driver’s license for the ADS, does it get its license back with the next software update?[4] Where are the repercussions for an ADS being a bad actor? Where are the consequences?

Blaming the ADS computer for a bad outcome removes a substantial amount of the deterrence that negative consequences normally provide, because the ADS does not fear being harmed, destroyed, locked up in jail, fined, or having its driver’s license revoked. It does not feel anything at all.

A related tactic is to blame the “operator” or “owner” for any crash. In the early days of AV technology these roles tended to be either the technology developer or a support contractor, but that will change over time. Contractors perform testing operations for AV developers. Individual vehicle owners are operators for some AV technology road tests. Other AV operators might work through a transportation network service. Someone might buy an AV in the manner of a rental condo and let it run as a robotaxi while they sleep.

Imagine an arrangement in which an investor buys a share in a group of robotaxis as might be done for a timeshare condo. A coordinator lines up independent contractors to manage investment money, negotiate vehicle purchases, arrange maintenance contracts, and participate in a ride-hailing network. Each AV is the sole asset of a series LLC to act as a liability firewall between vehicles. The initial investor later sells their partial ownership shares to an investment bank. The investment bank puts those shares into a basket of AV ownership shares. Various municipal retirement funds buy shares of the basket. At this point, who owns the AV has gotten pretty complicated, and there is no substantive accountability link between the AV “owner” and its operation beyond the value of the shares.

Then a change to the underlying vehicle (which was not sold as an AV platform originally, but rather was adapted by an upfitter contractor) impairs functionality of the aftermarket add-on ADS manufactured by a company that is no longer in business. If there is a crash who is the “operator?” Who is the “owner?” Who should pay compensation for any harm done by the AV? If the resultant ADS behavior qualifies as criminally negligent reckless driving, who should go to jail? If the answer is that nobody goes to jail and that only the state minimum insurance of, say, $25K pays out, what is the incentive to ensure that such an arrangement is acceptably safe so long as the insurance is affordable compared to the profits being made?

While the usual reply to concerns about accountability is that insurance will take care of things, recall that we have taken some passes at discussing how insurance and risk management can be an insufficient incentive to ensure acceptable safety, especially when it only meets a low state minimum insurance requirement[5] originally set for human drivers who have skin in the game for any crashes.


[1] For a compilation of US state laws and legislative hearing materials see: https://safeautonomy.blogspot.com/2022/02/kansas-av-regulation-bill-hearings.html

[2] Despite occasional hype to the contrary, machine learning-based systems are nowhere near achieving sentience, let alone being reasonably qualified to be a “person.”

[3] I am not a lawyer (IANAL/TINLA), so this is a lay understanding of the rules that apply and nothing in this should be considered as legal advice.

[4] In several states an ADS is automatically granted a driver’s license even though it is not a person. It might not even be possible to take that license away.

[5] IIHS/HLDI keeps a list of autonomous vehicle laws including required insurance minimums. The $1M to $5M numbers fall short of the $12M statistical value of human life, and are typically per incident (so multiple victims split that maximum). In other states the normal state insurance requirement can apply, which can be something like a maximum of $50,000 per incident and might permit self-insurance by the AV company, such as is the case in Kansas: https://insurance.kansas.gov/auto-insurance/ This insurance maximum payout requirement is less than the cost of a typical AV. In practice it might be the case that victims are limited to recovering insurance plus the scrap value of whatever is left of the AV after a crash, with everyone else being judgement-proof.
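
To make the per-incident cap problem concrete, here is a back-of-the-envelope calculation using the figures cited in this footnote; the victim counts are hypothetical and the numbers are illustrative only.

```python
# Illustrative arithmetic only, using the figures cited in this footnote.
VALUE_OF_LIFE = 12_000_000    # approximate statistical value of a human life
PER_INCIDENT_CAP = 1_000_000  # a typical AV-specific per-incident minimum

for victims in (1, 2, 4):     # hypothetical victim counts
    per_victim = PER_INCIDENT_CAP / victims
    print(f"{victims} victim(s): ${per_victim:,.0f} available each, "
          f"vs. ${VALUE_OF_LIFE:,.0f} statistical value of life")
```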

Tuesday, November 1, 2022

Update on 2022 PA AV bill

PA AV legislation update: The PA Senate Transportation Committee passed the PA House bill on AVs out of committee, sending it to the full PA Senate for a vote. This is an amended version of the PA HB 2398 bill that has already passed the PA House. https://www.legis.state.pa.us/cfdocs/billinfo/billinfo.cfm?syear=2021&sInd=0&body=H&type=B&bn=2398

This new version (PN3563) fixes many of the issues I've noted in previous bills, which is good news. But some issues remain. A summary:

- Permits operation of an AV without a driver.
- A responsible Certificate Holder must be a company (it being "a person" is struck out).
- Human safety driver, if any, must be an employee or contractor.
- Permits platooning, but seems to require a driver in each vehicle.
- Requires reports of crashes involving harm or damage to property to PennDOT
- Public posting of contact info for crash claims
- Registration requirement with PennDOT includes safety management plan
- $1M insurance requirement (not as high as it might be, but better than many other states)

Some not-so-great parts
- Municipal preemption clause (but at least now it allows local authorities to enforce existing laws)
- PennDOT appears to have very limited ability to reject registrations
- Any computer driver automatically gets a driver license with no testing and no independent assessment of driving skill required
- No requirement to follow industry safety standards (J3016 is mentioned, but is NOT a safety standard)
- An advisory committee that reports on economic benefits (good) -- but no apparent charter for safety concerns
- Looks really difficult to suspend or revoke a certificate in practice. It is unclear that a severe crash is enough to do that, at least immediately (it seems only after a criminal conviction of killing someone -- which might take years). I guess we'll have to see how soft law works in this area over time.
- The Certificate Holder (remember that is a company, not a person) is considered the driver, and is specifically called out to be cited by police for violations. So if there is a criminal driving offense committed by an automated driver (something a human driver would go to jail for) there is quite literally nobody (no natural person) held responsible. Will be interesting to see how PennDOT handles driver license points for moving violations, if at all.

Hearing video here starting at time 2:12:

More about various state bills including this one here:

Saturday, June 11, 2022

PA House HAV bill progress & issues

This past week the PA House Transportation Committee significantly revised and then passed a bill granting sweeping authority to operate Highly Automated Vehicles (HAVs) in the Commonwealth of Pennsylvania. That includes light vehicles, heavy trucks, and platoons of heavy trucks. This bill has evolved over time and seems a better candidate for a final law than the older, much more problematic Senate bill.

It has some good points compared to what we've seen in other states, such as an insurance minimum of $1M, and placing PennDOT in regulatory control instead of Public Safety. By way of contrast, in other states the State Police are in charge of regulating (they have no real ability to do so, and realize this, but the HAV industry pushed to have it this way), and insurance minimums are as low as $25K or $50K. So we're doing better than some other states. 

The PA bill establishes an advisory committee, but it is unclear whether it will have much power, and its current mandate is to report benefits of HAVs without being tasked to report on any public safety concerns (or benefits).

However, a great number of issues identified in earlier versions have not been addressed. A very significant concern is a municipal preemption clause. For example, cities are prevented from curtailing testing of experimental, immature HAVs in school zones, even with no safety driver in the vehicle. 

There are a number of other serious concerns unaddressed by this bill especially in the area of safety, but also with regard to compensation, transparency, inclusion, and non-discrimination: see Five Principles for Regulation of Highly Automated Vehicles.

A particularly problematic issue boils down to who goes to jail if an HAV has a software defect that results in driving behavior that would, for a human driver, result in criminal penalties. This bill is at least clear about the "certificate holder" being on the hook, whereas other states are silent on this topic. However, it is unclear whether a certificate holder who might have no understanding of HAV software and no ability to influence HAV operational safety is the right person to send to jail for reckless driving by an HAV that results in deaths. (Yes, this is a difficult problem. But the HAV industry has had years and years to address concerns such as this. Apparently their plan is to deflect blame away from the tech companies and onto whoever ends up holding the bag as a certificate holder.)

The manner in which HAV bills are being pushed through the legislature is also extremely disappointing. The Senate rammed through a bill that was not disclosed until the last minute with no public hearing. To its credit, the House did have a public hearing on its initial bill. However, this very significant modification was kept secret until the Transportation Committee meeting at which it was voted through along party lines. The industry certainly knows what is in the bills and amendments well in advance, because we have had public events in which they were thanked for helping author them. If they really believed that public safety was #1 and stakeholder engagement mattered, the industry would not be resorting to releasing legislation in the dead of night to ram it through votes.

I fully expect this will be pushed through both House and Senate in the most industry-friendly way that can be managed. The PA Governor has already promised to sign HAV legislation. We're going to be stuck with regulations that disproportionately favor the industry so that they can attempt to reap the IPO and SPAC compensation rewards of chasing a trillion dollar market while exporting risks of public road testing to other road users. (Some companies are doing better than others on safety, but the industry as a whole, as represented for example by AVIA, is quite clearly all about the $$$ and not really about public safety.)

It is sad to see legislators seduced by the "jobs and economic opportunity" mantra of the HAV industry while most companies are merely paying lip service to safety. But I guess this is how it will be until we have a sufficiently high number of crashes and other adverse newsworthy events to put on public pressure to do better.

Note: there is one clause that is a potentially HUGE issue. Page 29 lines 16-18 appear to exempt any vehicle that is not strictly commercial (in practice anything except heavy trucks) from the requirement for a PennDOT certificate. It is unclear whether this is an intentional loophole or just a drafting mistake. Either way it should be fixed. 

Monday, February 28, 2022

Law Commission report on Automated Vehicles

Comprehensive effort by Law Commissions (England, Wales, Scotland) on Automated Vehicle regulations. A lot of work, and some valuable insights.


Web site with all documents:   https://www.lawcom.gov.uk/project/automated-vehicles/

Summary of findings: https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2022/01/AV-Summary-25-01-22-2.pdf

Full report: https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2022/01/AV-Summary-25-01-22-2.pdf

Excerpt from the summary:

"Key recommendations

Throughout this project, we have strived to keep safety at the forefront of our proposals, while also retaining the flexibility required to accommodate future development.

Our recommendations cover initial approval and authorisation of self-driving vehicles, ongoing monitoring of their performance while they are on the road, misleading marketing, and both criminal and civil liability. They include:

    • Writing the test for self-driving into law, with a bright line distinguishing it from driver support features, a transparent process for setting a safety standard, and new offences to prevent misleading marketing.
    • A two-stage approval and authorisation process building on current international and domestic technical vehicle approval schemes and adding a new second stage to authorise vehicles for use as self-driving on GB roads.
    • A new in-use safety assurance scheme to provide regulatory oversight of automated vehicles throughout their lifetimes to ensure they continue to be safe and comply with road rules.
    • New legal roles for users, manufacturers and service operators, with removal of criminal responsibility for the person in the passenger seat.
    • Holding manufacturers and service operators criminally responsible for misrepresentation or non-disclosure of safety-relevant information."

Thursday, February 10, 2022

AV Regulation Bill Hearings: case study materials for regulations, democracy and how the sausage gets made

Kansas has been holding hearings about setting Autonomous Vehicle regulations. The situation provides an interesting, publicly accessible view into what's happening across the US. It also provides an exercise in transparency compared to the process used at about the same time in Pennsylvania.

Hand writing the word Regulations


Note: see update at end for pointers to more comprehensive treatment created after this blog post was written.

A simplified summary of the Kansas situation:

  • Walmart is using its lobbying weight to expand its "middle mile" automated delivery trials from Arkansas to also include Kansas. The bill is being considered at their request. But they confess the technology is beyond them.
  • Gatik is the Walmart partner doing the heavy lifting here. They tell a plausible story about care and diligence, phased incremental approach, etc., although without mentioning conformance to industry standards. However, regardless of the story they tell, the bill would permit other companies to also operate in the state, so what is in the bill matters beyond Gatik's statements.
  • The proposed bill is short (2 pages) and is aimed at narrow permission to do things that look like depot-to-store un-crewed logistics runs on Kansas public roads.
  • The bill omits a number of important things pointed out by various parties, so the question is how short it can be while still hitting the important points. It's clear that various parties will hammer out a revision, and we'll hear more about it later. But for now, the discussion itself is illuminating.
The testimony covers a lot of ground in terms of real issues that need to be resolved. It's worth listening to all three sessions to get a feel for what people care about.  

Kansas AV regulation case study materials:

  • The bill itself: http://kslegislature.com/li/b2021_22/measures/sb379/
  • Written testimony link
  • First day testimony (proponents -- 51 minutes):  https://youtu.be/VH-zSQDp3sk
    • 00:50: Briefing on bill
    • 03:35: Walmart
    • 14:50: Gatik government relations presentation
    • 24:20: Gatik technical presentation
    • 38:20: Q&A
    • Note: video corruption at approximately 48:30-49:10 is in original stream
  • Second day testimony (neutral/opponents -- 50 minutes): https://youtu.be/zdSwKPDicZQ
    (Times are from start of hearing. Add 1 minute for YouTube stream time)
    • 00:20: Briefing on bill
    • 01:45: Kansas DOT
    • 08:40: Kansas Police organizations
    • 19:50: Teamsters/organized labor
    • 26:35: League of Kansas Municipalities
    • 30:05: Ford Motor Company
    • 33:35: Michael DeKort 
    • 40:55: Alliance for Automotive Innovation
    • 47:15: General Q&A
  • Third day testimony (opponents -- 32 minutes): https://youtu.be/jHrkH0oSB5A
    (Times are from start of hearing. Add 3 minutes for YouTube stream time)
    • 00:40: Autonomous Vehicle Industry Association
    • 07:10: AUVSI
    • 10:30: Technet
    • 12:45: Trial Lawyers / American Association of Justice
    • 17:40: Trial Lawyers/ Hutton & Hutton  (voice only in original stream)
    • 24:40: Carnegie Mellon University
    • 30:15: General Q&A
  • A new bill was introduced in March 2022: SB 546
  • Kansas house hearing on SB 546, March 29, 2022
    • Be sure to listen to the exchange at about 2:02:00 about whether time pressure should be a reason to move the bill forward while it is still in need of work.
  • Finally a Kansas bill was signed by the governor: KS SB 313
Suggested activities for educators regarding SB 379:
  • Warm up by reading a news article on the bill, such as: https://www.repairerdrivennews.com/2022/02/14/kansas-bill-would-bring-in-av-tech-for-middle-mile-goods-transportation/
  • All students should watch the proponent first day testimony and read the bill.
  • Assign each student to watch one speaker for the second/third day videos and summarize the position of that person in two to five points.
    • Re-enact the hearing with each student giving just their summary points to boil it down to essentials.
    • Who was the speaker advocating for (general public, special interest group, government function, etc.)?
    • What do you think that person's testimony contributed to the conversation about the bill within the committee that was different than other speakers?
    • Did you find that person credible and effective at advocating for their position? Why?
  • Assign five groups within the class, and assign each group one of the five regulatory topic areas listed in this blog post.
    • Each group comments on strengths and weaknesses of the bill according to the five topic areas.
    • Which of the elements within the assigned group of regulatory issues is addressed by the Walmart/Gatik testimony?
    • Pick just one weakness of the bill in your group's assigned area and propose a sentence to add to the bill to fix that one point.
  • Discussion:
    1. Do you think the bill's scope is just right, too narrow, too broad, or should not be passed at all?
    2. Do you think Gatik's deployment will be acceptably safe? Why?
    3. Do you think deploying this technology on Kansas roads provides a reasonable tradeoff between economic benefit and jobs to the region vs. risk to other road users and potential issues?
    4. How strong a regulatory role should Kansas DOT be given?
    5. If you could only change one thing about the bill, what would you change?
    6. Look for your speaker/topic in the follow-up bill SB 546. Did your speaker's position on your topic change? Why do you think that happened?
Pennsylvania AV regulation case study materials:
Other bills from the 2021-2022 legislative season:
Background materials:
Updates added ad hoc: incomplete and not in the time capsule

Download time capsule for off-line educational use:   https://archive.org/details/2022-02-av-regulation-case-study

Notes:
  • My Kansas testimony is at the very end of the last day of the first set of senate hearings. It needs to be clearly stated that my testimony is in the specific context of this bill and the situation on the ground there. If it were a broader situation I'd have a lot more to say per the five topic list link.
  • You will notice some people, especially in earlier hearings, wearing masks due to the Covid19 pandemic. (This is obvious at the time I create this page, but in a few years it might not be as obvious what is going on with this.)
The Oklahoma bill has a definition of the driver being the computer, which is both problematic and illustrative of the outstanding state law challenges. I'm documenting it here for reference:

Quoted starting at bottom of page 7:
1.  The automated driving system is considered the driver or operator, for the purpose of assessing compliance with applicable traffic or motor vehicle laws, and shall be deemed to satisfy electronically all physical acts required by a driver or operator of the vehicle; and
2.  The automated driving system is considered to be licensed to operate the vehicle. 

Thursday, February 3, 2022

Five Principles for Regulation of Highly Automated Vehicles

February 1, 2022

FIVE PRINCIPLES FOR REGULATION OF HIGHLY AUTOMATED VEHICLES

Philip Koopman∗ & William H. Widen**

Providing economic opportunities and jobs is important, but these benefits alone are not an equitable and fair exchange for the use of public highways as a testing ground, and the exposure of the public to an increased risk of harm from highly automated vehicle (HAV) accidents during development. The principles below summarize concrete actions that HAV legislation should include to provide an appropriate balance.

Typeset Acrobat version here: https://archive.org/details/av-regulation-principles-koopman-widen-2022-02-01

1. Safety

Operational Safety: commit to a testing and deployment standard for automated driving system (ADS) performance of “substantially better than the average unimpaired human driver (AUHD)” rather than the vague “sufficiently safe” criteria currently in use by HAV companies.

Metrics: state the metric(s) used to make the performance comparison between an ADS and the AUHD.

Industry Standards: commitment to follow published professional industry standards appropriate for the type of HAV operation (i.e., testing, trials or deployment) including: SAE J3018, UL 4600, ISO 21448 and ISO 26262, as informed by AVSC00001201911 AVSC Best Practice for safety operator selection, training, and oversight procedures for automated vehicles under test and AVSC0007202107 AVSC Information Report for Adapting a Safety Management System (SMS) for Automated Driving System (ADS) SAE Level 4 and 5 Testing and Evaluation.

Regulatory time-out: allow all regulators (federal, state, and local) to temporarily enjoin HAV operations (including testing) as a response to discrete events which raise safety concerns and to revoke testing permits for significant adverse events as well as for patterns of unsafe HAV operations or operations that violate law.

2. Responsibility for Loss (Compensation)

Duty of Care: Any company or person operating an AV should, by statute, owe the public a non-delegable duty of care for safe operation.

Insurance Levels: Use a required insurance amount not less than the US DOT’s statistical value of a human life ($11.6 million per person in 2020, adjusted annually). Do not allow self-insurance by HAV companies which are not cash flow positive. Establish clear single-risk limits and treatment of multiple injuries/fatalities in a single event, with separate treatment for special cases, such as truck platooning which might be expected to cause greater harm on a per incident basis.

Owner/Occupant Liability: Clarify the liability of an owner/occupant of an HAV for loss caused when an ADS is properly engaged. May an injured party sue the owner/occupant in the absence of negligence and collect against the owner’s insurance policy or are claims limited to ADS designers, manufacturers and upfitters for defects? Explain interaction with negligence claims for maintenance, upgrades, and installation failures.

Single Collection Point for Plaintiffs: Allow a plaintiff to collect from a single responsible defendant in full, with joint and several liability and a right of contribution from other responsible parties (regardless of whether the liability is based in negligence or product defect).

Insurance Policy with Plaintiffs in Position of a Named Insured: Allow plaintiffs to make a claim against HAV company insurance policies as if they were a named insured to facilitate prompt payment of medical bills and other amounts so plaintiffs do not face financial pressure to settle for less than full compensation.

Eliminate Barriers to Collection for Full Damages: As applicable, lift caps on damage collection by potential plaintiffs with lower cost insurance policies, such as limited tort option policies in Pennsylvania.

3. Transparency

Periodic reporting: require timely publication of HAV safety performance, including how actual performance of an HAV compares with promised performance relative to the AUHD.

Crash Data: require timely publication of all crash data and police reports for any incident involving a HAV.

Publicize testing, trials, and deployment plans: a requirement to issue publicly a testing plan prior to commencement of testing, trials, or deployment that explains how the HAV company addresses safety, responsibility for loss, transparency, inclusion, and non-discrimination. Allow local regulators to review, comment upon, and approve the testing plan.

Identification of times, manner and locations for testing and trials: requirement to post information about the times and locations for testing and trials so the public understands areas of increased risk.

Don’t Promulgate Myths: advocates for an HAV law, rule or regulation, should not use marketing and outreach materials containing untruthful or misleading statements or material omissions, such as the myth that 94% of serious crashes are caused by human error.

Disclose a Harm Now, Benefits Later Justification for Deployment: If an HAV company plans to deploy HAVs at scale when the technology does not perform substantially better than an AUHD, or its level of performance cannot be determined with reasonable certainty, disclose this fact in the testing plan so the public can evaluate a policy which exposes the public to harms on the promise of future improvements to HAV technology.

4. Inclusion

IEEE 7000: require HAV companies to follow the IEEE 7000 Standard Model Process for Addressing Ethical Concerns procedures to identify all interested parties affected by the testing, trials, and deployment of HAV technology (including emergency service responders, hazardous material transporters, and those in a work zone). Address the concerns of all stakeholders in public plans and with respect to testing, trials, and deployments.

Adaptation to Local Conditions: give municipalities the power to limit HAV operations based on time, manner, and location to account for local conditions (such as preventing truck platooning in certain neighborhoods or driverless trials in school zones; account for local special events; account for recent incidents which present safety concerns); remove blanket pre-emption by state law to allow municipalities to exercise this power.

Review Global Approaches to HAV Regulation: as part of approving any bill, review the recommended approaches to HAV regulation taken in other jurisdictions, such as the joint report of the UK law commissions on automated vehicles, and the EU’s ethics guidelines for the development of trustworthy artificial intelligence.

5. Non-Discrimination

No Concentrated Operations in Areas of Concern: Municipal review of both public testing and trial plans and the time, manner and locations for testing and trials to ensure that at-risk communities, such as low-income neighborhoods, do not experience a disproportionate increased risk of loss from HAV operations (without mitigating the identified risk) as opposed to other communities not deemed to be at special risk.

Mitigation Strategies: If a need arises to concentrate certain types of testing and trials in an at-risk community, use other means to mitigate the adverse impact of the concentration—such as conducting testing only with an on-board safety driver, use of two safety drivers, or elimination of uncrewed trials.

Special Review of Related Laws: review applicable existing state and local laws to identify situations in which HAV operations (including crashes) might adversely impact low-income and other at-risk persons, such as limited damage collection for low-cost policies, and the difficulty and expense of pursuing a product defect claim given legal complexity and number of possible defendants.

Justice40: situate and harmonize non-discrimination efforts within the broader framework of social justice and equity values, including those contained in the Federal Government’s Justice40 initiative.

Monday, January 17, 2022

Comments on PA SB-965 Regulating Autonomous Vehicles

This posting lists a number of major issues and concerns with the Pennsylvania legislation introduced in January 2022 to change how the Commonwealth regulates Highly Automated Vehicles (HAV). It deals with technical issues with the bill and especially aspects of safety. The short version is that this bill should NOT be passed without SIGNIFICANT changes.

PA SB 965 home page

  • News article: AV company's point of view 
    • Emphasizes: economic opportunity, jobs. Some talk about safety but no substance (just as there is no safety substance in the bill).
    • "incredibly collaborative" -- but collaborators limited to AV companies, bill sponsors, PennDOT -- not inclusive of Pittsburgh city government, safety advocates, consumer advocates stakeholders

Some text (shown in green in the original post) has been added or modified since the initial post in response to comments.
As of the afternoon of 1/26/2022 the bill has been amended in committee and sent to the PA Senate. This writeup has not yet been updated to reflect any substantive changes.

OVERVIEW:

Overall, this bill suffers from a significant imbalance regarding the tradeoff between risks and benefits to Pennsylvania residents. Companies stand to benefit enormously by using public roads as a living laboratory to assist in developing automated vehicle technology. However, other road users are not afforded commensurate safety and compensation assurances for the risks they take from sharing the road with immature technology that presents a real and present danger to other road users. Especially of concern is the palpable risk to vulnerable road users (pedestrians, cyclists, etc.).

Additionally, much of the bill is drafted in a way that creates loopholes and exploitable ambiguities. Regardless of root cause and intent, the net effect is to weaken the bill dramatically, to the point that it amounts to not much more than a free pass for HAV companies to do whatever they want. (Even insurance requirements, such as they are, would be scant deterrent to a company chasing a trillion dollar market. Even a $5 million payout to an injured party is too easily characterized as a cost of doing business when development involves a multi-billion dollar war chest.)

This bill, if passed as-is, will dramatically weaken what safety protection Commonwealth residents have under the current HAV regulatory posture. Passing this bill as it is now will be actively harmful -- it is worse than doing nothing. 

The press conference made a big point that local HAV companies were consulted for the bill -- but not public interest stakeholders. This entire situation really brings into question whether those companies should be considered trustworthy in terms of sincere regard for the public interest vs. their own profit motives (see SSRN paper sections II, IV, and Conclusions).

There is no simple fix for this bill. It needs a major overhaul. 

EXAMPLE ISSUES:

Below are some (potentially aggressive) interpretations of this bill to point out how problematic it is in its current form:

  • This bill permits HAV testing with absolutely no oversight by PennDOT, no permit, and no "license test" of the HAV so long as an insurance minimum coverage of $1 per incident is met. 
  • This bill permits an unlicensed 12-year-old to act as a Level 3 fallback/safety driver in a heavy truck HAV transporting radioactive waste through local towns and urban centers, with no possibility for municipalities to prohibit that activity.
  • HAVs are not required to follow most traffic laws that apply to human drivers, and cannot be pulled over by anyone other than PA State Police for traffic violations. Except they don't even have to pull over for the PA State Police, because they are not required to yield to emergency vehicles, nor to stop when pulled over.
  • An HAV operator could hire judgement proof remote vehicle operators outside the US who are immune to state law sanctions and incentives. Traffic ticket points would be meaningless, as would responsibility for criminal driving behavior such as driving under the influence. (For that matter, how would you give someone in a foreign country a breathalyzer test?)  Companies could just replace that driver after every infraction -- assuming that police can even identify who the remote operator is, which is not provided for in the bill.
  • A passenger in the back seat of a robotaxi might be found culpable for an injury or death caused by a crash. The mechanism would be designating the robotaxi as Level 3, with a click-through rider agreement (that is not read in practice) making passengers responsible for pressing an obscurely placed red panic button in the event of a control malfunction. Such robotaxis could operate even if that passenger is a minor, impaired, or does not have a driver license.
It seems likely that at least some of the above is due to drafting errors and internal inconsistencies. For example, the $1 insurance limit might be intentional, or might just be worded incorrectly as a maximum instead of a minimum $5 million insurance requirement. One would hope that a responsible company would not act so egregiously -- but not all companies are responsible, and mere hope is not a responsible plan for public safety.

These issues illustrate the point that this bill doesn't just need a little clean-up of loose ends. It has major issues and should be entirely revisited.

DETAILED NOTES:

Below are notes on issues found in a review of the bill language:

  1. Page 2, lines 7-15: This invokes SAE J3016 levels 3, 4, and 5 for coverage of the bill. This leaves open the "Level 2 Loophole" by which any company can attempt to claim they are simply putting a driver assistance feature on the road when they are really testing dangerously immature highly automated driving features. Tesla is already doing this with FSD (SSRN paper Section I.B). The bill instead should also apply to any testing of pre-series production features that control vehicle steering and require a test driver.  See: SSRN paper section V.
  2. Page 2, lines 20-21: does not put any requirements on what it means to be an "authorized affiliate." For example, Tesla FSD beta testing using owners who are not trained as testers could be considered "authorized" affiliates. Anyone operating a test vehicle should be qualified per SAE J3018 and/or have a special tester license issued by PennDOT.
  3. Page 3 line 13: requires annotating "highly automated vehicle" status on a title. While that is reasonable, in the context of this bill it seems that the only requirement to begin testing is to find a way to get this on a title. There is no permitting or approval process mentioned for anything except truck platoons. There should be a formal permitting process with follow-up monitoring of operational safety given the immature state of the technology.
  4. Page 3 line 29 – page 4 line 3: authorizes platoons of no more than 2 total vehicles with second vehicle having no driver. At the moment this is more stringent than the rest of the bill, but should be revised per other notes to take into account that if the nonlead vehicle loses track of the lead vehicle (for example, lead vehicle exits the roadway for some reason) the nonlead vehicle is now an HAV operating solo.
  5. Page 4 lines 4-11: authorizes platoon operations by filing and reviewing a plan with PennDOT. All HAV operations should file a plan, not just platoons. PennDOT should be required to review all plans at submission and periodically (not less frequently than one year), with ability to revoke licenses at any time for safety concerns.
  6. Page 4 line 16 – Page 7 line 8: requires HAV to stop at accident scenes and owner/registrant report to police with insurance information. (This is repeated in different variations). An additional requirement should be added to ensure that the identity of any remote person who might be held responsible for or have contributed to an accident is promptly identified and made available (e.g., via video conference) to police at the scene just as if they were physically present in the car.
  7. Page 7 lines 12-19: Removes the requirement for following regulations that apply to a (human) driver in the vehicle without imposing an obligation for equivalent means for the ADS to meet the intent of any relevant laws or regulations.
    • As a simple example, autonomous trucks carrying fuel, explosives, or radioactive materials would not be required to stop at railroad crossings per PA Code Title 75 Ch 33, 3342, because that requires the driver to stop, not the vehicle. This issue seems likely to be pervasive, and might result in HAVs not being required to follow traffic laws in instances they are phrased as driver actions rather than vehicle actions.
    • As another example HAVs would not be required to yield right of way to emergency vehicles, because that is a driver responsibility per PA Code Title 75 Ch 33, 3325.
    • The bill also exempts any requirement "not relevant for an ADS," which is a subjective determination that impairs certainty of interpretation. If an ADS thinks it can ignore a red traffic signal or stop sign without a collision because it has a 360 degree field of view from a roof-mounted lidar, one could say that obeying such traffic signals (or performing full and complete stops) are "not relevant" for that ADS.
    • The wording is confusing and ambiguous. If the ADS computer box is physically located in the driver seat as a matter of convenience, is that a "driver seated in the vehicle" since the ADS is considered the driver? 
    • The bill on the one hand says an ADS does not need to follow human driver rules (pg 7 lines 17-18), but then says the ADS might be the driver (pg 11 lines 20-24). So is the ADS the driver just for liability? Does it actually need to follow the parts of traffic laws that apply to human drivers even though the bill says human driver rules don't apply to an ADS? This needs to be resolved.
  8. Page 8 line 25: quotes J3016 definition of DDT. Given the history of J3016 “lateral vehicle motion” is not the same as “turning" but rather can be interpreted to mean lane-keeping. This ambiguity can be exploited as part of the Level 2 Loophole. If "turning" at intersections is meant to be included in the DDT and permit Level 2 systems to act in a way indistinguishable from Level 3 systems, that should be stated. It would be better to limit Level 2 systems to those that are not capable of making turns at intersections. (See SSRN paper section V.)
  9. Page 9 lines 15-18: incorrectly characterizes the Level 3 intervention wording. SAE Level 3 does not require notification by the ADS for "evident" failures. So any assumption that a Level 3 ADS will always notify the driver when to take over is false. Nor does it deal with the issue of how a remote teleoperator is supposed to detect "kinesthetically apparent" failures if not in the vehicle.  This definition should be changed to require driver notification by the ADS of all failures relevant to ability to safely drive the vehicle. See SAE J3016 Myth #6 here.
  10. Page 9 line 29 - Page 10 line 3: This section gives permission for operation with no human driver on board, but does so in an overly broad manner. Given the wording, one might assume that a person performing the DDT fallback is NOT considered a "driver," since DDT fallback is, per J3016, distinct from performing the Dynamic Driving Task (DDT).
  11. Page 9 line 29 - Page 10 line 3: Part (1) requires the vehicle to be "capable of operation" in compliance with regulations, but does not actually require it to operate in compliance with traffic laws, regulations, and relevant ordinances.
  12. Page 10 lines 11-15: requires a minimum risk condition (MRC) be achieved in case of an ADS failure. It does not require an MRC if a failure of non-ADS equipment renders the vehicle unsafe to drive, but should. It also does not require an MRC if the vehicle is forced out of its ODD, e.g., by a sudden rain squall or a construction zone it is not designed to handle. (This is related to the "evident failure" exclusion of Level 3 previously discussed.)
  13. Page 10 lines 11-15: does not require that the MRC be free of unreasonable risk. As written, an acceptable MRC could be a panic stop in front of a heavy truck, or stopping on the proverbial railroad crossing with an oncoming train. This should be changed to further require the MRC to be free of unreasonable risk to the maximum degree practicable given vehicle and environmental conditions.
  14. Page 10 lines 16-18: requires a licensed driver. However, that licensed driver might be abroad in a foreign labor market (for example, in Central America). This raises numerous issues that need to be dealt with, such as what happens if such a driver behaves in a reckless or intentionally malicious manner with a vehicle on Commonwealth roads. How do you give such a driver a breathalyzer test (even a teleoperator who is within PA)? How do you arrest such a driver when called for? At the very least, PennDOT should be given broad latitude to restrict and license teleoperator drivers, as well as some mechanism to ensure that companies will be held responsible for the behavior of their remote drivers.
    • The bill includes the phrase "The highly automated vehicle driver on board must be properly licensed under this title," which might or might not be interpreted to require a PA state license. Without clarification, AV operators might find it easier to argue in favor of foreign teleoperators to exploit this situation.
    • For Levels 4 & 5 there is no requirement for a driver, nor a requirement that the vehicle be safe (per SAE J3016), so the requirement for a human driver to be licensed can be disclaimed simply by avoiding calling a vehicle a "test" vehicle.
    • For Level 3, a "highly automated vehicle driver" (page 2 lines 16-19) drives or is in physical control of a vehicle, which describes the DDT, but not necessarily the fallback task required in a Level 3 vehicle.
    • Since the bill does not distinguish test vehicles, the situation is further muddled.
  15. Page 10 line 24 – page 11 line 5: this exempts school buses, which is good. Placarded loads should also be excluded. Per SAE J3016, the ADS has no responsibility whatsoever to monitor vehicle condition or other aspects of vehicle safety (e.g., loose loads, cargo fires, tires on fire, lost wheels). Hazardous loads should not be carried without human supervision at this early stage of deploying the technology.
  16. Page 11 lines 6-17: this authorizes transportation network services without safety drivers. Such services should require SAE Level 4 or 5, and exclude SAE Level 3. Operating a transportation network service at Level 3 creates significant risk of using passengers who are not able to ensure safety in practice as a "moral crumple zone" -- blaming them for failing to avoid a crash in situations where it is unreasonable to expect them to do so. At the very least, minors should be prohibited from riding unescorted in a Level 3 transportation network vehicle.
  17. Page 11 lines 20-26: declares that, for the purpose of licensing drivers, the ADS is the driver if there is no safety driver. This is an exceptionally bad idea.
    • Is PennDOT required to administer driver tests to an ADS? This needs to be clarified, especially since in practice there is likely to be a requirement to re-license after every software update, which might occur daily. A driver license test cannot ensure HAV safety. Rather, conformance to the industry standard safety requirements proposed by the NHTSA ANPRM on a Framework for Automated Driving System Safety should be required (including at least ISO 26262, ISO 21448, and ANSI/UL 4600) for any production ADS (one without a human safety driver monitoring its operation as a test platform).
    • Any unlicensed driver, including a minor, could load ADS software onto a cell phone and start driving themselves around the city with no need for a driver license and no HAV testing permit. This is not just a theoretical possibility. Such systems are for sale now, advertising compatibility with over 150 vehicle types starting at $1100 (https://comma.ai/), and could plausibly be claimed as SAE Level 3 systems by someone registering a vehicle in PA. (To be clear, these systems are likely to be unsafe if operated as Level 3 systems, but J3016 does not require safe operation or even driving competence for an ADS to be assigned a particular automation level. Such a Level 3 or even Level 4 claim could be made within the scope of SAE J3016 as invoked by the bill, either now or in the readily foreseeable future.)
    • It is unclear what effect this would have on insurance issues, but it might result in literally having no natural person and no company to pursue for recompense after a major injury or fatality to a vulnerable road user who does not carry automotive insurance (and indeed neither owns nor drives a car). I point out a concern here, and defer to legal experts on this matter.
  18. Page 11 lines 20-26: directs police to cite the HAV "owner or registrant".
    • Why should a vehicle owner who puts their currently unused vehicle into a transportation network pool be held responsible for traffic violations committed by an ADS, when they likely have no understanding of how it has been programmed and certainly have no control over its driving? Either this clause assumes that only large, sophisticated companies will own or register vehicles (which is likely to be false soon for Level 3), or it is intended to transfer liability onto hapless vehicle owners who have no practical ability to control the actions of their HAV's software.
    • And what if the "registrant" is an anonymous series LLC in another state (or country) with no assets other than the car, while the citation is for a serious offense such as vehicular manslaughter? How are those whose actions, or perhaps even negligence, might have contributed to easily avoidable harm held accountable?
    • A different approach should be used: hold accountable a party who has an actual understanding of, and/or the ability to control, the behavior and software quality of the HAV.
    • Again, I point out concerns, but defer to legal experts on this matter.
  19. Page 11 line 27 – page 12 line 3: If there is a remote safety driver, police citations are issued to that remote safety driver. Again, what if the safety driver is outside the US? What is to stop a company from employing semi-disposable safety drivers who are simply fired after an infraction, with another driver hired to replace them after each traffic ticket? (This is another potential manifestation of moral crumple zone abuse.) PennDOT should be given broad latitude here to, for example, create a point assessment system charged against the large company designing and/or operating an AV rather than solely against a safety driver for illegal behavior.
  20. Page 12 lines 9-17: $5M insurance seems grossly inadequate. (I'm aware other states do this, but they have historically been dealing with testing using human safety drivers, not uncrewed HAVs.) A few issues that come to mind are listed here, and there are no doubt others.
    • The $5 million amount is a "not to exceed." This implies it can be lower, and the bill prohibits carrying more insurance than that. As written, this requirement permits as little as state minimums or even $1 of insurance, depending on whether it is considered to preempt other insurance wording and what other actions PennDOT might take.
    • There is no insurance requirement for the Fallback operator, who might be found at fault by police but not qualify as a "driver." Thus in a mishap there may be no insurance coverage to pursue.
    • There is no mechanism for vulnerable road users without their own automotive insurance policy to collect without initiating a lawsuit (e.g., for hospital co-pays for injuries they have suffered). Perhaps HAV policies should treat such crash victims as named insured parties.
    • The US DOT sets the value of a statistical life at $11.6 million for 2020, with yearly increases. The insurance minimum should be at least this amount, and should be per person, not per incident as the bill wording seems to imply. (See the first arithmetic sketch after this list.)
  21. Page 12 lines 18-25: this is incredibly broad preemption language. It should be entirely removed so that municipalities such as the City of Pittsburgh can adopt further ordinances to ensure safety responsive to local conditions. Some concerns include:
    • This could be interpreted to mean that HAVs do not have to obey local traffic rules.
    • This could be interpreted to mean that HAVs do not have to obey traffic directions from local police or school crossing guards.
    • This could be interpreted to mean that HAVs do not have to obey local police stops.
    • In general, this clause is an egregious example of autonomandering. (See SSRN paper section IV.)
  22. Page 12 lines 27-30: gives PennDOT permission to regulate or publish guidance “consistent with this title.” PennDOT should additionally have the ability to issue at least temporary rules stricter than this bill in the interest of public safety, and should be explicitly given permission to require operating permits. As it stands, since no permitting process is required, PennDOT would seem to have no authority to stop reckless testing conducted by a bad actor, and neither would cities, due to the preemption clause. PennDOT should be given authority to issue operating permits subject to quarterly review, along with authority to establish data reporting requirements so that it can monitor that HAV operation is free from unreasonable risk.
  23. In contrast to a statement made at the press conference, SAE J3016 is not a safety standard in any way, shape, or form. Rather, conforming to the letter of the standard for assigning SAE Levels (and no more) is guaranteed to result in vehicles that are unsafe in practice. (For example, J3016 does not require driver monitoring for safety drivers.) Instead, conformance to SAE J3018 for safe road testing should be required.
  24. There is no requirement anywhere for HAVs to be safe. There should be a requirement that the company self-certify that it has a credible safety case showing its vehicles will be at least as safe as an average unimpaired human driver in a comparable ODD, and that it will update the safety case and re-certify that statement to PennDOT quarterly.
  25. Page 7 line 28 – page 8 line 7: a "set of devices or components" that produces a rear view image is considered a mirror for purposes of Federal law. For context, this refers at least to FHWA width requirements, which exclude mirrors from truck width. Big rig truck mirrors stick out at a height well above cars and are not hugely threatening compared to, say, a wider truck, so as a practical matter that rule makes sense. Replacing side view mirrors with cameras is also sensible, but as drafted this provision seems to have issues:
    • It appears to override Federal regulations. Is it really PA's place to tell FHWA what does and does not count as a mirror? Why not kick this to PennDOT and get a ruling from FHWA as a matter of enforcement policy rather than law?
    • As a practical matter, the FBI does not (as far as I know) enforce truck equipment regulations. So this seems to have the effect of telling the PA State Police not to enforce truck width requirements so long as the driver can claim that whatever is sticking out from the truck is "similar" in function to a mirror -- without restriction. One wonders if this is an attempt to normalize evading FHWA truck size requirements at the state-by-state level.
    • The apparent reason for a width rule is to set a boundary for switching to "wide load" procedures, which has to be set at some definite number -- 2.6 meters, as it turns out. With this loophole, you could put a two-foot-wide armor-plated camera box sticking out the side of a truck down where it is at eye level to passenger vehicle drivers (wouldn't want expensive cameras to get damaged), effectively making trucks wider than they would otherwise be. (See the second sketch after this list.) Even if somehow it is OK to override FHWA rules, there should be a restriction that such a device be comparable in size, placement, and crash risk to the 3-D footprint of a mirror, or the like.
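
To make the insurance-cap concern in item 20 concrete, here is a minimal back-of-the-envelope sketch. The $11.6 million value of a statistical life and the $5 million cap come from the discussion above; the three-fatality crash is purely a hypothetical number I chose for illustration.

# Back-of-the-envelope comparison of the bill's per-incident insurance cap
# against the US DOT value of a statistical life (VSL) cited in item 20.
# The three-fatality crash size is a hypothetical chosen for illustration.

VSL_2020 = 11.6e6            # US DOT value of a statistical life, 2020 dollars
PER_INCIDENT_CAP = 5.0e6     # the bill's "not to exceed" insurance amount

hypothetical_fatalities = 3  # e.g., one crash involving a multi-occupant car

implied_harm = hypothetical_fatalities * VSL_2020
uncovered = implied_harm - PER_INCIDENT_CAP

print(f"Implied harm:        ${implied_harm / 1e6:.1f}M")      # $34.8M
print(f"Per-incident cap:    ${PER_INCIDENT_CAP / 1e6:.1f}M")  # $5.0M
print(f"Uncovered shortfall: ${uncovered / 1e6:.1f}M")         # $29.8M

Even a single multi-fatality crash would leave most of the implied harm uncovered under a per-incident cap, which is why a per-person minimum tied to the VSL makes more sense.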
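
Similarly, here is a rough sketch of the width arithmetic behind the camera-as-mirror loophole in item 25. The 2.6 meter limit and the two-foot device width come from the discussion above; treating the device as mounted on one or both sides is my own assumption for illustration.

# Rough width arithmetic for the camera-as-mirror loophole in item 25.
# The 2.6 m width limit and 2 ft device width are from the text above;
# one-side vs. both-sides mounting is an assumption for illustration.

FT_TO_M = 0.3048

body_width_limit_m = 2.6       # federal width limit, excluding mirrors
device_width_m = 2 * FT_TO_M   # two-foot-wide camera enclosure (~0.61 m)

one_side = body_width_limit_m + device_width_m
both_sides = body_width_limit_m + 2 * device_width_m

print(f"Device width:               {device_width_m:.2f} m")  # 0.61 m
print(f"Effective width, one side:  {one_side:.2f} m")        # 3.21 m
print(f"Effective width, two sides: {both_sides:.2f} m")      # 3.82 m
# Either way, the effective footprint exceeds the 2.6 m threshold that
# would normally trigger wide-load procedures.
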
Some issues might be fixed easily, but others will likely require more pervasive changes. Be that as it may, a bill that has this much of an effect on public safety should be easy for all stakeholders to understand. And it certainly should not be misrepresented by its sponsors (as was done, for example, by claiming SAE safety standards had been incorporated).

It seems likely that bill advocates will argue that deficiencies in the bill might be made up for by PennDOT guidance and policies. Maybe so -- and maybe not.  

It is also possible that other sections of the vehicle code interact with this bill in counter-intuitive ways. Still, since we've already heard clearly inaccurate statements about the bill, we need more transparency as to what the actual regulatory outcome will be, especially since the bill was developed without broad stakeholder input.

Regardless, it seems ill advised to put out a bill with known deficiencies in hopes that they might be made up for later. Why not get it right in the first place?