Monday, January 31, 2022

Simplified Proposal for Vehicle Automation Modes

Vehicle Automation Modes emphasize the responsibilities of a self-driving vehicle user

By: Dr. Philip Koopman, Carnegie Mellon University

Now that the AV industry has backed away from SAE Levels, especially the highly problematic Level 3, it's time for a fresh look at operating modes of vehicle automation technology.

If you follow self-driving car technology it’s likely you’ve encountered the SAE Levels of automation. The SAE Levels range from 0 to 5, with higher numbers indicating driving automation technology with more control authority (but not a linear progression, and not necessarily higher levels of safety). Unfortunately, in public discussions there is significant confusion and misuse (even abuse) of that terminology. In large part that is because the SAE Levels are primarily based on an engineering view rather than the perspective of a person driving the car.

We need a different categorization approach. One that emphasizes how drivers and organizations will deploy these vehicles rather than the underlying technology. Such an approach needs to emphasize the practical aspects of the driver’s role in vehicle operation.

If you doubt that another set of terminology is needed, consider the common informal use of the term “Level 2+,” which is undefined by the underlying SAE J3016 standard that sets the SAE Levels. Consider also the fact that different companies mean significantly different things when they say “Level 3.” In some cases Level 3 follows SAE J3016, meaning that the driver is responsible for monitoring vehicle operation and being ready to jump in — even without any notice at all — to take over if something goes wrong. In other cases vehicles described as Level 3 are expected to safely bring themselves to a stop even if the driver does not notice a problem, which is more like a “Level 3+” concept (also undefined by SAE J3016).

Even more importantly, the SAE Levels say nothing about all the safety relevant tasks that a human driver does beyond actual driving. For example, someone has to make sure that the kids are buckled into their car seats. To actually deploy such vehicles, we need to cover the whole picture, in which driving is critical but only a piece of the safety puzzle.

With the recent apparent removal of support for the SAE J3016 level system by the Autonomous Vehicle Industry Association, the time is ripe for revisiting how we talk about the different operational modes for vehicle automation. 

We start with the premise that for practical purposes all new vehicles will have some sort of active safety system such as Automated Emergency Braking (AEB), and so we skip a category specifically for vehicles with no driver assistance. (One could use a "No Assistance" mode if desired, but it adds unnecessary clutter for most purposes.) We also include a distinct category for testing to help close the SAE Level 2 Loophole, which lets companies test immature technology without regulatory oversight simply by (improperly) claiming that the presence of a safety driver makes an autonomous driving feature testbed SAE Level 2. There is no mapping to the SAE Levels, because that would import baggage that could compromise safety.

The Four Operational Modes

In creating a driver-centric description of capabilities, the most important thing is not the details of the technology, but rather what role and responsibility the driver is assigned in overall vehicle operation. We propose four categories of vehicle operation: 
  • Driver Assistance
  • Supervised Automation
  • Autonomous Operation
  • Vehicle Testing

Driver Assistance:

Woman driving with both hands on the wheel.

Driver Assistance: A licensed human driver drives, and the vehicle assists.
  • Human Role: Licensed driver performs driving task
  • Vehicle Role: Active Safety, Driver Support, Driving Convenience
The technology’s job is to help the driver do better by improving the vehicle’s ability to execute the driver’s commands and by trying to mitigate potential harm from some types of impending crashes. Convenience features might also be provided, excluding sustained automated steering.

Capabilities included as driver assistance might include anti-lock brakes, stability control, cruise control, adaptive cruise control, and automatic emergency braking. The driver always remains in the steering loop, exerting at least some form of sustained control over lane keeping and turns to ensure active engagement and situational awareness. 

Momentary intervention in steering by active safety and driver support functions, such as a steering wheel bump at lane boundaries, is considered driver support rather than steering automation. Active safety might momentarily intervene in steering in response to a specific situation, but should not permit itself to be used in lieu of continuous driver control of steering. Completely automated speed control is permitted (e.g., adaptive cruise control).
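As a sketch of this distinction, a classification check might look like the following. The function name and the 2-second cutoff are invented purely for illustration and are not drawn from any standard:

```python
def is_driver_assistance_steering(intervention_seconds: float,
                                  sustained_lane_keeping: bool) -> bool:
    """Classify a steering feature under the Driver Assistance mode.

    Momentary interventions (e.g., a steering wheel bump at a lane
    boundary) count as driver support; sustained automated lane keeping
    does not. The threshold below is a hypothetical placeholder.
    """
    MOMENTARY_LIMIT_S = 2.0  # hypothetical cutoff for "momentary"
    return (not sustained_lane_keeping
            and intervention_seconds <= MOMENTARY_LIMIT_S)
```

Under this sketch, a half-second lane-boundary bump qualifies as driver support, while any sustained lane-keeping feature does not, regardless of duration.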

Supervised Automation

Woman hands off the steering wheel. Eyes on the road monitoring the vehicle

Supervised Automation: The vehicle controls speed and lane keeping. A human driver handles things the system is not designed to address.
  • Human Role: Licensed driver keeps eyes on road, monitors for and intervenes in situations vehicle is not designed to handle, executes turns and other tasks beyond ordinary lane-keeping.
  • Vehicle Role: Provides steady cruise functions of lane-keeping and speed control.
The technology normally provides a speed and lane-keeping "cruise" capability when the feature is activated. A licensed human driver is responsible for continuously monitoring driving and for intervening in any situation outside the stated design capabilities of the system. The design capabilities exclude turning at intersections and other scenarios beyond traversing the current roadway. The automation might not be capable of handling situations outside its stated capability, which the driver is aware of and accounts for in supervision. The driver is able to take over full control whenever appropriate.

An effective driver monitoring system is required to ensure the driver remains situationally aware and is capable of taking over when required for safety. This does not have to mean hands on the wheel. Keeping hands on the wheel might be required for testing, and might be required in vehicles that do not have camera-based driver monitoring systems to ensure driver engagement. But the requirement for Supervised Automation is simply that the driver must be able to respond when needed, and it is up to the feature developer to determine how to accomplish that in an effective manner. In practice with current technology this is likely to mean a camera-based Driver Monitoring System (DMS).

Supervised automation should make it reasonable to expect a civilian driver without specialized training to achieve at least as good a safety record as would be achieved without steering automation, given comparable vehicle capabilities and operational conditions. This means that any such vehicle that is not as safe as a human driver (considering not only crashes, but also violating traffic laws or exhibiting reckless driving at an elevated rate) should be considered to have a defective design. The scope of design relevant to safety is not only the car, but also the human/driver interface.

As a practical matter, this limits use to highway and straight road-following cruise-control style applications where the vehicle does both lane keeping and speed/separation control. If the vehicle can make turns at intersections, with current technology it is beyond what is reasonably safe for civilian driver supervision, and instead is likely to be a road test vehicle. (This paragraph might be considered controversial. However it is the author's best estimate of what is feasible for safe road use by the full demographic of drivers on public roads, assuming an effective DMS can be deployed.)

Autonomous Operation

Man reading with eyes off the road as the vehicle performs driving tasks. No human behind the wheel.

Autonomous Operation: The whole vehicle is completely capable of operation with no human monitoring.
  • Human Role: No Human Driver; steering wheel optional depending on operational concept
  • Vehicle Role: Responsible for all aspects of driving and driving-related safety.
The vehicle can complete an entire driving mission under normal circumstances without human supervision. If the operational design domain (ODD) is restricted, the vehicle is responsible for safely handling any exit from the ODD that might occur. 

If something goes wrong, the vehicle is entirely responsible for alerting humans that it needs assistance, and for operating safely until that assistance is available. Things that might go wrong include not only encountering unforeseen situations and technology failures, but also flat tires, a battery fire, being hit by another vehicle, or all of these things at once. People in the vehicle, if there are any, might not be licensed drivers, and might not be capable of assuming the role of “captain of the ship.”

Examples of Autonomous vehicles might include uncrewed robo-taxis, driverless last mile delivery vehicles, and heavy trucks in which the driver is permitted to be asleep. A vehicle that received remote assistance would still be exhibiting Autonomous Operation if (a) the vehicle requests assistance whenever needed without any person being responsible for noticing there is a problem, and (b) the vehicle retains responsibility for safety even with assistance.  In some cases autonomous operation might change mode to remotely supervised operation if a remote operator becomes responsible for safety.

Achieving safety will depend on the autonomous vehicle being able to handle everything that comes its way, for example according to the UL 4600 safety standard with additional conformance to ISO 26262 and ISO 21448.

Vehicle Testing


Vehicle Testing: A trained safety driver supervises the operation of an automation testing platform.
  • Human Role: Trained safety driver mitigates dangerous behaviors, and at times might perform driving.
  • Vehicle Role: Automation being tested is expected to exhibit dangerous behaviors.
The vehicle is a test bed for vehicle automation features. Because it is immature technology, the driver must have specialized training and operating procedures to ensure public safety, for example according to the SAE J3018 road testing operator safety standard in accordance with a suitable Safety Management System (SMS).

Any vehicle that might exhibit dangerous behavior beyond the mitigation capability of an ordinary licensed driver (across the full driver demographic span), or that requires special qualification and care due to potentially dangerous behavior, is an automation test platform. Anyone operating such a test platform is performing Vehicle Testing. (Alternately, such a platform is a defective Supervised Automation platform which should not be operating on public roads.) 

Driver Liability:

An advantage of this classification approach is that it provides a straightforward way to address driver liability.
  • Driver Assistance: As with conventional vehicles.
  • Supervised Automation: Absent vehicle defects, the driver is responsible for safe operation. A vehicle defect is implicated when the automation does not perform as described to the driver, including both incorrect responses to scenarios said to be handled automatically and failure to respond to a situation the driver has been told is covered automatically. As an example, a vehicle that suddenly swerves into oncoming traffic while performing lane keeping is likely to have defective automation in the absence of other overriding considerations.
  • Autonomous Operation: The vehicle automation is responsible for safety.
  • Vehicle Testing: The organization performing testing is responsible for safety in accordance with a Safety Management System that includes driver qualification, driver training, and testing protocols.
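To make the mapping concrete, the liability assignments above can be sketched as a simple lookup table. The mode names and party labels below are illustrative labels for this post's four categories, not terms from any standard:

```python
from enum import Enum

class Mode(Enum):
    DRIVER_ASSISTANCE = "Driver Assistance"
    SUPERVISED_AUTOMATION = "Supervised Automation"
    AUTONOMOUS_OPERATION = "Autonomous Operation"
    VEHICLE_TESTING = "Vehicle Testing"

# Who is presumptively responsible for safe operation in each mode,
# absent a vehicle defect (per the classification above).
RESPONSIBLE_PARTY = {
    Mode.DRIVER_ASSISTANCE: "human driver",
    Mode.SUPERVISED_AUTOMATION: "human driver",
    Mode.AUTONOMOUS_OPERATION: "vehicle automation",
    Mode.VEHICLE_TESTING: "testing organization",
}

def responsible_party(mode: Mode) -> str:
    """Look up the party presumptively responsible in a given mode."""
    return RESPONSIBLE_PARTY[mode]
```

The point of such a flat mapping is that each mode assigns responsibility to exactly one party by default, with defects and testing protocols handled as exceptions on top of it.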

Other Considerations:

A single vehicle can operate in multiple modes during a single trip. For example a single trip can start in Driver Assistance mode on local roads, switch to Supervised Automation on a limited access highway, and then switch to Autonomous Operation on a designated portion of roads (federal highway, urban downtown, parking garage) as is compatible with its design restrictions.

All modes must have provisions for mitigating risk from foreseeable misuse and abuse. That includes ensuring operation of modes within their intended restrictions (e.g., enforcing the J3016 concept of an Operational Design Domain (ODD)).

Mode changes must be done safely. The principle should be that a human driver can take control in a situation for which that can be safely done, but a human driver can never be forced to assume control involuntarily. This implies, for example, that in Autonomous Operation the vehicle must safely stop in a reasonable location if it is unable to continue a mission without demanding human driver takeover. (A human driver, if present, might elect to assume control, but takeover cannot be required to ensure safety.)

Automation must make a best effort to ensure the highest level of safety it is capable of even without human intervention, but nonetheless is not responsible beyond best effort for dealing with aspects of vehicle operation and control beyond its currently active mode. The one exception is Vehicle Testing mode, which because it involves immature technology cannot be counted on to provide any automation function beyond a high integrity mechanism for the human test driver to assert vehicle control.

Mode confusion is a critical issue in system safety. There must be an effective scheme for ensuring that any driver is aware of the current vehicle mode. Mode changes should not be permitted without unambiguous determination that any human driver involved has shifted their mental model to match the actual vehicle mode in effect after the transition, and is capable of fulfilling the expected human driver role for that mode.
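A minimal sketch of such a mode-change guard, combining the two principles above (the driver's mental model must match the new mode, and a driver can never be forced into more responsibility), might look like the following. The responsibility ranking and function are invented for illustration:

```python
from enum import Enum

class Mode(Enum):
    DRIVER_ASSISTANCE = "Driver Assistance"
    SUPERVISED_AUTOMATION = "Supervised Automation"
    AUTONOMOUS_OPERATION = "Autonomous Operation"

# Higher number = more responsibility resting on the human driver.
DRIVER_RESPONSIBILITY = {
    Mode.DRIVER_ASSISTANCE: 2,
    Mode.SUPERVISED_AUTOMATION: 1,
    Mode.AUTONOMOUS_OPERATION: 0,
}

def can_transition(current: Mode, requested: Mode,
                   driver_acknowledged: bool,
                   driver_capable: bool) -> bool:
    """Allow a mode change only when it is safe per the principles above.

    - The driver must unambiguously acknowledge the change, so their
      mental model matches the mode in effect after the transition.
    - A change that places MORE responsibility on the driver also
      requires that the driver be capable of fulfilling that role;
      takeover can never be forced.
    """
    if current == requested:
        return True
    if not driver_acknowledged:
        return False  # mental model would not match the new mode
    if DRIVER_RESPONSIBILITY[requested] > DRIVER_RESPONSIBILITY[current]:
        return driver_capable  # driver is taking on more responsibility
    return True
```

For example, a transition from Autonomous Operation to Supervised Automation is refused if the driver has not acknowledged it or is not capable of supervising, which forces the vehicle back onto the safe-stop path described above rather than demanding takeover.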

For an earlier version of this approach with more detail relevant to regulators, see Section V of this paper:

Updated 1/31/2022

Sunday, January 30, 2022

Trust & Governance for Autonomous Vehicle Deployment / Keynote talk

Speaker: Professor Philip Koopman, Carnegie Mellon University

Video (33 minutes): YouTube
Slides: pdf format

For a related paper with a lot more detail, see:


Governance of who decides when and where to test and deploy autonomous vehicle (AV) technology--and on what basis--is a pressing problem. At the moment, an essentially opaque governance model is being run by the AV industry in the US. With few exceptions, companies do not follow industry safety standards, and are not required to. Public sentiment has soured to a degree as timelines have stretched and high profile crashes with injuries and deaths attributed to the technology mount. The industry frequently states that they cannot succeed without public trust, and indeed trust will be essential to riding through future adverse news cycles that will inevitably arrive.

Nonetheless, the industry persists in practices that erode trust, including safety opacity, promoting misleading public talking points, adversarial relations with regulators, some reckless road testing practices, and over-hyped promises of a road safety utopia being just beyond an ever-receding horizon. Hopefully the industry will recognize the cognitive dissonance of their current approach and shift to a collaborative governance model to build trust before the current situation catches up with them.

Speaker information:

Professor Philip Koopman is an internationally recognized expert on Autonomous Vehicle (AV) safety whose work in that area spans over 25 years. He is also actively involved with AV policy and standards as well as more general embedded system design and software quality. His pioneering research work includes software robustness testing and run time monitoring of autonomous systems to identify how they break and how to fix them. He has extensive experience in software safety and software quality across numerous transportation, industrial, and defense application domains including conventional automotive software and hardware systems. He was the principal technical contributor to the UL 4600 standard for autonomous system safety issued in 2020. He is a faculty member of the Carnegie Mellon University ECE department where he teaches software skills for mission-critical systems. In 2018 he was awarded the highly selective IEEE-SSIT Carl Barus Award for outstanding service in the public interest for his work in promoting automotive computer-based system safety.

Monday, January 17, 2022

Comments on PA SB-965 Regulating Autonomous Vehicles

This posting lists a number of major issues and concerns with the Pennsylvania legislation introduced in January 2022 to change how the Commonwealth regulates Highly Automated Vehicles (HAV). It deals with technical issues with the bill and especially aspects of safety. The short version is that this bill should NOT be passed without SIGNIFICANT changes.

PA SB 965 home page

  • News article: AV company's point of view 
    • Emphasizes: economic opportunity, jobs. Some talk about safety but no substance (just as there is no safety substance in the bill).
    • "incredibly collaborative" -- but collaborators limited to AV companies, bill sponsors, PennDOT -- not inclusive of Pittsburgh city government, safety advocates, consumer advocates stakeholders

Green text has been added or modified since initial post responsive to comments.
As of the afternoon of 1/26/2022 the bill has been amended in committee and sent to the PA Senate. This writeup has not yet been updated to reflect any substantive changes.


Overall, this bill suffers from a significant imbalance regarding the tradeoff between risks and benefits to Pennsylvania residents. Companies stand to benefit enormously by using public roads as a living laboratory to assist in developing automated vehicle technology. However, other road users are not afforded commensurate safety and compensation assurances for the risks they take from sharing the road with immature technology that presents a real and present danger to other road users. Especially of concern is the palpable risk to vulnerable road users (pedestrians, cyclists, etc.).

Additionally, much of the bill is drafted in a way that creates loopholes and exploitable ambiguities. Regardless of root cause and intent, the net effect is to weaken the bill dramatically, to the point that it amounts to not much more than a free pass for HAV companies to do whatever they want. (Even insurance requirements, such as they are, would be scant deterrent to a company chasing a trillion dollar market. Even a $5 million payout to an injured party is too easily characterized as a cost of doing business when development involves a multi-billion dollar war chest.)

This bill, if passed as-is, will dramatically weaken what safety protection Commonwealth residents have under the current HAV regulatory posture. Passing this bill as it is now will be actively harmful -- it is worse than doing nothing. 

The press conference made a big point that local HAV companies were consulted for the bill -- but not public interest stakeholders. This entire situation really brings into question whether those companies should be considered trustworthy in terms of sincere regard for the public interest vs. their own profit motives (see SSRN paper sections II, IV, and Conclusions).

There is no simple fix for this bill. It needs a major overhaul. 


Below are some (potentially aggressive) interpretations of this bill to point out how problematic it is in its current form:

  • This bill permits HAV testing with absolutely no oversight by PennDOT, no permit, and no "license test" of the HAV so long as an insurance minimum coverage of $1 per incident is met. 
  • This bill permits an unlicensed 12-year-old to act as a Level 3 fallback/safety driver in a heavy truck HAV transporting radioactive waste through local towns and urban centers, with no possibility for municipalities to prohibit that activity.
  • HAVs are not required to follow most traffic laws that apply to human drivers, and cannot be pulled over by anyone other than PA State Police for traffic violations. Except they don't have to pull over even for the PA State Police, because they are not required to yield to emergency vehicles, nor to stop when pulled over.
  • An HAV operator could hire judgment-proof remote vehicle operators outside the US who are immune to state law sanctions and incentives. Traffic ticket points would be meaningless, as would responsibility for criminal driving behavior such as driving under the influence. (For that matter, how would you give someone in a foreign country a breathalyzer test?)  Companies could just replace that driver after every infraction -- assuming that police can even identify who the remote operator is, which is not provided for in the bill.
  • A passenger in the back seat of a robotaxi might be found culpable for an injury or death caused by a crash. The mechanism would be designating the robotaxi as Level 3, with a click-through rider agreement (that is not read in practice) making passengers responsible for pressing an obscurely placed red panic button in event of a control malfunction. Such robotaxis could operate even if that passenger is a minor, impaired, or does not have a driver license.
It seems likely that at least some of the above is due to drafting errors and internal inconsistencies. For example, the $1 insurance limit might be intentional, or might just be worded incorrectly as a maximum instead of a minimum $5 million insurance requirement. One would hope that a responsible company would not act so egregiously -- but not all companies are responsible, and mere hope is not a responsible plan for public safety.

These issues illustrate the point that this bill doesn't just need a little clean-up of loose ends. It has major issues and should be entirely revisited.


Below are notes on issues found in a review of the bill language:

  1. Page 2, lines 7-15: This invokes SAE J3016 levels 3, 4, and 5 for coverage of the bill. This leaves open the "Level 2 Loophole" by which any company can attempt to claim they are simply putting a driver assistance feature on the road when they are really testing dangerously immature highly automated driving features. Tesla is already doing this with FSD (SSRN paper Section I.B). The bill instead should also apply to any testing of pre-series production features that control vehicle steering and require a test driver.  See: SSRN paper section V.
  2. Page 2, lines 20-21: does not put any requirements on what it means to be an "authorized affiliate." For example, Tesla FSD beta testing using owners who are not trained as testers could be considered "authorized" affiliates. Anyone operating a test vehicle should be qualified per SAE J3018 and/or have a special tester license issued by PennDOT.
  3. Page 3 line 13: requires annotating "highly automated vehicle" status on a title. While that is reasonable, in the context of this bill it seems that the only requirement to begin testing is to find a way to get this on a title. There is no permitting or approval process mentioned for anything except truck platoons. There should be a formal permitting process with follow-up monitoring of operational safety given the immature state of the technology.
  4. Page 3 line 29 – page 4 line 3: authorizes platoons of no more than 2 total vehicles with second vehicle having no driver. At the moment this is more stringent than the rest of the bill, but should be revised per other notes to take into account that if the nonlead vehicle loses track of the lead vehicle (for example, lead vehicle exits the roadway for some reason) the nonlead vehicle is now an HAV operating solo.
  5. Page 4 lines 4-11: authorizes platoon operations by filing and reviewing a plan with PennDOT. All HAV operations should file a plan, not just platoons. PennDOT should be required to review all plans at submission and periodically (not less frequently than one year), with ability to revoke licenses at any time for safety concerns.
  6. Page 4 line 16 – Page 7 line 8: requires HAV to stop at accident scenes and owner/registrant report to police with insurance information. (This is repeated in different variations). An additional requirement should be added to ensure that the identity of any remote person who might be held responsible for or have contributed to an accident is promptly identified and made available (e.g., via video conference) to police at the scene just as if they were physically present in the car.
  7. Page 7 lines 12-19: Removes the requirement for following regulations that apply to a (human) driver in the vehicle without imposing an obligation for equivalent means for the ADS to meet the intent of any relevant laws or regulations.
    • As a simple example, autonomous trucks carrying fuel, explosives, or radioactive materials would not be required to stop at railroad crossings per PA Code Title 75 Ch 33, 3342, because that requires the driver to stop, not the vehicle. This issue seems likely to be pervasive, and might result in HAVs not being required to follow traffic laws in instances they are phrased as driver actions rather than vehicle actions.
    • As another example HAVs would not be required to yield right of way to emergency vehicles, because that is a driver responsibility per PA Code Title 75 Ch 33, 3325.
    • The bill also exempts any requirement "not relevant for an ADS," which is a subjective determination that impairs certainty of interpretation. If an ADS thinks it can ignore a red traffic signal or stop sign without a collision because it has a 360 degree field of view from a roof-mounted lidar, one could say that obeying such traffic signals (or performing full and complete stops) are "not relevant" for that ADS.
    • The wording is confusing and ambiguous. If the ADS computer box is physically located in the driver seat as a matter of convenience, is that a "driver seated in the vehicle" since the ADS is considered the driver? 
    • An issue that needs to be resolved is on the one hand saying an ADS does not need to follow human driver rules (pg 7 lines 17-18), but then saying the ADS might be the driver (pg 11 lines 20-24). So is the ADS the driver just for liability? Does it actually need to follow parts of traffic laws that apply to human drivers even though the bill says human driver rules don't apply to an ADS? Needs to be resolved.
  8. Page 8 line 25: quotes J3016 definition of DDT. Given the history of J3016, “lateral vehicle motion” is not the same as “turning” but rather can be interpreted to mean lane-keeping. This ambiguity can be exploited as part of the Level 2 Loophole. If "turning" at intersections is meant to be included in the DDT and permit Level 2 systems to act in a way indistinguishable from Level 3 systems, that should be stated. It would be better to limit Level 2 systems to those that are not capable of making turns at intersections. (See SSRN paper section V.)
  9. Page 9 lines 15-18: incorrectly characterizes the Level 3 intervention wording. SAE Level 3 does not require notification by the ADS for "evident" failures. So any assumption that a Level 3 ADS will always notify the driver when to take over is false. Nor does it deal with the issue of how a remote teleoperator is supposed to detect "kinesthetically apparent" failures if not in the vehicle.  This definition should be changed to require driver notification by the ADS of all failures relevant to ability to safely drive the vehicle. See SAE J3016 Myth #6 here.
  10. Page 9 line 29 - Page 10 line 3.: This section gives permission for operation with no human driver on board, but does so in an overly broad manner. Given the wording, one might assume that the DDT Fallback operation is NOT considered a "driver" since DDT Fallback is, per J3016, distinct from performing the Dynamic Driving Task (DDT). 
  11. Page 9 line 29 - Page 10 line 3.: Part (1) requires "capable of operation" in compliance with regulations, but does not actually require it to operate in compliance with traffic laws, regulations, and relevant ordinances.
  12. Page 10 lines 11-15: requires a minimum risk condition (MRC) be achieved in case of an ADS failure. It does not require an MRC if a failure of non-ADS equipment renders the vehicle unsafe to drive, but should. It does not require an MRC if the vehicle is forced out of its ODD e.g., due to a sudden rain squall it is not designed to handle, or entering a construction zone it is not designed to handle. (This is related to the "evident failure" exclusion of Level 3 previously discussed.)
  13. Page 10 lines 11-15: does not require that the MRC be free of unreasonable risk. An MRC can do a panic stop in front of a heavy truck, or stop in the proverbial railroad crossing with an oncoming train. This should be changed to further require the MRC to be free of unreasonable risk to the maximum degree practicable given vehicle and environmental conditions.
  14. Page 10 lines 16-18: require a licensed driver. However, that licensed driver might be abroad in a foreign labor market (for example, in Central America). There are numerous issues raised by this that need to be dealt with such as what happens if such a driver behaves in a reckless or intentionally malicious manner with a vehicle on Commonwealth roads. How do you give such a driver a breathalyzer test (even a teleoperator who is within PA)? How do you arrest such a driver when called for? At the very least, PennDOT should be given broad latitude to restrict and license teleoperator drivers, as well as require some mechanism to assure that companies will be held responsible for the behavior of their remote drivers.
    • The bill includes the phrase "The highly automated vehicle driver on board must be properly licensed under this title" which might or might not be interpreted to be a PA state license, but it is unclear if this is the case. Without clarification, AV operators might find it easier to argue in support of foreign teleoperators to exploit this situation. 
    • For Levels 4 & 5 there is no requirement for a driver, nor a requirement that the vehicle be safe (per SAE J3016), so the requirement for a human driver to be licensed can be disclaimed simply by avoiding calling a vehicle a "test" vehicle.
    • For Level 3, a "highly automated vehicle driver" (page 2 lines 16-19) drives or is in physical control of a vehicle, which describes the DDT, but not necessarily the Fallback task required in a level 3 vehicle.
    • Since the bill does not distinguish test vehicles, the situation is further muddled.
  15. Page 10 line 24 – page 11 line 5: this exempts school buses, which is good. Placarded loads should also be excluded. Per SAE J3016 the ADS has no responsibility whatsoever to monitor vehicle condition or other aspects of vehicle safety (e.g., loose loads, cargo fires, tires on fire, lost wheels). Hazardous loads should not be carried without human supervision at this early stage of deploying technology.
  16. Page 11 lines 6-17: this authorizes transportation network services without safety drivers. Such services should require SAE Level 4 or 5, and exclude SAE Level 3. Operating a transportation network service at Level 3 creates significant risk of using passengers that are not able to ensure safety in practice as a "moral crumple zone" -- blaming them for failing to avoid a crash in situations for which it is unreasonable to expect them to do that. At the very least, minors should be prohibited from riding unescorted in a Level 3 transportation network vehicle.
  17. Page 11 lines 20-26: declares the ADS is the driver if there is no safety driver for the purpose of licensing drivers. This is an exceptionally bad idea.
    • Is PennDOT required to administer driver tests to an ADS? This needs to be clarified, especially since in practice there is likely to be a requirement to re-license after every software update, which might occur daily. A driver license test cannot ensure HAV safety. Rather, conformance to the industry standard safety requirements proposed by the NHTSA ANPRM on a Framework for Automated Driving System Safety should be required (including at least ISO 26262, ISO 21448, and ANSI/UL 4600) for any production ADS (one without a human safety driver monitoring its operation as a test platform).
    • Any unlicensed driver, including a minor, could load ADS software onto a cell phone and start driving themselves around the city with no need for a driver license and no HAV testing permit. This is not just a theoretical possibility. Such systems are for sale now, advertising compatibility with over 150 vehicle types starting at $1100, and could plausibly be claimed as SAE Level 3 systems by someone registering a vehicle in PA. (To be clear, these systems are likely to be unsafe if operated as Level 3 systems, but J3016 does not require safe operation or even driving competence for an ADS to be assigned a particular automation level. Such a Level 3 or even Level 4 claim could be made within the scope of SAE J3016 as invoked by the bill, either now or in the readily foreseeable future.)
    • It is unclear what effect this would have on insurance issues, but it might result in literally having no natural person and no company to pursue for recompense after a major injury or fatality to a vulnerable road user who does not carry automotive insurance (and indeed neither owns nor drives a car). I point out a concern here, and defer to legal experts on this matter.
  18. Page 11 lines 20-26: directs police to cite the HAV "owner or registrant".
    • Why should a vehicle owner who puts their currently-unused vehicle into a transportation network pool be held responsible for traffic violations committed by an ADS, when they likely have no understanding of how it has been programmed and certainly have no control over its driving? Either this clause assumes only large, sophisticated companies will own or register vehicles (which is likely to be false soon for Level 3), or this is a clause intended to transfer liability onto hapless vehicle owners who have no practical ability to control the actions of their HAV's software.
    • And what if the "registrant" is an anonymous series LLC in another state (or country) with no assets other than the car, while the citation is for a serious offense such as vehicular manslaughter? How are those whose actions, or perhaps even negligence, might have contributed to easily avoidable harm held accountable?
    • A different approach should be used: hold accountable for driving behavior a party who has an actual understanding of, and/or ability to control, the behavior and software quality of the HAV.
    • Again, I point out concerns, but defer to legal experts on this matter.
  19. Page 11 line 27 - page 12 line 3: If there is a remote safety driver, police citations are issued to that remote safety driver. Again, what if the safety driver is outside the US? What is to stop a company from employing semi-disposable safety drivers who are simply fired after an infraction, with another driver hired to replace them after each traffic ticket? (This is another potential manifestation of moral crumple zone abuse.) PennDOT should be given broad latitude here to, for example, create a point assessment system recorded against a large company designing and/or operating an AV, rather than solely against a safety driver, for illegal behavior.
  20. Page 12 lines 9-17: $5M insurance seems grossly inadequate. (I'm aware other states do this, but they have historically been dealing with testing with human safety drivers, not un-crewed HAVs.) A few issues that come to mind are listed here, and there are no doubt others.
    • The $5 million amount is a "not to exceed" limit. This implies it can be lower, and the bill prohibits requiring more insurance than that. As written, this requirement permits either state minimums or $1 of insurance, depending on whether this is considered to preempt other insurance wording and what other actions PennDOT might take.
    • There is no insurance requirement for the Fallback operator, who might be found at fault by police but not qualify as a "driver." Thus in a mishap there may be no insurance coverage to pursue.
    • There is no mechanism for vulnerable road users without their own automotive insurance policy to collect without initiating a lawsuit (e.g., for hospital co-pays for injuries they have suffered). Perhaps HAV policies should treat such crash victims as named insured parties.
    • The US DOT set the value of a statistical life at $11.6 million for 2020, with yearly increases. The insurance minimum should be at least this amount, and should be per person, not per incident as seems implied by the bill wording.
  21. Page 12 lines 18-25: this is incredibly broad preemption language. It should be entirely removed so that municipalities such as the City of Pittsburgh can adopt further ordinances to ensure safety responsive to local conditions. Some concerns include:
    • This could be interpreted to mean that HAVs do not have to obey local traffic rules.
    • This could be interpreted to mean that HAVs do not have to obey traffic directions from local police or school crossing guards.
    • This could be interpreted to mean that HAVs do not have to obey local police stops.
    • In general, this clause is an egregious example of autonomandering (see SSRN paper section IV).
  22. Page 12 lines 27-30: gives PennDOT permission to regulate or publish guidance “consistent with this title.” PennDOT should additionally have the ability to issue at least temporary rules stricter than this bill in the interest of public safety, and should be explicitly given permission to require operating permits. As it stands, since no permitting process is required, PennDOT would seem to have no authority to stop reckless testing conducted by a bad actor, and neither would cities, due to the preemption clause. PennDOT should be given authority to issue operating permits subject to quarterly review, along with authority to establish data reporting requirements so that it can monitor that HAV operation is free from unreasonable risk.
  23. In contrast to a statement made at the press conference, SAE J3016 is not a safety standard in any way, shape, or form. Rather, conforming to the letter of the standard for assigning SAE Levels (and no more) is guaranteed to result in vehicles that are unsafe in practice. (For example, J3016 does not require driver monitoring for safety drivers.) Instead, conformance to SAE J3018 for safe road testing should be required.
  24. There is no requirement anywhere for HAVs to be safe. There should be a requirement that the company self-certify they have a credible safety case that their vehicles will be at least as safe as an average unimpaired human driver in a comparable ODD, and that they will update the safety case and re-certify that statement to PennDOT quarterly.
  25. Page 7 line 28 - page 8 line 7: a "set of devices or components" that produces a rear view image is considered a mirror for Federal laws. For context, this refers at least to FHWA width requirements, which exclude mirrors from truck width. Big rig truck mirrors stick out at a height well above cars, and are not hugely threatening compared to, say, a wider truck, so as a practical matter this rule makes sense. Replacing side view mirrors with cameras is also sensible, but as drafted this provision seems to have issues:
    • It appears to override Federal regulations. Is it really PA's place to tell FHWA what does and does not count as a mirror? Why not kick this to PennDOT and get a ruling from FHWA as a matter of enforcement policy rather than law?
    • As a practical matter, FHWA does not (as far as I know) enforce truck equipment regulations. So this seems to have the effect of telling PA State Police not to enforce truck width requirements so long as the driver can claim that whatever is sticking out from the truck is "similar" in function to a mirror -- without restriction. One wonders if this is an attempt to normalize evading FHWA regulations for truck size requirements at the state-by-state level.
    • The apparent reason for a width rule is to set a boundary for switching to "wide load" procedures, which has to be set at some definite number -- 2.6 meters as it turns out. With this loophole, you could put a two-foot-wide armor-plated camera box sticking out the side of a truck down where it is at eye level to passenger vehicle drivers (wouldn't want expensive cameras to get damaged), effectively making trucks wider than they would otherwise be. Even if somehow it is OK to override FHWA rules, there should be a restriction that such a device be comparable in size, placement, and crash risk to the 3-D footprint of a mirror, or the like.
Some issues might be fixed easily, but others will likely require more pervasive changes. Be that as it may, a bill that has this much of an effect on public safety should be easy to understand for all stakeholders, and certainly should not be misrepresented by its sponsors (as was done, for example, by claiming SAE safety standards had been incorporated).

It seems likely that bill advocates will argue that deficiencies in the bill might be made up for by PennDOT guidance and policies. Maybe so -- and maybe not.  

It is also possible that other sections of the vehicle code interact with this bill in counter-intuitive ways. Still, since we've already heard clearly inaccurate statements about the bill, we need more transparency as to what the actual regulatory outcome will be, especially since the bill was developed without broad stakeholder input.

Regardless, it seems ill advised to put out a bill with known deficiencies in hopes that they might be made up for later. Why not get it right in the first place?