Opinion

What We Do at Streetscope

How can we ensure that cities and countries are not used as live test tracks, and prevent the weakest of them from receiving unsafe systems when they are deployed?

Mark Goodstein
September 9, 2024

It’s hard to imagine a smooth pathway from today, when everybody drives themselves, to a future where almost nobody drives. The transition will be chaotic in numerous ways. Today, the safety of competing systems is measured with trailing indicators of collision likelihood. A backward-looking metric, a count of collisions per million miles driven, is being used to assess automated driving technologies whose design and proof of safe operation are fiercely guarded secrets. How can we objectively compare the safety of partially automated vehicles using Super Cruise, BlueCruise, or Tesla FSD? Or fully automated vehicles using Waymo, Wayve, or Cruise software? Or Class 8 trucks using Aurora, Kodiak, or PlusAI software? Racking up the miles needed for a statistically significant assessment using trailing indicators is impractical, given the intense pressure to deploy. At worst, claiming safety based on less than one-thousandth of the miles required to support such a claim is disingenuous and objectively misleading.
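To put the mileage problem in perspective, here is a rough sketch using the statistical rule of three and approximate public U.S. baseline rates (the figures below are illustrative, not Streetscope measurements): demonstrating a better-than-human fatality rate with roughly 95% confidence would require on the order of 250 million fatality-free miles.

```python
# Back-of-the-envelope estimate of the mileage needed to support a safety claim
# from trailing indicators alone. Baseline rates are approximate, illustrative
# public U.S. figures, not Streetscope data.
import math

def event_free_miles_needed(baseline_rate_per_mile: float, confidence: float = 0.95) -> float:
    """Miles that must be driven with ZERO events to claim, at the given
    confidence, that the true event rate is no worse than the baseline
    (Poisson zero-event bound; at 95% this is the familiar 'rule of three')."""
    return -math.log(1.0 - confidence) / baseline_rate_per_mile

fatality_rate = 1.2 / 100_000_000        # ~1.2 fatalities per 100 million miles
reported_crash_rate = 1.9 / 1_000_000    # ~1.9 police-reported crashes per million miles

print(f"Fatality claim: ~{event_free_miles_needed(fatality_rate):,.0f} fatality-free miles")
print(f"Crash claim:    ~{event_free_miles_needed(reported_crash_rate):,.0f} crash-free miles")
```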

Given market pressures, then, how can we ensure that cities and countries are not used as live test tracks, and prevent the weakest of them from receiving unsafe systems when they are deployed? We believe the answer is to measure collision hazard directly and continuously, before any collision occurs.

Such an approach—using a leading indicator—would not require people to be hurt (or worse) to evaluate safety. It would provide third-party validation of a person’s or computer’s ability to sense and respond to traffic circumstances. It would give drivers, system developers, fleets, insurers, and regulators a common language to communicate objectively about—in a word, assess—traffic risk.

We at Streetscope have spent five years developing a leading measure of traffic hazard that, unlike trailing indicators, can be used to objectively and effectively assess the safe movement of vehicles in traffic: in short, our system does not rely on collisions, or on people being hurt, to determine and ultimately guide greater safety. During those five years, we have worked closely with companies that are market leaders in their respective industries to understand their specific needs and build what they require. Along the way, we have demonstrated that collision hazard can be measured continuously and economically at scale, and that the resulting data stream is a powerful safety tool.

Our patented measure, the Streetscope Collision Hazard Measure (SHM), is an ideal tool for managing the transition from human to automated driving. Streetscope’s software platform produces a high-frame-rate data stream of automatically scored and labeled real-world driving dynamics. While the insurance and telematics industries have fixated on harsh events as a leading indicator of collision likelihood, no such correlation exists. Without a highly correlated leading indicator like SHM, there is no way to support a principled transition from today’s anonymous driving behavior to observed and evaluated safe automated driving behavior.

Streetscope’s data stream is useful to telematics companies and insurers as a next-generation approach to reducing collisions in human-driven fleets. It is helpful to fleet managers desperately looking for actionable insights they can use to train drivers, route managers, and others in the safety loop, and to measurably reduce incidents.

It is useful to the system development community as an objective function in system validation efforts. For systems engineers, the people tasked with assuring the safety of their company’s products, this data stream can be used to validate required behaviors at the system level, rather than only at the subsystem level. Existing safety standards such as ISO 26262 and SOTIF (ISO 21448) state this requirement but do not specify how to meet it; Streetscope’s data stream enables validation at the system level.
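As a thought experiment, here is a minimal sketch of how a per-frame hazard stream could act as a system-level pass/fail check in validation. The HazardFrame fields, score scale, and thresholds are hypothetical stand-ins invented for illustration; they are not Streetscope’s actual data format, scoring, or API.

```python
# Hypothetical illustration only: field names, score scale, and thresholds are
# invented for this sketch and are NOT Streetscope's actual format or API.
from dataclasses import dataclass
from statistics import mean

@dataclass
class HazardFrame:
    timestamp_s: float    # frame time, in seconds
    hazard_score: float   # per-frame collision-hazard score (assume higher = riskier)

def passes_system_check(frames: list[HazardFrame],
                        max_peak: float = 0.8,
                        max_mean: float = 0.2) -> bool:
    """System-level check for one logged or simulated run: pass only if the
    hazard stream never exceeds a peak threshold and its average stays low."""
    scores = [f.hazard_score for f in frames]
    return max(scores) <= max_peak and mean(scores) <= max_mean

# Example: the same scenario driven by two candidate software builds; the build
# whose behavior spikes the hazard score fails the system-level check.
run_a = [HazardFrame(t * 0.1, 0.10) for t in range(100)]
run_b = [HazardFrame(t * 0.1, 0.90 if 40 < t < 45 else 0.10) for t in range(100)]
print(passes_system_check(run_a), passes_system_check(run_b))  # True False
```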

 

We should not compete on safety. We should partner on it. Many developers treat safety as a competitive domain; we would like to build a partner and user base that sees it the way we do and will participate in creating a capability the entire industry can use. Confidence is crucial. Objective evaluation is essential. The resulting assurance will accelerate the development and deployment of safe automation technologies.

 

Get in touch with us.
