In last month's column I summarized the rather unsettling results of a government-mandated review of FMCSA's SafeStat system by the Oak Ridge National Laboratory. Basically, ORNL determined that the SafeStat rankings are significantly compromised because state and local reporting of crash data is inconsistent, incomplete and not done in a timely manner.

The ORNL report made it clear that our safety measurement system is in dire need of repair. Since efficiency gains in the crash reporting process will be very difficult to achieve, I think SafeStat, as it's currently configured, will have a short shelf life.

This is indeed disheartening because SafeStat has contributed to advancements in truck safety, as evidenced by a 15% drop in the “recordable accident rate” since 1999, when the system was introduced. We've got to keep the momentum going.

I'd like to suggest an approach that uses two new safety models: a Roadside Performance Indicator (RPI) and a Crash Involvement Measure (CIM).

Rather than rely on crash data to predict high-risk carriers, RPI would incorporate MCMIS-based roadside inspection and driver violation information, as well as CDLIS driver-conviction data. Carriers would be assigned both “vehicle” and “driver” measures of safety performance, which together would determine RPI.

The vehicle measure would be based on brake, light and tire violations, for example.

The driver measure would be based on two types of data: MCMIS violations relating to HOS, drug/alcohol and traffic enforcement, and CDLIS-based convictions such as failure to obey traffic signals and speeding. This data would be combined to create a relative measure of driver safety behavior for each motor carrier.

Remember, the vehicle and driver measures would be combined to determine carrier RPI scores. Carriers would then be ranked according to these scores.
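As a sketch of how the vehicle and driver measures might roll up into a single RPI score: the weights, the equal 50/50 split, and the "violations per inspection" scaling below are illustrative assumptions of mine, not part of any FMCSA specification.

```python
# Hypothetical sketch: combine per-carrier vehicle and driver measures into an
# RPI score. Weights and input scaling are assumptions for illustration only.

def rpi_score(vehicle_violation_rate, driver_violation_rate,
              w_vehicle=0.5, w_driver=0.5):
    """Weighted combination of the vehicle and driver safety measures.

    Both inputs are assumed to be normalized violation rates (e.g.,
    violations per inspection), so a higher value means worse performance.
    """
    return w_vehicle * vehicle_violation_rate + w_driver * driver_violation_rate

# Two hypothetical carriers
carriers = {
    "Carrier X": rpi_score(0.8, 0.3),  # weak vehicle record, decent drivers
    "Carrier Y": rpi_score(0.2, 0.6),  # sound vehicles, riskier drivers
}

# Rank carriers worst-first by RPI, as the column proposes
ranking = sorted(carriers, key=carriers.get, reverse=True)
```

In practice the weights would need to be calibrated against crash outcomes rather than set by fiat, but the mechanics of the ranking are this simple.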

The second safety model, CIM, would rely exclusively on MCMIS crash data. Based on CIM scores, we could provide a normalized ranking of all carriers, e.g., crashes per power unit or crashes per million vehicle-miles traveled.
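The normalization itself is simple arithmetic. A minimal sketch, where the carrier figures are made-up examples:

```python
# Hypothetical sketch of CIM normalization: raw crash counts scaled by two
# common exposure measures. All figures below are invented for illustration.

def crashes_per_power_unit(crashes, power_units):
    """Crash rate normalized by fleet size."""
    return crashes / power_units

def crashes_per_million_vmt(crashes, vehicle_miles):
    """Crash rate normalized by exposure in vehicle-miles traveled."""
    return crashes / (vehicle_miles / 1_000_000)

# Example carrier: 4 reportable crashes, 50 power units,
# 2.5 million vehicle-miles traveled in the review period
rate_pu = crashes_per_power_unit(4, 50)
rate_vmt = crashes_per_million_vmt(4, 2_500_000)
```

Either denominator lets small and large fleets be ranked on the same scale; which one is fairer depends on how reliably mileage is reported.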

Finally, carriers would be assigned to one of four risk categories:

  • Category A: high RPI and high CIM;
  • Category B: high RPI and low CIM;
  • Category C: low RPI and high CIM;
  • Category D: low RPI and low CIM.
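The four-way assignment above is mechanical once thresholds for "high" are chosen. In the sketch below, treating "high" as at-or-above a cutoff (say, the fleet-wide median) is my assumption; the column does not specify where the line sits.

```python
# Hypothetical sketch: assign a carrier to risk category A-D from its RPI and
# CIM scores. The thresholds are assumptions (e.g., fleet-wide medians).

def risk_category(rpi, cim, rpi_threshold, cim_threshold):
    high_rpi = rpi >= rpi_threshold
    high_cim = cim >= cim_threshold
    if high_rpi and high_cim:
        return "A"  # high RPI, high CIM: top enforcement priority
    if high_rpi:
        return "B"  # high RPI, low CIM
    if high_cim:
        return "C"  # low RPI, high CIM
    return "D"      # low RPI, low CIM

category = risk_category(rpi=0.7, cim=1.6, rpi_threshold=0.5, cim_threshold=1.0)
# category == "A"
```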

In terms of enforcement efforts, carriers in Category A — those with the highest number of at-risk drivers and the highest crash rates — would take priority, followed by Category B, etc.

I think this model would work for two reasons. First, it would emphasize the most reliable data sources: MCMIS inspection data and CDLIS driver-conviction data. Research has clearly demonstrated a link between at-risk driver behavior and crash involvement. Second, the crash data would serve as a check that confirms the results.

I know that many industry professionals might view this approach as bordering on heresy. “How dare you relegate crash data to the status of a confirmation or validation mark? Past crash rates are the biggest predictor of future crashes!”

My response is this: Why wait for state and local governments to improve the quality and timeliness of crash reporting processes? It could be many years before that happens. Instead, let's build a new system now to keep the truck safety momentum alive.

Jim York is the manager of Zurich Service Corp.'s Risk Engineering Transportation Team, based in Schaumburg, IL.