FMCSA sees what’s not there, ignores what is

When the Federal Motor Carrier Safety Administration announced its own research to counter a highly critical Government Accountability Office report on the Safety Measurement System’s ability to target high-risk carriers for safety intervention, its press release offered a fairly straightforward claim: “A new study confirms that the Federal Motor Carrier Safety Administration’s Safety Measurement System (SMS) is more effective at identifying commercial bus and truck companies of all sizes for targeted enforcement than the system it replaced.”

FMCSA’s spin on the report prepared for it by the Dept. of Transportation’s Volpe National Transportation Systems Center is bogus for a couple of reasons. First: So what? Does anyone really think the question is whether SMS is better than SafeStat, the data approach FMCSA used from the 1990s until December 2010?

In its report, GAO questioned the utility of SMS even as an internal enforcement targeting tool, let alone as a basis for a carrier fitness determination or for the public to make their own judgments. To suggest, more than three years into the Compliance, Safety, Accountability (CSA) program, that it’s constructive to compare SMS and SafeStat implies that FMCSA simply isn’t listening to the criticism.

For argument’s sake, however, let’s grant that it matters whether SMS is better than SafeStat. Here’s the problem: The Volpe report doesn’t make that comparison. Not at all. The report contains only a couple of passing references to SafeStat, and Volpe’s analysis isn’t comparative in any way. It simply took data collected during the SafeStat period, ran it through the SMS algorithms and looked at the crash performance of carriers that would have been identified as high-risk had SMS been in place.
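
To make that retrospective design concrete, here is a minimal sketch in Python of how such an analysis could work. Everything in it is hypothetical: the carrier fields, the scoring rule and the threshold are stand-ins for illustration, not FMCSA’s actual SMS methodology.

```python
# Minimal sketch of a retrospective evaluation like Volpe's: score
# historical (SafeStat-era) carrier data with an SMS-style rule, then
# compare crash rates of flagged vs. unflagged carriers afterward.
# All names and the scoring rule are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Carrier:
    violations_per_inspection: float  # SafeStat-era inspection record
    later_crashes: int                # crashes in the follow-up window
    power_units: int                  # fleet size, for rate normalization

def sms_style_score(c: Carrier) -> float:
    """Stand-in for an SMS-style severity-weighted score."""
    return c.violations_per_inspection  # placeholder scoring rule

def crash_rate(carriers: list[Carrier]) -> float:
    crashes = sum(c.later_crashes for c in carriers)
    units = sum(c.power_units for c in carriers)
    return crashes / units if units else 0.0

def evaluate(carriers: list[Carrier], threshold: float) -> tuple[float, float]:
    """Crash rates for carriers the rule would and would not flag."""
    flagged = [c for c in carriers if sms_style_score(c) >= threshold]
    others = [c for c in carriers if sms_style_score(c) < threshold]
    return crash_rate(flagged), crash_rate(others)

# Toy data: flagged carriers show a higher subsequent crash rate.
fleet = [
    Carrier(2.5, 4, 10),
    Carrier(0.3, 1, 50),
    Carrier(1.8, 3, 12),
    Carrier(0.1, 0, 40),
]
high, low = evaluate(fleet, threshold=1.0)
print(f"flagged crash rate: {high:.3f}, unflagged: {low:.3f}")
```

Note what such a design evaluates: SMS against history, not SMS against SafeStat’s actual picks.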

The result of Volpe’s calculations was a seemingly impressive connection between high-risk status and crash risk. ATA and others – including researchers contracted to conduct peer reviews of the Volpe study – have pointed to several flaws in Volpe’s approach, but even if you assume its central finding is valid, that says nothing about SafeStat’s performance. What was the crash performance of carriers actually identified by SafeStat as high-risk? And how many of the carriers that SMS would have pegged as high-risk were in fact flagged by SafeStat? We have no idea.
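
A genuine comparison would look something more like the following sketch: compute the overlap between the two systems’ high-risk lists, then measure each group’s subsequent crash record. The carrier IDs and flags here are invented purely for illustration.

```python
# Hypothetical comparison the Volpe report never made: how much do the
# SafeStat and SMS high-risk lists overlap? Carrier IDs are invented.

safestat_flagged = {"C001", "C004", "C007"}  # carriers SafeStat pegged high-risk
sms_flagged = {"C001", "C002", "C007"}       # carriers SMS would have flagged

both = safestat_flagged & sms_flagged
sms_only = sms_flagged - safestat_flagged
safestat_only = safestat_flagged - sms_flagged

print(f"flagged by both systems:  {sorted(both)}")
print(f"flagged only by SMS:      {sorted(sms_only)}")
print(f"flagged only by SafeStat: {sorted(safestat_only)}")

# A real comparison would then track subsequent crash rates for each
# of these groups -- which the Volpe analysis did not do.
```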

To be fair to Volpe, the report itself never pretended to be a comparison of SMS and SafeStat. In fact, one of the peer reviewers questioned why SafeStat was even mentioned in the report. So the genesis of FMCSA’s spin on the report is a mystery; asked about it directly, FMCSA declined to respond. If you were to speculate, you might reasonably guess that the erroneous claim stemmed from the fact that Volpe used pre-SMS data to conduct its analysis. That data would have allowed Volpe to compare SMS and SafeStat had it chosen to do so, but it never did.

Still, FMCSA staff should have picked up from even a cursory reading of the Volpe report that it made no comparison of SMS and SafeStat. That raises another reasonable possibility: FMCSA was in such a hurry to publicize the research to counter GAO that it didn’t bother with minor details such as what the report actually said or whether its conclusions were valid.

One indication that FMCSA may have simply rushed the report out the door is its decision to include the peer reviews in the version it distributed. Certainly it could have issued the report without them, and for its own sake it probably should have. Peer review comments include:

  • The report’s conclusion “is over-reaching.”
  • The conclusions are “incredibly limited (one paragraph of the entire report).”
  • “I think the analyses themselves indicate that the program may be suboptimal.”
  • "...the nature of their analysis is limited and does not necessarily serve the purpose of determining the effectiveness" of SMS.

The peer reviewers also noted that the report didn’t consider the effectiveness of FMCSA interventions or what the relationship was between identification as high-risk and selection for intervention. Here, Volpe and FMCSA have some cover since SMS didn’t exist during the period analyzed. But this just raises an entirely different question: Why rely on pre-SMS data?

CSA has been in place for more than three years now, and on day one FMCSA had two years of SMS data at its disposal for targeting. Volpe’s analysis could easily have reviewed actual SMS performance rather than hypothetical performance. Why didn’t it? Volpe and FMCSA might answer the question differently, but clearly using actual SMS data would have held FMCSA accountable for interventions based on that data. Perhaps that’s a cynical conclusion, but it’s more than plausible.

The timing of FMCSA’s research is hardly a coincidence. The agency not only knew that the GAO report was coming, but – in keeping with normal GAO practice – it also had an opportunity to review and comment on the report before GAO published it. So FMCSA probably asked its DOT cousin to whip up a report to counter GAO. Clearly, FMCSA either didn’t take the time to analyze the report thoroughly or just decided to pretend that the report had reached a different conclusion.
