Automation in vehicles and trucking: Imperfections and solutions
Dr. Gill Pratt, CEO of the Toyota Research Institute (TRI), recently offered up some startling – indeed, quite worrying – concerns about the highway-safety implications of autonomous vehicle (AV) technology during the 2017 Consumer Electronics Show (CES) in Las Vegas.
While introducing Toyota’s new Concept-i – a “thinking car” for lack of a better term – Pratt also highlighted the limitations of the artificial intelligence (AI) powering such vehicles.
And those limitations carry some big ramifications when the talk turns to self-driving trucks.
However, the trucking industry might also be crucial to solving some of the “limitations” inherent within self-driving systems. [Make sure you read to the end of this post to find out how.]
“Historically, humans have shown nearly zero tolerance for injury or death caused by flaws in a machine,” he explained during a speech at CES.
“And yet we know that the AI systems on which our autonomous cars will depend are presently and unavoidably imperfect,” Pratt stressed. “So: how safe is safe enough? Society tolerates a lot of human error. We are, after all, ‘only human.’ But we expect machines to be much better. In the very near future, this question will need an answer.”
Pratt also put it another way: right now, the U.S. experiences some 35,000 traffic fatalities a year. Now, if a machine-driven car proved twice as safe as the human-driven variety, thus meaning only 17,500 lives would be lost in the U.S. every year to vehicle crashes, would society accept that “autonomous” trade-off?
“Rationally, perhaps the answer should be yes,” he said. “But emotionally, we at TRI don’t think it is likely that being ‘as safe as a human being’ will be acceptable.”
Still, vehicle manufacturers – heavy truck makers included – are charging ahead with self-driving technologies, with many expecting AVs to be on the road in practical numbers by the 2020s.
But what kinds of self-driving systems are we talking about here?
Pratt contends what we’ll be seeing is a Level 4 autonomous vehicle, NOT the fully self-driving, no-human-involved Level 5 type. In the case of Level 4 systems, the vehicle will only drive itself within what he calls “specific Operational Design Domains” – only at certain speeds, only at certain times of day, only when the weather is good, and so on.
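To make that idea concrete, here is a minimal sketch in Python of how an Operational Design Domain “gate” might work. The specific limits, field names, and weather categories below are purely illustrative assumptions for this post – not Toyota’s (or any manufacturer’s) actual design:

```python
from dataclasses import dataclass
from datetime import time

# Hypothetical example: every limit and name below is an illustrative
# assumption, not a real manufacturer's ODD definition.
@dataclass
class OperationalDesignDomain:
    max_speed_mph: float   # autonomy allowed only at or below this speed
    earliest: time         # autonomy allowed only during these hours
    latest: time
    allowed_weather: set   # e.g., {"clear", "cloudy"}

    def permits(self, speed_mph: float, now: time, weather: str) -> bool:
        """True only if every current condition falls inside the domain."""
        return (speed_mph <= self.max_speed_mph
                and self.earliest <= now <= self.latest
                and weather in self.allowed_weather)

highway_odd = OperationalDesignDomain(
    max_speed_mph=65,
    earliest=time(6, 0),
    latest=time(20, 0),
    allowed_weather={"clear", "cloudy"},
)

# Outside the domain, a Level 4 vehicle must hand back control or pull
# over safely rather than keep driving itself.
print(highway_odd.permits(55, time(14, 30), "clear"))  # True: inside the ODD
print(highway_odd.permits(55, time(22, 0), "rain"))    # False: outside the ODD
```

The point of the sketch is that a Level 4 system never claims to drive everywhere; it only claims to drive where every condition checks out.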
“It will take many years of machine learning and many more miles than anyone has logged of both simulated and real-world testing to achieve the perfection required for Level 5 autonomy,” Pratt pointed out.
Yet even getting to the point where Level 4 autonomous vehicles are “accepted” may be a challenge due to the hurdles presented by Level 2 and Level 3 systems – hurdles Toyota believes stem largely from how humans respond to vehicles that can operate themselves in only limited capacities.
Bob Carter, senior vice president of automotive operations for Toyota, added during his own CES presentation that “considerable research” shows that the longer a driver is disengaged from the task of driving, the longer it takes to re-orient them back to driving.
“There is evidence that some drivers may deliberately test even level 2 [semi-autonomous] system limits; essentially misusing a device in a way it was not intended to be used,” he emphasized.
“When someone over-trusts a Level 2 system, they may mentally disconnect their attention from the driving environment and wrongly assume the system is more capable than it is,” Carter warned. “We worry that over-trust may accumulate over many miles of ‘hands off’ driving. Human nature, not surprisingly, remains one of our biggest concerns.”
TRI’s Pratt noted that Level 2 autonomy is “perhaps the most controversial right now because it’s already here and functioning in some cars on public roads.”
He explained that, in a Level 2 scenario, a vehicle hand-off to a human driver may occur at any time with only a second or two of warning.
“This means the human driver must be able to react, mentally and physically at a moment’s notice,” he said.
“Even more challenging is the requirement for the Level 2 human driver to always supervise the operation of the autonomy, taking over control when the autonomy fails to see danger ahead,” Pratt added. “It’s sort of like tapping on the brake to disengage adaptive cruise control when we see debris in the road that the sensors do not detect. This can and will happen in Level 2 and we must never forget it.”
Then you get to Level 3, which to Pratt’s mind is a lot like Level 4 technology but with an autonomous mode that at times may need to “hand-off” control to a human driver – a driver who may not be paying attention, since the machine is doing all the driving.
“’Hand-off,’ of course, is the operative term and it’s a difficult challenge,” he explained. “In Level 3, as defined by SAE [the Society of Automotive Engineers], the autonomy [system] must ensure that if it needs to hand-off control of the car, it will give the driver sufficient warning. [The technology] must also ensure that it will always detect any condition requiring a hand-off.”
That warning is needed, Pratt added, because with Level 3 systems, the driver is not required to oversee the autonomy, and may instead fully engage in other tasks.
“The challenge lies in how long it takes a human driver to disengage from their texting or reading once this fallback intervention is requested and also whether the system can ensure that it will never miss a situation where a hand-off is required,” he stressed.
So here are the complications for such a situation:
- Research shows that the longer a driver is disengaged from the task of driving, the longer it takes for them to “re-orient” back to driving;
- At 65 miles per hour, a car travels around 100 feet every second;
- This means that in order to give a disengaged driver 15 seconds of warning at that speed, the system must spot trouble about 1,500 feet away – some five football fields’ worth of distance (the quick calculation sketched after this list checks the math);
- On top of that, regardless of speed, a lot can happen in 15 seconds – which makes guaranteeing at least 15 seconds of warning very difficult.
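For readers who want to verify those figures, here’s the arithmetic as a short Python snippet – nothing here beyond the 65 mph and 15-second numbers from the list above, plus standard unit conversions:

```python
# Quick check of the warning-distance arithmetic above.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

speed_mph = 65
warning_seconds = 15  # desired warning time for a disengaged driver

feet_per_second = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR
warning_distance_ft = feet_per_second * warning_seconds

print(f"{feet_per_second:.0f} ft/s")    # ~95 ft/s -- roughly 100 feet every second
print(f"{warning_distance_ft:.0f} ft")  # ~1,430 ft -- nearly five 300-ft football fields
```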
As a result, Pratt noted that moving to Level 3 AVs may be as difficult to accomplish as moving to Level 4-equipped vehicles.
But here’s where the trucking industry can play a role in solving this issue, which he said revolves around what psychologists call “Vigilance Decrement.”
In 1948, a fellow by the name of Norman Mackworth wrote a paper that examined the “breakdown of vigilance” during prolonged visual search. To illustrate this issue, he used a clock that had only a second hand – a hand that would occasionally and randomly jump ahead by two seconds.
Pratt said it turns out that even if you keep your eyes on the “Mackworth clock,” your performance at detecting two-second jumps will decrease in proportion to how long you do it.
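As a rough illustration of that effect – not Mackworth’s actual data – here is a toy Python simulation of the clock task, in which a watcher’s chance of catching a jump erodes with time on task. The jump probability and decay rate are invented placeholders chosen purely to show the shape of the result:

```python
import random

random.seed(42)

# Toy Mackworth-style vigilance task. The jump probability and decay rate
# below are invented placeholders, not figures from Mackworth's 1948 paper.
JUMP_PROBABILITY = 0.02   # chance per one-second tick that the hand jumps ahead
DECAY_PER_MINUTE = 0.01   # how fast detection probability erodes over the watch

def detection_probability(minutes_on_task: float) -> float:
    """Detection starts near-perfect and erodes the longer the watch lasts."""
    return max(0.5, 0.98 - DECAY_PER_MINUTE * minutes_on_task)

def simulate_watch(duration_minutes: int) -> None:
    """Report the hit rate for each quarter of the watch."""
    for quarter in range(4):
        start = quarter * duration_minutes // 4
        end = (quarter + 1) * duration_minutes // 4
        jumps = hits = 0
        for minute in range(start, end):
            for _ in range(60):  # one tick per second
                if random.random() < JUMP_PROBABILITY:
                    jumps += 1
                    if random.random() < detection_probability(minute):
                        hits += 1
        print(f"Minutes {start:3d}-{end:3d}: caught {hits}/{jumps} jumps")

# Later quarters of the watch show a lower hit rate -- vigilance decrement.
simulate_watch(120)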
It’s TRI’s belief that something similar occurs to human drivers when they are forced to “remain vigilant” for a possible “hand-off” of control from a Level 2-equipped autonomous car.
Yet TRI’s research also found that if drivers conduct mild secondary tasks (texting NOT included, by the by), it might actually help them maintain situational awareness and reduce “Vigilance Decrement.”
“For example, long-haul truck drivers have extremely good safety records, comparatively,” Pratt noted. How do they do it? He said that perhaps it’s because they employ mild secondary tasks that help keep them vigilant, such as: talking on citizens band (CB) two-way radios; continuously scanning the road ahead; and listening to the radio to stay alert and engaged during long drives.
“We’ve only begun our research to find out exactly how this all works,” Pratt added.
It’ll be interesting to see what else TRI discovers as it keeps working on AV technology.