The trucking industry is becoming increasingly dependent on advanced analytics—algorithms, really—to guide and even make decisions for people. And there are plenty of reasons why.
An algorithm is a set of mathematical rules or procedures, often incredibly complex, for solving a particular problem in a defined number of steps. When people talk about computer intelligence, they are really talking about different kinds of algorithms and sets of algorithms that can solve problems based on far more details than even the brainiest humans can make quick sense of without their aid.
In trucking, this means the ability to do many useful things, such as identifying the combination of conditions that signals a pending system failure on a truck before it actually takes place, or almost instantly sensing when a load has shifted, putting a truck and trailer in danger of a rollover if something isn’t done to prevent it. Who wouldn’t want to know that?
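To make that idea a bit more concrete, here is a minimal, purely illustrative sketch of how such a rule-based check might look in software. The sensor names and thresholds are hypothetical and not drawn from any actual fleet system; real predictive-maintenance algorithms weigh far more signals and far more history.

```python
# Hypothetical, simplified sketch: flag a pending failure when several
# sensor readings cross thresholds at once. Signal names and limits are
# illustrative only, not taken from any real fleet system.

def pending_failure(reading):
    """Return True if the combination of conditions suggests trouble ahead."""
    signals = [
        reading.get("coolant_temp_c", 0) > 110,       # running hot
        reading.get("oil_pressure_kpa", 999) < 140,   # pressure dropping
        reading.get("vibration_rms_g", 0) > 0.8,      # unusual vibration
    ]
    # Any single signal may just be noise; two or more together is the
    # kind of pattern worth alerting maintenance about.
    return sum(signals) >= 2

if __name__ == "__main__":
    sample = {"coolant_temp_c": 114, "oil_pressure_kpa": 120, "vibration_rms_g": 0.3}
    print("Alert maintenance:", pending_failure(sample))
```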
Recently, however, ethical questions have begun to arise concerning what data gets included in the analytical processes that create (or soon will create) certain computer-generated decisions. Some insurance companies now include factors such as credit scores and incomes in calculating rates. They do this because data has shown that people with good credit scores and higher incomes are better insurance risks. It is tough to argue with the facts, even tougher to ask insurance companies to ignore these predictive markers once they discover them.
But it is the development of automated vehicles that is really adding some new heat and urgency to the discussion of decision-making software systems and ethics.
In the U.K.’s International Business Times, for example, writer Alistair Charlton considers the issue of autonomous vehicles and ethics in a June 18, 2015, feature tellingly called, “You or the Pedestrian: Ethics of autonomous cars making emergency decisions to save lives.” “An autonomous car is likely able to react more quickly and stop in a shorter distance than a human driver; it is also likely to have a better understanding of road and traffic conditions at that precise moment, and it will not get tired or distracted,” he writes. “But what about its reasoning? Should it react to save its owner and passengers at all costs, or should it choose to hit a wall to save the life of a child in the road?”
Author Patrick Lin, associate professor of philosophy and director of the Ethics + Emerging Sciences Group at California Polytechnic State University, also asks lots of what-if questions to illustrate the ethical challenges autonomous vehicles pose in his article “The Ethics of Autonomous Cars” (The Atlantic, October 8, 2013): “… if an animal darts in front of our moving car, we need to decide whether it would be prudent to brake; if so, how hard to brake; whether to continue straight or swerve to the left or right; and so on. These decisions are influenced by [a number of conditions]….
“Human drivers may be forgiven for making an instinctive but nonetheless bad split-second decision, such as swerving into incoming traffic rather than the other way into a field. But programmers and designers of automated cars don’t have that luxury, since they do have the time to get it right and therefore bear more responsibility for bad outcomes.”
Now there’s something else to keep engineers awake at night.
Wendy Leavitt is Fleet Owner’s director of editorial development. She can be reached at [email protected].