KENNESAW, Ga. | May 14, 2024
Why Autonomous Cars Aren’t Yet Ethical For Wide Deployment
This month I will address an aspect of the ethics of artificial intelligence (AI) and analytics that I think many people don’t fully appreciate. Namely, the ethics of a given algorithm can vary based on the specific scope and context of the deployment being proposed. What is considered unethical within one scope and context might be perfectly fine in another. I’ll illustrate with an example and then provide steps you can take to make sure your AI deployments stay ethical.
Why Autonomous Cars Aren’t Yet Ethical For Wide Deployment
There are limited tests of fully autonomous, driverless cars happening around the world today. However, the cars are largely restricted to low-speed city streets where they can stop quickly if something unusual occurs. Of course, even these low-speed cars aren't without issues. For example, there are reports of autonomous cars becoming confused, stopping when they don't need to, and then causing traffic jams because they won't start moving again.
However, we don't yet see cars running in full autonomous mode on higher-speed roads or in complex traffic. This is in large part because so many more things can go wrong when a car is moving fast and isn't on a well-defined grid of streets. If an autonomous car encounters something it doesn't know how to handle while going 15 miles per hour, it can safely slam on the brakes. In heavy traffic at 65 miles per hour, however, slamming on the brakes can cause a massive accident. Thus, until we are confident that autonomous cars will handle virtually every scenario safely, including novel ones, it won't be ethical to unleash them at scale on the roadways.
Some Massive Vehicles Are Already Fully Autonomous – And Ethical!
If cars can’t ethically be fully autonomous today, certainly huge farm equipment with spinning blades and massive size can’t, right? Wrong! Manufacturers such as John Deere have fully autonomous farm equipment working in fields today. You can see one example in the picture below. This massive machine rolls through fields on its own and yet it is ethical. Why is that?
In this case, while the equipment is massive and dangerous, it is in a field all by itself and moving at a relatively low speed. There are no other vehicles to avoid and few obstacles. If the tractor sees something it isn't sure how to handle, it simply stops and alerts the farmer who owns it via an app. The farmer looks at the image and makes a decision: if what is in the picture is just a puddle reflecting clouds in an odd way, the equipment can be told to proceed. If the picture shows an injured cow, the equipment can be told to stop until the cow is attended to.
This autonomous vehicle is ethical to deploy since the equipment is in a contained environment, can safely stop quickly when confused, and has a human partner as backup to help handle unusual situations. The scope and context of the autonomous farm equipment is different enough from regular cars that the ethics calculations lead to a different conclusion.
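The stop-and-alert pattern described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any manufacturer's actual control logic; the names, the confidence threshold, and the `ask_farmer` callback are all assumptions made for the example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    PROCEED = auto()
    STOP = auto()


@dataclass
class Obstacle:
    image_id: str     # photo sent to the farmer's app
    confidence: float  # how sure the machine is that it can handle this itself


def handle_obstacle(obstacle: Obstacle, ask_farmer) -> Action:
    """Stop-and-alert: act autonomously only when confident; otherwise
    halt and defer the decision to the human partner."""
    if obstacle.confidence >= 0.95:  # assumed threshold
        return Action.PROCEED
    # Low confidence: the machine has already stopped safely. It sends the
    # image to the farmer, and the farmer's answer decides what happens next.
    return Action.PROCEED if ask_farmer(obstacle.image_id) else Action.STOP


# An odd-looking puddle the farmer waves through vs. an injured cow.
print(handle_obstacle(Obstacle("puddle.jpg", 0.40), lambda img: True))
print(handle_obstacle(Obstacle("cow.jpg", 0.40), lambda img: False))
```

The key design point is that the machine's default when uncertain is to stop, which is only safe because the deployment context (an empty field at low speed) makes stopping harmless.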
Putting The Scope And Context Concept Into Practice
There are a few key points to take away from this example. First, you can't simply label a specific type of AI algorithm or application as "ethical" or "unethical". You must also consider the specific scope and context of each proposed deployment and make a fresh assessment for every individual case.
Second, it is necessary to revisit past decisions regularly. As autonomous vehicle technology advances, for example, more types of autonomous vehicle deployments will move into the ethical zone. Similarly, in a corporate environment, it could be that updated governance and legal constraints move something from being unethical to ethical, or the other way around. A decision based on ethics is accurate for a point in time, not for all time.
Finally, it is necessary to research and consider all the risks and mitigations at play because a situation might not be what a first glance would suggest. For example, most people would assume autonomous heavy machinery to be a big risk if they haven’t thought through the detailed realities as outlined in the prior example.
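The three takeaways above amount to a simple record-keeping discipline: tie each ethics decision to a specific scope and context, and give it an expiration date. The sketch below is a hypothetical illustration of that discipline; the field names and the 180-day review cadence are assumptions, not a standard.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class DeploymentAssessment:
    """An ethics decision bound to one deployment's scope and context,
    valid only for a point in time."""
    application: str           # e.g. "autonomous tractor"
    scope: str                 # where and how it is allowed to operate
    context: str               # speed, traffic, human backup, etc.
    approved: bool
    assessed_on: date
    review_after: timedelta = timedelta(days=180)  # assumed cadence

    def needs_review(self, today: date) -> bool:
        # A past decision expires: technology, governance, and law change.
        return today >= self.assessed_on + self.review_after


farm = DeploymentAssessment(
    application="autonomous tractor",
    scope="single fenced field, low speed",
    context="no other vehicles; stops and alerts the farmer when unsure",
    approved=True,
    assessed_on=date(2024, 5, 14),
)
print(farm.needs_review(date(2024, 6, 1)))   # decision still current
print(farm.needs_review(date(2025, 1, 1)))   # time to reassess
```

Note that `approved` applies only to this scope and context; the same application in a different context (say, a public road) would be a separate assessment.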
All of this goes to reinforce that ensuring ethical deployments of AI and other analytical processes is a continuous and ongoing endeavor. You must consider each proposed deployment, at a moment in time, while accounting for all identifiable risks and benefits. This means that, as I’ve written before, you must be intentional and diligent about considering ethics every step of the way as you plan, build, and deploy any AI process.