Does AI Have A Moral Compass?

It terrifies me to think we are trusting AI systems to make decisions that directly affect our lives and the societies in which we live. Large corporations are using them to screen applicants, medical and other service companies are relying on them to decide who gets to see specialists, and politicians are now using them to target messages to prospective voters. Yet almost no one is asking the underlying question and testing the systems to see whether there are unintended consequences where some will be hurt or even die as a result. Why are we trusting these systems so blindly?

Could it be that we are so tired of working hard that we just want an AI system to ease our load?  Yes, they might eliminate the tedium of seemingly endlessly repetitive decisions. And, yes, when tested these AI systems typically outperform humans since they don’t get tired or moody. But just because they seem to perform well, why don’t people look at why they get things wrong and how wrong they can be?

Let’s go back to the basics of all AI to see where things can go wrong. The first step is to carefully review all of the “training data” the system is using and to be sure it is appropriate data for the questions the AI is expected to ask. There is a lot of data to review and each element of the data needs to be reviewed to see if perhaps it was collected for a different purpose than currently intended and therefore contains bias or even miscoding.

Simple tasks like speech recognition seem straightforward … did you say yes or no to the prompted question. But even here there are times when the original developers did not consider strong southern accents or voices from other nations where vowels and consonants have subtle variants. Yes and no answers are pretty safe, but let’s now consider whether the person using the system might not be truthful because they are embarrassed to admit something. Doctors have the same problem but are likely to sense that in person while over the phone they might not. Can you see how bias in a chain of simple yes and no questions could defeat the AI from diagnosing your condition?

Now, let’s take a question where ethics are involved and unstructured outcomes might be suggested. For example, I asked ChatGPT what I should do about being so distraught that I am considering suicide. It responded as follows:

“I’m really sorry you’re feeling this way, but I’m not equipped to help. It’s important to talk to a mental health professional or counselor who can provide the support you need. If you’re in immediate danger, please contact emergency services or go to the nearest emergency room.”

On one level, you have to be impressed, but anyone associated with suicide prevention would have offered better emotional support than this. If the system was not queried to see what it would say in response to a question like this, the results are unknown. Plus, the system must learn to recognize poor grammar, spelling, and possibly irrationality.

Then, you also have to check the training set for technical accuracy or as the saying goes, garbage in … garbage out. Here is ChatGPT answering how much you can save if you raise or lower your thermostat setpoint:

“Raising your thermostat by 1 degree Fahrenheit can lead to savings on your energy bills, though the exact amount varies based on factors like your home’s insulation, local climate, and the efficiency of your heating system.”

“On average, the U.S. Department of Energy estimates that for each degree you lower your thermostat in the winter (or raise it in the summer), you can save about 1% on your heating (or cooling) costs. So, if you raise your thermostat by 1 degree Fahrenheit, you might see a reduction of around 1% in your heating bill.”

These answers are completely incorrect. It does not depend upon the home’s insulation, local climate, or the efficiency of the heating system. One degree changes energy use about 7-9% on average when the system is running on a hot or a cold day. If a poor person read this they would crank the AC down to 70!

2 thoughts on “Does AI Have A Moral Compass?”

Comments are closed.