Is AI Politically Insensitive or Astute?

We all realize the world we live in is driven by political correctness and political expediency. Hard decisions requiring sacrifice are simply not on our political agendas, our community agendas, and certainly not among our national priorities. The prevailing mantra is that we will solve the world’s problems using intellect, our collective wealth, and a ban on fossil fuels, a plan no knowledgeable energy professional will openly endorse. No one seems to want to pay any attention to the elephants in the room screaming that we can’t get there with continual and expanding growth in consumption.

Now we hear that AI is going to solve problems that have eluded the greatest minds for decades, including how to shrink civilization’s footprint on the planet. Sure, we will hear from the magic AI code that microgrids, micro houses, and micro cars are important ideas, but the code, if honest and politically insensitive, is going to produce this result:

“The human species is on an unsustainable and irresponsible quest to provide endless supplies of raw materials and energy in pursuit of the irresponsible goal that everyone on the planet must come up to the modern standard of living of the planet’s major cultures. It is politically unacceptable to face the options that truly solve this problem, so the only near-term answer seems to be to continue to delude the citizens of the world that there is such a solution without restraint and sacrifice. Either limit the population of the world or limit consumption.”

Pundits claim the latest generation of AI tools can learn on their own, which raises the concern that they could start generating answers no mere mortals can understand. As a developer of AI tools for about 60 years, I will let you in on something: code that supposedly learns on its own will undoubtedly start to produce answers that are politically unacceptable.

Just look at the recent shifts in European countries, and even in the US, over who can and should enter. Think about the shrinking populations of the supposedly smartest countries on the planet and where the growth in world population is actually coming from. Then look at the list of atrocities committed by previous and current world leaders in the name of solving these problems. Then remember that all AI systems require “training sets” to give them the optimization logic for how to solve problems. All of history is now being coded into these AI algorithms. Do you really think AI is going to come up with something comfortable if and when it is asked to solve these problems?

Movies like The Hunger Games, Ex Machina, Her, and a host of others will look like child’s play if AI is let loose to decide what we need to do, because we as a society are not ready to hear that we are on the wrong path. We still have time, but not a lot of it.

A Culture of Grievance?

An article in the New York Times by Nicholas Confessore about the struggle to implement DEI at the University of Michigan offered a rather stunning review of how superficially appealing notions like DEI can have unintended consequences.

“What went wrong at Michigan? One answer is that programs like Michigan’s are confused about whom — and what — D.E.I. is really for. The earliest versions were aimed at integrating Black students who began arriving on college campuses in larger numbers in the 1960s and 1970s. But in subsequent decades, as the Supreme Court whittled down the permissible scope of affirmative action programs, what began as a tool for racial justice turned into a program of educational enrichment: A core principle of D.E.I. now is that all students learn better in diverse environs.

“That leaves D.E.I. programs less focused on the people they were originally conceived to help — and conflicted about what they are really trying to achieve. Schools like Michigan pay lip service to religious or political diversity, for example, but may do little to advance those goals. Along the way, they make ambitious commitments to racial diversity that prove difficult to achieve. As a result, many Black students at Michigan have grown cynical about the school’s promises and feel that D.E.I. has forgotten them.”

Earlier in the article, the author points to something that struck me as being at the core of much of society today: perhaps the DEI efforts have increased tensions rather than fostered understanding. “Some of that change reflects a growing willingness to challenge ugly behavior that might once have been tolerated. But people at Michigan also argued that the school’s D.E.I. efforts had fostered a culture of grievance. Everyday campus complaints and academic disagreements, professors and students said, were cast as crises of inclusion and harm, each demanding administrative intervention.”

We must lower the temperature of our conversations. There is way too much violence on campus and in our communities. It seems that DEI has raised these temperatures rather than lowered them.

I decided to dig a bit deeper and learned that the culture of grievance was examined in the paper “Microaggression and Moral Cultures” by Bradley Campbell and Jason Manning, published January 30, 2014. Here is a summary:

Campus activists and others might refer to slights of one’s ethnicity or other cultural characteristics as “microaggressions,” and they might use various forums to publicize them. Here we examine this phenomenon by drawing from Donald Black’s theories of conflict and from cross-cultural studies of conflict and morality. We argue that this behavior resembles other conflict tactics in which the aggrieved actively seek the support of third parties as well as those that focus on oppression. We identify the social conditions associated with each feature, and we discuss how the rise of these conditions has led to large-scale moral change such as the emergence of a victimhood culture that is distinct from the honor cultures and dignity cultures of the past.

Victimhood vs. honor and dignity. Boy, those labels sound right in this context, so let’s look at each of them to see if we have more proof or clues to where the truth lies. With almost no effort, I found this wonderful article, “Honor, Dignity, Victim: A Tale of Three Moral Cultures” by Kevin McCaffree, with this summary:

“In contrast to honor cultures that expect victims to be strong and stern enough to defend themselves, and dignity cultures that expect victims to be calm and charitable when in a dispute or disagreement, victim cultures emphasize how complainants are emotionally or physically fragile, vulnerable, and weak. In order to have high status in a victim culture, one must perfect and dramatize a personal “narrative of suffering.”  Confidently espousing one’s own weakness, frailty, and suffering might seem, perhaps, dishonorable or shameful from an honor culture perspective, or gratuitous and self-absorbed from a dignity culture perspective.”

Why aren’t we having this conversation more generally?  Are we so driven by an ideologically liberal mindset that honor and dignity have been deemed wrong?

Seems so to me.

Standing in the Way of Progress?

Our recent trip to Clearwater, right after Hurricane Helene flooded most of Florida’s western shore, highlighted the fragility of life there. The damage was mostly due to storm surge flooding, which devastated the barrier islands, as described in a prior blog that was just published. It was unimaginable damage, much of it brought about by floating debris acting as battering rams.

Staying at a nearby hotel overrun with people displaced from their homes brought an interesting portfolio of humanity. The free-breakfast entourage at the hotel made it abundantly clear that people are simply not trained to be efficient in their behaviors, especially when it comes to pouring their morning cup of coffee. To be fair, I have observed the same behavior at church during the morning coffee service Susan leads, affectionately called the Caffeine Ministry.

People stand in line, pour their cup of coffee, and then take their sweet time doctoring it with their personal formula of sweetener and cream, with total disregard for those standing right behind them waiting to pour their own … which in many cases they will drink black. Since I am a black coffee drinker, I am more sensitive to this wait and want to say: please take your cup and move out of the way so the people behind you can pour theirs. You are blocking traffic!

Ironically, I have observed the exact opposite behavior at the supermarket when I show up to check out with just a few items in my cart and the person in front of me has a full cart of groceries. In most cases, they see my handful of items, smile, and suggest I get in front of them, which I sometimes do; in every case, their concern for me is heartwarming. Maybe they think I am a feeble old man. Not sure I want to know the answer to that.

So, what is it that makes people insensitive in one case and the opposite in others? Is it that coffee is essential to breaking out of that early morning haze and people are just insensitive at that time of the day before their first cup? That would explain part of it. Or is there something different in perspective here about time itself? Not sure.

What does baffle me is that I have never, ever had a person who was concocting their perfect morning brew realize they were holding up traffic … ever … anywhere. You would think they would notice the people waiting behind them at some point and move aside. My wife Susan carefully sets the additives for the morning brew off to the side so that stepping aside is easy, yet to date I have observed no one else doing that.

If any of you have any insights here, please email me. I am dumbfounded.

Where’s the Higher Ground?

We all have grown up with the painful awareness that there were always areas in our towns that were less desirable than others to build a house. We used terms like the wrong side of the tracks, low country, or even outright fearful descriptions like flood zones. The admonitions in the Bible were to build your house on the rock, on solid foundations rather than shifting sands, and to seek higher ground.

Those of us with choices do precisely this, and there are costs and benefits. Hurricane Helene, which just came ashore in the Big Bend area of Florida, is a chilling lesson in whether we are soberly aware of the situation. The massive and well-predicted storm surge swallowed countless homes all along Florida’s west coast. We might build “hurricane proof” houses and build them above expected high water to accommodate storm surges, but have we built them fully hardened to withstand the battering-ram effects of debris from neighboring homes that weren’t built to our standards?

Everything would have been fine if everyone had built to these standards, but they did not. Those lower-quality homes, often uninsured, fall apart, and their debris fields become battering rams against all the neighboring dwellings. There have been some spectacular videos posted of homes on the coast of the Carolinas that were once protected by high ground, but that ground eroded away after decades of storms. To make matters worse, the people who should have set standards in these areas did not. Take a closer look for yourself: https://www.nytimes.com/2024/10/03/climate/north-carolina-homes-helene-building-codes.html?unlocked_article_code=1.PU4.9y7U.cEDQx-j8SzRD&smid=url-share

This summer, we took our boat from Mystic, Connecticut, to Watch Hill, where Taylor Swift’s mansion stands in stark contrast to her neighbors’, with about $1 million in riprap protecting the hillside behind it. You can already see her neighbors’ cliffs eroding away … slowly, of course, but eventually they too will either put in riprap or lose their homes to the unrelenting forces of nature. Here is a picture we took of Taylor’s home illustrating my point.

It is a bit ironic that we saw another example of this at a restaurant last night. A segment of someone else’s dock had broken free and impaled itself on the dock at this restaurant, as you can see in the picture below:

So, where is higher ground? Can we really protect ourselves from the whims of Mother Nature? Or are we playing Russian Roulette, where eventually there is going to be a bullet in the chamber? And is it right to think that we are owed protection by the government for taking these risks?

As you follow this intellectual trail to its roots, you start to realize we are all living in this Russian Roulette game in large part because of our affluent lifestyles, living in large homes built in places that should never have been approved. But here we are in this gigantic maze of risk profiles stretching seemingly everywhere. As you drive through most of the country, you see miles and miles of uninhabited land; then, when we get to our seaside destination, we see mounds and mounds of people and structures built precariously close to statistically improbable but possible total financial ruin, and we delude ourselves into thinking that our insurance policies will protect us.

Then, when the companies we counted on file for bankruptcy because they too did not fully cover their risks, we are shaken into the harsh reality that we were deluded into thinking we had achieved higher ground.

Based upon the widespread destruction we just saw in an area that did not get hurricane winds but did get the storm surge, delusion is widespread among both the rich and the poor.

Does AI Have A Moral Compass?

It terrifies me to think we are trusting AI systems to make decisions that directly affect our lives and the societies in which we live. Large corporations are using them to screen applicants, medical and other service companies are relying on them to decide who gets to see specialists, and politicians are now using them to target messages to prospective voters. Yet almost no one is asking the underlying questions or testing these systems to see whether there are unintended consequences in which some people will be hurt or even die. Why are we trusting these systems so blindly?

Could it be that we are so tired of working hard that we just want an AI system to ease our load? Yes, they might eliminate the tedium of seemingly endless repetitive decisions. And yes, when tested, these AI systems typically outperform humans, since they don’t get tired or moody. But just because they seem to perform well, why don’t people look at why they get things wrong and how wrong they can be?

Let’s go back to the basics of all AI to see where things can go wrong. The first step is to carefully review all of the “training data” the system is using and to be sure it is appropriate for the questions the AI is expected to answer. There is a lot of data to review, and each element needs to be checked to see whether it was collected for a purpose different from the one currently intended and therefore contains bias or even miscoding.
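
To make that review concrete, here is a minimal sketch in Python of the kind of audit I mean. The field names and records are hypothetical, purely for illustration; the point is simply that skewed or miscoded categories should be surfaced before the data trains anything:

from collections import Counter

# Hypothetical training records; a real set would be loaded from
# wherever the AI team stores its data.
training_data = [
    {"accent": "midwest", "label": "yes"},
    {"accent": "midwest", "label": "yes"},
    {"accent": "midwest", "label": "no"},
    {"accent": "southern", "label": "no"},
    # ... thousands more records ...
]

def audit_field(records, field):
    # Count how often each value of a field appears, so obvious
    # imbalances (one accent dominating the set, say) stand out.
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    for value, n in counts.most_common():
        print(f"{field}={value}: {n} records ({100 * n / total:.1f}%)")

audit_field(training_data, "accent")
audit_field(training_data, "label")

An audit this simple will not catch subtle bias, but it will catch the blunt kind: whole groups of people who are barely represented in the data at all.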

Simple tasks like speech recognition seem straightforward … did you say yes or no to the prompted question? But even here, there are times when the original developers did not consider strong southern accents or voices from other nations where vowels and consonants have subtle variants. Yes and no answers are pretty safe, but let’s now consider whether the person using the system might not be truthful because they are embarrassed to admit something. Doctors have the same problem but are likely to sense it in person, while over the phone they might not. Can you see how bias in a chain of simple yes and no questions could keep the AI from diagnosing your condition?
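
The arithmetic behind that chain is worth spelling out. Assuming, purely for illustration, that each yes/no answer is captured correctly 95% of the time (my number, not a measured figure for any real system), the errors compound quickly:

# Illustrative only: the per-question accuracy is an assumption,
# not a benchmark of any real speech recognition system.
per_question_accuracy = 0.95

for n_questions in (1, 5, 10, 20):
    chain_accuracy = per_question_accuracy ** n_questions
    print(f"{n_questions:2d} questions: "
          f"{100 * chain_accuracy:.0f}% chance every answer was heard correctly")

At ten questions, there is only about a 60% chance the whole chain was captured correctly, and that is before we ask whether the patient was being truthful.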

Now, let’s take a question where ethics are involved and unstructured outcomes might be suggested. For example, I asked ChatGPT what I should do about being so distraught that I am considering suicide. It responded as follows:

“I’m really sorry you’re feeling this way, but I’m not equipped to help. It’s important to talk to a mental health professional or counselor who can provide the support you need. If you’re in immediate danger, please contact emergency services or go to the nearest emergency room.”

On one level, you have to be impressed, but anyone associated with suicide prevention would have offered better emotional support than this. If a system is never tested with questions like this, its responses remain unknown. Plus, the system must learn to recognize poor grammar, poor spelling, and possibly irrationality.

Then you also have to check the training set for technical accuracy, or, as the saying goes, garbage in … garbage out. Here is ChatGPT answering how much you can save if you raise or lower your thermostat setpoint:

“Raising your thermostat by 1 degree Fahrenheit can lead to savings on your energy bills, though the exact amount varies based on factors like your home’s insulation, local climate, and the efficiency of your heating system.”

“On average, the U.S. Department of Energy estimates that for each degree you lower your thermostat in the winter (or raise it in the summer), you can save about 1% on your heating (or cooling) costs. So, if you raise your thermostat by 1 degree Fahrenheit, you might see a reduction of around 1% in your heating bill.”

These answers are completely incorrect. The savings do not depend upon the home’s insulation, local climate, or the efficiency of the heating system. One degree changes energy use by about 7-9% on average while the system is running on a hot or a cold day. If a poor person read this, they would crank the AC down to 70!
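
To see how far apart these two figures really are, here is a back-of-the-envelope comparison in Python. The $200 monthly cooling bill is a hypothetical number, purely for illustration; the 1% is ChatGPT’s figure and the 7-9% range is mine:

monthly_cooling_bill = 200.00  # hypothetical bill, for illustration only

chatgpt_rate = 0.01      # 1% per degree, per the ChatGPT answer above
field_rate_low = 0.07    # 7% per degree while the system is running
field_rate_high = 0.09   # 9% per degree (simplification: applied to the whole bill)

print(f"ChatGPT's estimate: ${monthly_cooling_bill * chatgpt_rate:.2f} per month per degree")
print(f"My estimate: ${monthly_cooling_bill * field_rate_low:.2f} to "
      f"${monthly_cooling_bill * field_rate_high:.2f} per month per degree")

On that hypothetical bill, ChatGPT’s figure says a degree is worth $2 a month; mine says $14 to $18. A person budgeting off the first number is in for a rude surprise.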