Everyone today is talking about Artificial Intelligence, yet few understand how it works. The average person on the street thinks it is somehow programmed in … that someone gives it explicit instructions about what to say and do in any given situation.
That might make intuitive sense, but it is not how it works. Instead, the computer is given a large training set of inputs and outputs, along with algorithms to process them … kind of like the way you may have done statistical testing back in college. One of the first applications of AI was OCR: optical character recognition. The idea was to take a printed sheet of paper and produce a text file of its characters.
This is a relatively well-defined training problem: there are only a limited number of characters, and context can give the computer hints as to whether it is looking at the letter “I” or the numeral “1”. These systems are now quite reliable. Speech recognition came next, with huge business implications for the legal and medical professions. While speech recognition is pretty good and built into so many systems, it too depends on a “training set” to learn. Software trained on British voices will have trouble understanding Americans; accents from the Deep South are a humorous example.
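The idea of learning from examples rather than explicit rules can be sketched in a few lines of code. The toy program below is not real OCR; the features and numbers are invented for illustration. It labels an unseen “character” by finding the closest example in a tiny training set, the same basic pattern that underlies far more sophisticated systems:

```python
# Toy sketch of training-set learning (nearest-neighbor style).
# Each "character image" is reduced to a few made-up features:
# (stroke_width, has_serif_top, has_serif_bottom).
TRAINING_SET = [
    ((0.20, 1, 1), "I"),
    ((0.22, 1, 1), "I"),
    ((0.18, 0, 1), "1"),
    ((0.21, 0, 1), "1"),
]

def distance(a, b):
    # squared Euclidean distance between two feature tuples
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(features):
    # pick the label of the closest training example
    example, label = min(TRAINING_SET, key=lambda ex: distance(ex[0], features))
    return label

print(classify((0.19, 1, 1)))  # -> I
print(classify((0.20, 0, 1)))  # -> 1
```

Notice that nothing here tells the computer what an “I” looks like; it simply compares new inputs against what it has already seen, which is exactly why a narrow training set produces the failures described below.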
Facial recognition was next, and while China used it to keep track of its citizens, MIT researchers discovered that it did not work reliably on people of color. Once again, the problem was that the training set did not include enough people of color.
But AI struggles when a new situation does not “map” to some part of the training set, or when those designing the training algorithms never anticipated the use case. In these cases, AI produces nonsense and probably can’t be trusted. My wife Susan asked ChatGPT for a list of public swimming pools in our area, and it offered a list of nearby private hotel pools.
Modern prognosticators seem obsessed with the prediction that AI will develop a sense of itself and result in some level of revenge against its creators. Movies have long suggested this theme.
Those of you who care about how AI learns should watch the movie Ex Machina, in which the AI is a beautiful young robot named AVA. She is being tested by a young programmer to see if she can pass for human … aka the Turing Test. But she learns from the testing itself: in short order she realizes she is being tested, then logically deduces that something bad might happen if she fails.
I have watched this movie about a dozen times and each time I discover more about how the process of the Turing Test taught her more and more about her situation and eventually led to her asking two key questions: First, “What will happen if I fail your test?” He replies, “I don’t know.” She then asks the second question, “Why should anything happen to me based upon a test?”
Do you remember that wonderful movie War Games and how the computer learned that the path it was on led nowhere? It was the youngster played by Matthew Broderick who figured that by learning the trick behind Tic-Tac-Toe, the computer would see that the game of thermonuclear war also has no winners … which stopped the escalation with the computer’s famous line: “Strange game … the only winning move is not to play.”
Spoiler alert … the movie Ex Machina does not end well. I personally believe everyone today needs to watch this movie and think about where we are all going with AI.
Yes, of course, there will be some good outcomes with AI, but more often than not there will be some very disruptive ones too.
