Opinion: A. Michael Noll on AI Hype

AI CRAZED HYPE

A. Michael Noll

Posted with permission of the author

AI (artificial intelligence) is gripping the media. The claim is made that computers can think and understand language. Predictions are being made that AI will replace human creativity in music, art, and literature. Others predict that the harm from AI will mean the end of humanity. Is all this just old-fashioned hype, motivated by entrepreneurs’ over-promotion?

It is important to realize that AI is not new – it goes way back to the 1950s and 1960s, when digital computers were relatively novel. The fear of computers taking over was a frequent theme of science fiction movies, such as the robot Gort in the 1951 movie The Day the Earth Stood Still. The computer in the 1956 movie Forbidden Planet was even murderous. The HAL computer in the 1968 movie 2001: A Space Odyssey by Stanley Kubrick and Arthur C. Clarke goes berserk and starts killing astronauts. So, if we are to believe science fiction, computers and AI can indeed threaten humanity.

In 2011, IBM’s Watson computer system competed against humans Ken Jennings and Brad Rutter on the TV game show Jeopardy! – and won. Watson utilizes AI, analyzing vast amounts of data and looking for patterns and answers to queries. It has been used in healthcare to assist physicians in making diagnoses.

AI is decades old, and so too is the use of computers to make and assist decisions. Today, digital computers are an essential part of our lives and society. The term “AI” is now used whenever a digital computer is involved – and this creates much hype over nothing. Decades ago, digital computers analyzed poems, music, and text, and created their own variations. Today, this is being done again, but on a much vaster scale – and the term AI is applied. But computers do not think – they do not understand the sentences they might create, nor the beauty of any art they might help create.

Humans seem to want to believe in a more powerful and knowing being – which, for some, is now the computer and AI.

In the 1960s, I gave a presentation at a professional meeting of engineers. I claimed that I had developed a computer program that could analyze an image to extract its “essential aesthetic ingredients.” I showed many complex equations and claimed that Picasso’s “Ma Jolie” had been analyzed to obtain its essential aesthetic ingredients, from which the computer then created the work “Gaussian-Quadratic.” The entire presentation was gibberish – nonsense – a spoof – but the audience did not laugh and believed it all. I guess today I would claim that AI analyzed the Picasso to extract its essential aesthetic ingredients – and it would all still be nonsense. Yet “Gaussian-Quadratic” is today a well-recognized piece of early algorithmic computer art – what now seems to be called “AI art.”

Cycles of crazed hype occur frequently over some supposedly new technology that is about to revolutionize our lives. Only a year or so ago, it was the “metaverse” that somehow included everything. Mark Zuckerberg even renamed Facebook “Meta” in honor of the hype. Now, the metaverse seems to have vanished into the universe of hyperbole.

Head-mounted stereoscopic 3D goggles go back about six decades, to when Hugh Upton at Bell Helicopter invented them for use by pilots to see what was below the helicopter. The idea of using half-silvered goggles for a computer-generated display superimposed on reality soon followed – what much later was named “virtual reality.” The technology for the displays and the computer processing has greatly improved, but the applications still seem nebulous. The goggles are now sometimes called “holo,” implying that some sort of holography is involved. But this is not holographic technology – yet another example of hype used to confuse and exaggerate.

Stereoscopic 3D movies have their own cycle, and it is about time for them to once again reappear. The audience will need to wear some sort of glasses: polarized or colored. There will be a craze of excitement, but ultimately the content – the story – will matter much more than the dimensionality of the picture on the screen.

So, how do we know what to believe? If it failed in the past, might success nevertheless be around the corner? John R. Pierce (the Bell Labs scientist who was the father of Telstar) liked to contrast the artificial intelligence of computers with the “natural stupidity” of humans. We should probably add the “real ignorance” of humans. We should keep calm – ask questions – learn from the past. In the end, my opinion is that AI is mostly a fake. Do not worry: computers and robots will not take over – we mostly need to fear other humans.

A. Michael Noll

September 27, 2023

Copyright © 2023 AMN

A. Michael Noll is currently Professor Emeritus of Communications at the USC Annenberg School and a former writer of opinion pieces and columns, known for his skeptical and critical opinions. In the early 1960s, he was employed as a research engineer at Bell Telephone Laboratories, Inc., where he pioneered digital computer art and stereoscopic computer animation. He has taught at and been affiliated with various universities, and is the author of many textbooks about communication technology.

Comments are most welcome