A. Michael Noll
March 4, 2024
© Copyright 2024 AMN
[This blog is authored by A. Michael Noll, an Emeritus Professor at USC and an early pioneer in digital computer art, 3D animation, and tactile communication at Bell Labs. Posted with the permission of the author.]
Regulators like to regulate, and technology and big businesses are tempting targets. Government hates the success of new technologies and businesses. Perhaps their success only more strongly exposes the failures of government. Today, it is AI (artificial intelligence) that has attracted the attention of regulators.
Only a year or so ago, it was the metaverse that we were told needed to be regulated. However, it seemed impossible to define what the metaverse was all about, other than nearly everything and anything. Ultimately, the bubble burst – or the metaverse simply evaporated.
Now it is artificial intelligence – AI – that is attracting all the regulatory attention. It is supposed to revolutionize everything, eliminate jobs, destroy our privacy, and even possibly lead to the end of humanity. We are told that AI needs to be regulated to save humanity.
In the 1960s and 1970s, it was IBM and digital computers that had to be regulated. Computers would eliminate jobs and destroy privacy. The people had to be protected through government regulation. But no practical form of regulation could be invented.
AI seems to include “the cloud,” with its centralized storage of data and provision of computing power. The debate between centralization and decentralization of computing is quite old. Decades ago, centralization was called “time-shared” computing, but the computing power and storage capacity were not then available to make it practical. That has changed today. The intelligence community likes the cloud, since all the information is stored in one place rather than in millions of individual personal computers. But one big database is also one big target for those intending to inflict harm – or spy.
AI seems to encompass nearly everything and anything involving computers and data storage. Other than scale, nothing is new, and much of AI dates back many decades – even including robotics. To the extent that AI involves “copying,” copyright and existing laws might suffice. The privacy issues have always been there, and they are difficult to define and control. But fear of the unknown – and of technology – is not new.
Perhaps Hollywood is at fault in creating fear of computers and AI. The film “Forbidden Planet” had a computer (AI in today’s terms) that took over and attempted to kill in order to protect. The HAL computer in “2001: A Space Odyssey” actually did kill to protect itself from supposed harm. Many decades ago, I wrote that there should always be a human in the final decision – computers (or AI) should not be allowed to act without human oversight and review.
Regulators (and lawyers and politicians) need employment, and regulation is one way of keeping themselves employed. Conferences, meetings, discussions, studies, and legislation are some of their self-serving activities. Is it government and regulators that need to be contained – regulated? I worry more about the “natural stupidity” of humans – perhaps that is what needs to be regulated!

In reflecting on Michael Noll’s post and an online discussion we both participated in, I’m convinced that the focus on AI regulation is politically rational, although not clearly rational from an analytical standpoint. It’s hard to seriously regulate something for which there is no agreed definition, for example. But once a leading authority on AI announces that it could be an existential threat to our world, no politician or regulator would refuse to discuss its regulation. It is a politically rational response.