Six month moratorium on AI development? In your dreams

By Neil Raden, April 5, 2023
Did the now-infamous six month AI moratorium letter ever have a chance? Here's a look inside the arguments - and where we go from here.

(© ImageFlow - shutterstock)

The news hit fast. From an article in SC Magazine, "Tech luminaries call for a 6-month moratorium on AI, but can they stop it?":

Social media is buzzing with the news that more than 1,000 tech and AI luminaries signed a petition for the industry to take a six-month moratorium on the training of artificial intelligence (AI) systems more powerful than OpenAI's GPT-4.

The petition opens with this:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.

Andrew Barratt, Vice President at Coalfire, offers an interesting perspective in an article in Secureworld, Tech Leaders Call for Pause on AI Development:

Anything we do to stop things in the AI space is probably just noise. It's also impossible to do this globally in a coordinated fashion. AI will be the productivity enabler of the next couple of generations. The danger will be watching it replace search engines and then become monetized by advertisers who 'intelligently' place their products in to the answers.

What is interesting is that the 'spike' in fear seems to be triggered since the recent amount of attention applied to ChatGPT. Rather than take a pause, really we should be encouraging knowledge workers around the world to look at how they can best use the various AI tools that are becoming more and more consumer friendly to help provide productivity. Those that don't will be left behind.

Before I get into this discussion, let me share a little diversion about this “cat out of the bag” expression. Linguists (and author and historian Aja Raden confirms this) consider the following the most plausible explanation: it dates back to the Middle Ages, when people would come to Market Day to stock up on necessities. Livestock vendors in medieval marketplaces sought to swindle their buyers. When someone purchased a piglet (to be raised for eventual sale or nourishment), the vendor would sneak a cat into the bag instead, cheating the buyer out of the higher price for a pig. It wasn’t until the buyer arrived home and, literally, let the cat out of the bag that they’d realize they’d been scammed. Snopes, in fact, casts the story as an outright falsehood, pointing out that the significant weight difference between a livestock pig and a cat would make such a scam impossible even without the buyer seeing the critter. Apparently, the Snopes writer doesn’t know the difference between a pig and a piglet.

The AI moratorium - framing the arguments

From an article in Yahoo Finance, previously published in Fortune (paywall) by David Meyer, I’ll summarize his arguments and add my own:

1. "The signatories over-hype AI and generally have the wrong motivations.”

Summary: Computational linguistics expert Emily Bender accuses the letter’s authors of “unhinged AI hype” and of “helping those building this stuff sell it,” argues that policymakers should instead focus on how technology is being used to “concentrate and wield power,” and is suspicious because the letter was published by the Future of Life Institute, which espouses longtermism, a philosophy focused on humanity’s very-long-term survival.

My view: A few years ago, my position was that “AI Ethics” was wound around the axle of dystopian visions of robot supremacy, an obsession that colored the field’s debates instead of seeking practical solutions to the potential harms of existing “narrow AI.” However, with the public appearance of LLMs, most notably OpenAI’s GPT-3.5 (ChatGPT), and now a more functional GPT-4 (with GPT-5 on its way), my fear is that the decidedly non-intelligent class of “Generative AI” is being perceived as true intelligence, which fuels hysteria that the end of humanity is nigh.

2. "The pause would do nothing to mitigate existing AI threats.”

Summary: Princeton computer scientists Sayash Kapoor and Arvind Narayanan argue that the letter focuses on speculative threats while proposing nothing to mitigate the harms that can arise from today’s AI technology.

See my comments on point #1.

3. "Business is business.”

Summary: Tim Hwang, the CEO of regulatory-data outfit FiscalNote, thought the letter’s authors had valid concerns, but: “It’s hard to put the genie back in the bottle at this point. There’s too much at stake at this point in multiple different geographies to be able to roll back time here. It’s also somewhat impractical, I think, to tell an entire industry to stop making money.”

This is a pretty lame assessment, but probably true. Consider the cigarette industry. After years of trying, the FDA and the FTC finally stopped tobacco companies from luring young people into a deadly habit. (The Tobacco Master Settlement Agreement, reached in 1998, bans outdoor, billboard, and public-transportation advertising of cigarettes; restrictions were further tightened in 2010 with passage of the Family Smoking Prevention and Tobacco Control Act.) What was the effect?

On a personal note, my father had five siblings. All six were born between 1908 and 1917. All lived into their nineties except one, who died at age 44 from lung cancer in 1960. Everyone in the family knew cigarettes were the cause, and everyone immediately ceased smoking. It was thirty-one years before the federal government clamped down on the cigarette industry, yet the number of cigarettes that the largest cigarette companies in the United States sold to wholesalers and retailers nationwide increased from 202.9 billion in 2019 to 203.7 billion in 2020, according to the most recent Federal Trade Commission Cigarette Report. From Statista: “In 2021, revenues from tobacco tax in the United States amounted to 12.14 billion U.S. dollars.” As the late Illinois Republican Senator Everett Dirksen is apocryphally quoted as saying, “A billion here, a billion there, and pretty soon you're talking real money.”

4. "Why hand an advantage to China?"

Summary: “Let’s pretend magically that OpenAI, Amazon, Microsoft, and Google stop. Do you really think the Chinese are going to stop? Or the Russians? There’s no way,” veteran tech investor Daniel Petre told the Australian Financial Review. The Center for Data Innovation, a reliably pro-Big Tech think tank, also raised the specter of China racing ahead in its argument that the U.S. should accelerate rather than pause its AI development.

It’s no secret that impressive advances in AI are happening all over the world, so a moratorium on US companies would have little effect. Consider the case of TikTok: TikTok, along with its Chinese counterpart Douyin, is a short-form video hosting service owned by the Chinese company ByteDance. Concerns about the alleged security risks posed by TikTok have most prominently been raised by US lawmakers and national security officials, who say that user data gathered by the app could be accessed by the Chinese government.

5. "It could set a dangerous precedent.” 

Summary: Andrew Ng called the moratorium call “a terrible idea” because government intervention would be the only possible way to enforce it.

True, but my contention is that the danger today is not AI taking over humanity anytime soon, it’s the presumption that it is already happening, which is worse. I already sense a capitulation to GPT-X.

My take

Melanie Mitchell is a brilliant and insightful professor at the Santa Fe Institute. Sen. Chris Murphy of Connecticut recently tweeted, “ChatGPT taught itself to do advanced chemistry. It wasn’t built into the model. Nobody programmed it to learn complicated chemistry. It decided to teach itself, then made its knowledge available to anyone who asked. Something is coming. We aren’t ready.” 

Dr. Mitchell felt the need to comment and pushed back on Twitter.

Murphy laced into her, accusing his critics of “…bullying policymakers away from regulating new technology by ridiculing us when we don’t use the terms the industry use.” She called the accusation “strange and ironic, given that many of those who criticized his tweet (including me) are strongly in favor of the kind of transparency and accountability from tech companies that Murphy is pushing for.”

If that isn’t enough, she continued to write about “The Letter,” disagreeing with its basic premise that we are doomed. She finished with a flourish in “What Should We Be Worried About?”

- LLMs “are unreliable and cannot be trusted to generate truthful information.”

- "Humans are continually at risk of over-anthropomorphizing and over-trusting these systems, attributing agency to them when none is there."

- “Public fear of AI is actually useful for the tech companies selling it, since the flip-side of the fear is the belief that these systems are truly powerful and big companies would be foolish not to adopt them.”

You can read the whole exchange on her Substack, including her conclusion:

We need a Manhattan Project of intense research on AI’s abilities, limitations, trustworthiness, and interpretability, where the investigation and results are open to anyone. What would be a good name for such a project? Is the name ‘Open AI’ taken?
