Why I Signed the "Pause Giant AI Experiments" Petition


The "spirit" is right; the body has many flaws


Last Tuesday, I received an email from the Future of Life Institute asking me to sign a petition to pause giant AI experiments. When I signed the letter, the organizers asked us to keep it confidential until the moment of publication. At the time, I didn't expect it to generate so much news coverage, commentary, and discussion.

Shortly after its publication, I was contacted by a couple of news outlets, one from Argentina and the other from Mexico, to participate in their live programs and give my opinion.

It was then that I realized the FLI's letter was indeed a high-impact initiative.

Though in the end I decided to sign it, I also found many statements in the letter that I disagree with, so in this post I want to set the record straight and give the reasons for and against the letter. I encourage you to read the letter itself as well; it's not that long.

Why now?

It's important to be aware that the sense of urgency in the open letter is not about Artificial Intelligence in general; it's about the recent development and release of what's been called "Generative AI" or GenAI for short.

Unless you've been hiding under a rock, you've heard about ChatGPT (released last November; gosh, it seems so far in the past), which is the most prominent example of GenAI, but there are many others, like DALL-E, Claude, Stable Diffusion, Poe, You.com, Copy.ai, and more. AI capabilities are also being incorporated into many products, like Notion, Microsoft Office, the Google Workspace suite, GitHub, etc.

Many of us have recognized GenAI as a real game changer, as opposed to others who called it "a fad." Bill Gates writes that he has seen transformational technology only twice in his already long life, and that GenAI is the second time (the first was when he saw a graphical user interface).

But it hasn't been a smooth road.

Aside from the notorious cases of "evil personalities" hijacking the chatbots, we have seen plenty of factual errors and even invented facts –called "hallucinations"– which mislead humans because the text reads as if it were written with the utmost assurance. We humans tend to show insecurity when we aren't certain of what we are saying, but of course, machines don't feel insecurity (nor assurance, actually).

Companies like OpenAI try to give the impression that the mistakes are being ironed out, but some experts believe that mistakes and hallucinations are intrinsic to the technology, not minor details. I proposed a way to minimize mistakes without pretending to eliminate them altogether.

While these deficiencies are far from being corrected, the race between competing companies, in particular OpenAI (with Microsoft behind it) and Google (with its associates DeepMind and Anthropic), is at full speed. Products are being released at a breakneck pace, just for the sake of a market-share advantage, without much worry about the consequences for society.

We –the citizens– are left on our own to deal with the introduction of GenAI into our lives, with all its possibilities of misinformation, bias, fake news, fake audio, and even fake video.

Governments do nothing about it. International organizations do nothing about it.

I understand that text or image generation doesn't look as critical as medical diagnosis or medication, but there are important consequences nonetheless. We had a first taste of how misinformation (leveraged by tech platforms like Twitter) played a role in the US 2016 and 2020 elections, and now we are suffering from polarization in societies all around the world. But the Twitter bots of a few years ago are nothing compared to what is about to come with GenAI if we do nothing about its adoption.

Let's now review what the letter gets right and later on what, in my opinion, it gets wrong.

What the open letter gets right

  1. GenAI systems are "powerful digital minds that no one – not even their creators – can understand, predict, or reliably control." They are "unpredictable black-box models with emergent capabilities." This explains why they are intrinsically dangerous systems. For instance, "emergent capabilities" means that when GenAI systems get large enough, new behaviors appear out of thin air –like hallucinations. Emergent behaviors are not engineered or programmed; they simply appear.
  2. "AI labs [are] locked in an out-of-control race to develop and deploy ever more powerful digital minds." This non-stop race can be understood in terms of market share domination for the companies, but what about societal consequences? They say they care about it, but the relentless pace points otherwise.
  3. Instead of letting this reckless race continue, we should "develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."
  4. Another good point is not trying to stop AI research or innovation altogether: "This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities." Further, a reorientation of tech efforts is proposed: "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."
  5. Finally, an emphasis on policymaking is proposed as the way to go: "AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should, at a minimum, include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause."

What the letter gets wrong

Most of what I think the letter doesn't get right is at the beginning; later on, things improve a lot. I have the clear impression that the first and last parts of the letter were written by different people (I don't suspect either of having been written by a bot). Let's jump to the specifics:

  1. The references are not authoritative enough. Oral declarations are not objective evidence. Even the Bubeck et al. reference is not really a scientific paper, because it wasn't even peer reviewed! You know, papers published in prestigious journals go through a review process with several anonymous reviewers; I myself review more than a dozen papers each year. If the Bubeck paper were sent to a peer-reviewed journal, it surely wouldn't be accepted as it is, because it uses subjective language (what about "Sparks of Artificial General Intelligence"?).
  2. Some claims in the letter are plain ridiculous: it starts with "AI systems with human-competitive intelligence…", but as I explained in a previous post, current AI systems are not at all human-competitive, and most human vs. GenAI comparisons are misleading. The reference supporting machine competitiveness is bogus, as I explained in the previous point.
  3. The letter implies claims of Artificial General Intelligence (AGI), as in "Contemporary AI systems are now becoming human-competitive at general tasks," but I'm in the camp of those who place AGI in a very distant future and don't even see GPT-4 as a substantial step toward it.
  4. The dangers to the job market are not well framed: "Should we automate away all the jobs, including the fulfilling ones?" Come on; AI is not coming for most jobs, but the way it's taking some of them (like graphic design capabilities built by scraping thousands of images without any monetary compensation to their human authors) could be addressed, not by a moratorium, but by imposing taxes on big tech and supporting graphic designer communities.
  5. Sorry, but almost every single question the letter asks is poorly worded: "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?" This is a "humans vs. machines" scenario, which is not only ridiculous but also fuels the wrong hype about AI systems, as Arvind Narayanan (@random_walker) points out on Twitter. Terminator-like scenarios are not the real danger here.
  6. To conclude with the letter's nonsensical questions, let's check this one: "Should we risk loss of control of our civilization?" This is wrong on so many levels that it's hard to comment on. For starters, do we currently have control of our civilization? Please tell me who has control of our civilization besides the rich and the heads of state. Then, who is "we"? The humans? If so, we are back to the humans vs. machines mindset, which is fundamentally wrong. The real danger is the use of AI tools by some humans to dominate other humans.
  7. The "remedy" proposed (the "pause" on the development of Large Language Models more capable than GPT-4) is both unrealistic and misplaced. It's unrealistic because it's addressed to AI labs, which are mostly under the control of big tech companies with specific financial interests –one of which is to increase their market share. What do you think they'll do, what the FoL Institute proposes, or what their bosses want? You're right. It's also misplaced because the pause wouldn't take care of the looting already taking place from human authors or the damage already being done with misinformation from human actors with tools that don't need to be more powerful than GPT-4.
  8. Finally, some of the people signing the letter, and in particular Elon Musk, cannot be seen as examples of ethical AI behavior: Musk has misled Tesla customers by calling "Full Self-Driving" capabilities that not only fail to comply with Level 5 of the standard proposed by the Society of Automotive Engineers, but also fail to comply with Level 4 and could barely fit into Level 3. Not only that: Tesla has released potentially deadly machines to the public well before ensuring their safety, and Tesla cars in autonomous mode have actually killed people. What moral authority does Elon Musk have to ask for "safe, interpretable, transparent, robust, aligned, trustworthy, and loyal" AI systems that he hasn't put into practice in his own company?

Then again, why did I sign at all?

After everything the letter gets wrong, why did I decide to sign?

I'm not alone in signing the letter while also criticizing it. There is, for instance, @GaryMarcus, who said, as reported by the NYT:

"The letter is not perfect, but the spirit is exactly right."

This is a way of saying that something needs to be done, and the letter can be seen as a first attempt at doing it. That is something I can agree with.

But if you want a more lucid take on the subject, read, for example, Yuval Harari's op-ed in the NYT. Apart from some over-ambitious phrases such as "In the beginning was the word," I liked his critique of Terminator-like scenarios and his take on the real dangers:

… Simply by gaining mastery of language, A.I. would have all it needs to contain us in a Matrix-like world of illusions, without shooting anyone or implanting any chips in our brains. If any shooting is necessary, A.I. could make humans pull the trigger, just by telling us the right story.
