Should We Slow Down AI Research?

World AI Cannes Festival – AI R&D Fast or Slow?

I had the pleasure of speaking at the largest AI conference in Europe the week of February 5, 2024: the World AI Cannes Festival (WAICF), held in Cannes on the Côte d'Azur. The conference was a premier event, with keynotes that featured Bruno Le Maire, the French Minister for the Economy, Finance, and Industrial and Digital Sovereignty; Yann LeCun, Chief AI Scientist at Meta; and Luc Julia, Scientific Director at Renault (and co-inventor of Siri).

One of the more interesting and insightful panel discussions was the session titled "Should we slow down research on AI?" It was a debate on whether effective accelerationism, also known as e/acc, should be constrained in favor of a more responsible, methodical approach to AI research and development.

Three Points of View

Yes: Regulate and Constrain

Representative: Mark Brakel, Director of Policy, Future of Life Institute

The case against e/acc was not well made. The position is to pause development of "powerful" AI systems ("powerful" was not well defined) and allow R&D to continue only for "good" causes ("good" was not well defined either). Take the analogy with nuclear weapons: there is a belief in the "decelerate" AI camp that AI will be used by bad actors, just as nuclear weapons are used as a threat today (although they have only been detonated twice to harm humans). The belief is that the risk is there.

What the "Yes" camp failed to address is this: if bad actors and rogue states will inevitably embrace AI for nefarious purposes (see the recent reporting on North Korea, Russia, China, and Iran using AI to enhance their hacking capabilities), shouldn't AI be used to proactively assess and predict these threats, and to actively fight them with the same means and weapons?

Middle of the road

Representative: Professor Nick Bostrom, Oxford University and head of the Future of Humanity Institute

Move fast in the R&D direction, but slow down as the technology approaches transformative capabilities, until the risks have been fully analyzed. The self-driving car was used as an example: some would argue that the “move fast R&D” approach has continued in the production phase, endangering drivers, passengers, and bystanders.

No: Don’t constrain R&D, but intelligently regulate the application of AI

Representatives: Yann LeCun, Vice President and Chief AI Scientist, Meta AI; Francesca Rossi, IBM Fellow and AI Ethics Global Leader

AI growth will be progressive, not instant, so there is no reason to fear that AI will go rogue on its own anytime soon. Society has built-in safeguards that keep such technologies safe. As an analogy, look at turbojet technology: the turbojet was invented in the thirties, and it was not until the sixties that it was deemed safe and economically viable for commercial use—in that case, regulators and government entities stepped in to ensure the safety of consumers.

Mr. LeCun believes this will be the case with AI as well. R&D on the turbojet was allowed to progress (and was implemented for military use), and regulation came in only when the technology was ready to be commercialized. Therefore, AI R&D should be allowed to progress unfettered by government regulation, because R&D will also help address and manage the risks of AI.

Regulators should focus on regulating the application of AI—the use case—not the R&D, with companies doing the research held responsible and accountable for what they create and deploy.

Ms. Rossi offered an insightful counter to the nuclear-weapon analogy: AI research is analogous to splitting the atom, while a nuclear weapon is merely one application of that capability. The nuclear reactor, by contrast, is a positive application of the same atom-splitting technology.

Mr. LeCun then used other colorful analogies to frame how one should look at managing AI risk from an existential-risk perspective:

  1. Terminator Scenario: Loss of control scenario where AI goes rogue and takes over, destroying humanity
  2. Dr. Strangelove Scenario: AI is empowered by humans and used to destroy other humans
  3. Blade Runner Scenario: Humans harm AI that have gone rogue

There are safeguards already in place to manage the risk for the first two, whereas the third is more an ethical question than a practical scenario. Furthermore, these are imaginary existential risks; one cannot regulate fictitious, theoretical threats, because doing so pushes governments to enact regulations that end up being counterproductive.

Another problem with regulation as it is being proposed today—as in the EU AI Act—is that governments are targeting open-source large language models (LLMs). Private companies with an interest in restricting innovation—companies that want to sell their proprietary LLMs—are convincing regulators that unfettered access to open-source LLMs will give rogue players access to IP they can then weaponize. I'm not going to pontificate on the benefits of open source; suffice it to say that the regulatory focus here is on the wrong aspect of AI, i.e., the R&D.

Panel Discussion Takeaways

The takeaways from the session can be summarized as follows:

  1. A responsible AI framework should be implemented by all companies and individuals performing R&D on AI.
  2. Regulating entities should focus on regulating the application of AI—the use case—not the R&D.
  3. Standards that do not inhibit innovation should be defined and applied.
  4. Access to open-source LLMs should remain unfettered, allowing companies to implement safe, trusted AI solutions.

How it impacts MOZAIQ

MOZAIQ is nowhere near creating an AI-powered solution that could go rogue or, if it did, cause life-threatening and/or reputational damage to the provider and users of the solution.

What MOZAIQ is doing is applying the principles of responsible AI to every intelligent automation solution we design, build, and deploy for our customers, ensuring adherence to the core tenets of responsible AI: privacy and security, trust and fairness, equity and inclusion, transparency and accountability, and safety and reliability.

Note: The topic of AI is massive, provocative, and constantly evolving. Every day there's a new story, a new point of view, or a new application of AI that spurs heated debates across multiple aisles. There is no right answer. Therefore, let me be clear: the opinions expressed in this blog post are one of many possible takes on the topic, and the opinions are mine and mine alone. Oh, and this blog post was written by a real human and does not contain content generated by ChatGPT or any other generative AI platform.

Find out how we can support all of your intelligent mortgage automation needs. Book a demo.