Isaac Asimov’s fictional interview on the dangers of AI


Today, AI is everywhere, seen sometimes as a driver of innovation and sometimes as a danger. As the ethical debate rages on over the role of AI in our lives and whether it needs to be regulated, I invited Isaac Asimov, writer, scientist, and inventor of the famous “Three Laws of Robotics,” to compare his vision with our reality.

In short:

  • Asimov’s original goal in formulating the Three Laws of Robotics was to overturn the image of robots as a threat and highlight human dilemmas through moral science fiction.
  • Asimov points out that these laws were a literary device, unsuitable for direct application to current AI, which lacks both understanding and consciousness.
  • He clearly distinguishes modern AI, which is disembodied and statistical, from fictional robots with physical presence and moral reasoning abilities.
  • He criticizes the delegation of moral responsibilities to AI and emphasizes that biases and abuses are human in origin, amplified by opaque systems.
  • He proposes new laws for AI designers, focused on responsibility, transparency, and collective ethics, and calls for human rather than technical reflection on the use of AI.

Me: Mr. Asimov, you came up with your famous “Three Laws of Robotics” in Runaround, a short story published in 1942. What inspired you to write them at the time?

Isaac Asimov:

I wanted to change the way robots were portrayed in fiction. In the 1930s and 40s, robots were still seen as a threat, metal monsters that would eventually turn against their creators, a vision inspired by Frankenstein. I wanted to do the opposite: show that the danger didn’t come from machines… but from us.

The Three Laws weren’t technical rules. They were a literary device, a pretext for exploring the complexity of human dilemmas. What I was writing was moral science fiction, not an engineering manual.

Me: Here’s a reminder for our readers:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, unless those orders conflict with the first law.
  3. A robot must protect its own existence as long as doing so does not conflict with the first two laws.

Today, some see these laws as a basis for guiding AI. Is this reasonable?

Isaac Asimov:

It’s appealing, but misleading. These laws only make sense if we assume that robots understand what a human being is, what injury is, what obedience and intention are. But even in 2025, you haven’t created thinking beings. Your AI doesn’t understand. It correlates, it predicts, it imitates, but that’s all.

You can’t moralize a system that is incapable of consciousness and morality.

Me: So you would say that today’s AIs are not yet “robots” in the sense of your novels?

Isaac Asimov:

Absolutely. The robots in my stories were material entities, often humanoid, with positronic brains. They lived in the physical world and were exposed to real dilemmas. And that’s precisely what allowed me, through them, to ask profound philosophical questions about freedom, responsibility, and truth.

Your AIs are disembodied. They are distributed algorithms. You make them do things, but they don’t experience anything.

Me: And yet, generative AI is already being used to recruit, diagnose, and predict behavior. It influences human decisions. Does that worry you?

Isaac Asimov:

It’s not AI that worries me. It’s how humans use it and, above all, their willingness to delegate moral responsibility to a machine.

When an AI rejects an application, is it discriminating? When a predictive model overestimates the risk posed by a prisoner, is the algorithm racist? Obviously not. These are human biases, amplified by opaque systems. And you persist in talking about intelligence when what you are really dealing with is automated influence.

Me: If you were alive today, what laws would you formulate instead of the Three Laws of Robotics?

Isaac Asimov:

I would address them not to machines, but to those who design them. Here is my attempt:

  1. An AI creator must not harm humanity, nor, through inaction, allow humanity to be harmed or manipulated.
  2. AI systems must be transparent, explainable, and designed to serve the collective interest.
  3. Businesses exploiting AI must be legally and ethically responsible for its effects.

And even that is only a starting point.

Me: Your stories often featured dilemmas and paradoxes. Is that what’s missing from the current debate on AI?

Isaac Asimov:

Yes. You’re looking for technical solutions to political, social, and philosophical problems. You hope that “ethical AI” will solve what you have been unable to tackle collectively: inequality, manipulation, and technological escapism.

In I, Robot, each story was a way of showing that laws were never enough. What was needed was arbitration, context, and discernment. In short, humanity.

Me: One last question. Do you think your writings are still relevant today?

Isaac Asimov:

I don’t know if my writings can guide you, but I know why I wrote them: so that you would never forget that all technology carries with it a vision of the world. And that ethics is not a module that can be grafted on; it is a consciousness that must be cultivated.

If my Three Laws seem outdated to you, it may be because the question you should be asking yourselves is no longer “What can machines do?” but rather “What do we want to do with ourselves?”

Image credit: Image generated by artificial intelligence via ChatGPT (OpenAI)

Bertrand DUPERRIN
https://www.duperrin.com/english
Head of People and Business Delivery @Emakina / Former consulting director / Crossroads of people, business and technology / Speaker / Compulsive traveler