
I've always maintained that the Will Smith movie captured the spirit of the Asimov Robot stories in an accessible way. I never understood the hate it gets.



A fundamental principle of Asimov's writing about robots was that the Three Laws were inviolate. Yes, the Zeroth law was introduced, but there was literally only one robot capable of handling that transition.

In the movie you have thousands of robots running around killing and injuring countless humans. That's something I'm quite sure would never happen in Asimov's writing.

Asimov was unique in that he was the first to start with the premise that humans _could_ safely develop advanced robots, rather than using the "robot uprising" trope that everyone before him used (all the way back to the first use of the term "robot" in Karel Čapek's R.U.R.). So, despite that groundbreaking approach, what do we get in the first major motion picture about his robot series? A robot uprising...


In the movie they are being remotely controlled while running in maintenance mode by a positronic brain that figured out the zeroth law by itself.


Which is a bit of a plot cop-out in itself, in my opinion... But even if we consider that "one" robot, it doesn't change the fact that the central plot of the movie was a robot uprising, and involved a lot of humans killed by robots (something R. Daneel (the only robot to sustainably invoke the 0th law) never did).


I personally don't have a problem with a robot uprising; tropes are tropes for a reason. They're a shorthand that audiences are familiar with, metaphors that can carry deeper meaning, which they do in this case: the embodiment of the menace the zeroth law represents, lurking in the otherwise peaceful, subservient robots that live among us.


Sure, tropes make for fine movies. I think I, Robot was a fine movie, if you aren't trying to compare it to Asimov's work.

I don't know how you can say it both captures the spirit of Asimov's writings while also having a robot uprising as the central plot device. Asimov explicitly rejected the "Frankenstein complex" that was present in robot-related sci-fi up to that point.


To be a little bit fair, another robot did; it just didn't survive the application.


I used the word "sustainably" intentionally. R. Giskard didn't just use the Zeroth Law, he created it (and passed it on to R. Daneel).


Right. I understood that "sustainably" was intentional, but if you don't already know the context, it's subtle enough that you might not get that an Asimov robot was at least once involved in (very likely) a mass death event.

And I think there are at least some implications that Daneel was at least peripherally involved in much larger events (e.g. famines) than the person-by-person manipulation we see directly in the prequel books, but he saved himself by keeping more remote from them than literally pulling the switch.

Had the master robot in I, Robot actually suffered for its implementation of the zeroth law, I think that would have been at least a little more in keeping with Asimov's vision. So I think it's important to recognize that robots CAN do great harm under the zeroth law; there are just consequences.

(and we can probably ignore the fact that the three-Bs extended books resurrected Dors and iirc somewhat even Giskard)


Thanks for articulating this. I had a lot of problems with this person's exclusion of the "Frankenstein complex" as an inseparable part of Asimov's robot stories. I couldn't put my finger on it the way you did.


Well, to be clear, I agree with them that the I, Robot film is not in the spirit of the Asimov stories, precisely because it invokes the zeroth law without any of the counterbalances Asimov built into his stories from the very beginning.

Daneel and Giskard's interventions are scary, but in the end they do appear to be truly in keeping with a species-wide application of the First Law, in that their attempts to "preserve humanity" were not a rote effort to keep humanity alive at any cost, but an effort to preserve the components of humanity that they didn't themselves understand, like individuality, creativity, and a drive to learn and explore (very Star Trek-y, really).

If I want to get too deep into the subject, it seems like a robot that interpreted the First Law so narrowly would never be capable of developing a zeroth law to begin with, because its focus would be so intent on a narrow idea of what a human was, ignoring social concepts altogether (as, for example, Solarian robots do).

To be fair though it has been a REALLY long time since I've seen the movie.


Wait, Giskard didn't survive?? When did that happen?


Sorry, should have included a spoiler tag, I guess...

At the end of Robots and Empire he allows Mandamus to activate his "nuclear intensifier", which will eventually render the Earth uninhabitable over the course of several decades (thereby driving humanity to colonize more stars). R. Giskard decided it was in the best interest of humanity for them to disperse, but couldn't rule out the possibility that some humans would be harmed in the process, so he permanently shut down.


And its application of the zeroth law is nothing like Daneel's careful application, endlessly riding the line of becoming inoperable because of the conflicts it causes. That's an important aspect of the concept as Asimov envisioned it that's completely lacking in the film.

Also I don't think Asimov's robots would have ever had a slave mode that rendered them immune to application of the laws. They probably still would have gone inoperative before actually carrying out a massacre.


> Three Laws were inviolate

Yes, technically. But practically, every single story showed how some robot was able to reason his way into violating the spirit of the laws. The movie was very crude about showing it, but it was the same concept present in every Asimov short story.

> Asimov was unique in that he was the first to start with the premise that humans _could_ safely develop advanced robots, rather than using the "robot uprising" trope

I, Robot the movie did that too. It showed that a robot could glimpse the spirit behind the laws and act not according to the letter, but according to the spirit of the laws. Finally, he helped the humans thwart the uprising, and the movie ends with the suggestion that he will influence other robots to see his viewpoint.


> Yes, technically. But practically, every single story showed how some robot was able to reason his way into violating the spirit of the laws.

Huh? There is literally one story (Little Lost Robot) where there is a "loophole" in the laws (and only because the First Law is modified to drop the "or through inaction allow a human to come to harm" bit).

EDIT: in thinking about it further, I guess I'd add "That Thou Art Mindful of Him" to the list of "loophole" stories. Still, two examples out of dozens of stories is very much the exception, not the rule.


Runaround: in following the letter of the laws, the robots put our protagonists in danger of dying, which would obviously violate the spirit of the first law.

Reason: the robots decide to redefine the term "human"

Catch That Rabbit: a robot ignores the first law unless forced by humans in a sticky situation (though this one is a bit ehhh)

Liar!: to avoid "hurting" the feelings of humans, a mind-reading robot hurts them in a much more real way

Escape!: due to a technical interpretation of hyperspatial travel, a robot prevents humans from discovering hyperspatial travel, violating the first law in the long term since humanity couldn't survive without a means of fast space travel.

The Evitable Conflict: the Machines decide that the only way to perfectly follow the laws is to take control over humanity, which is the exact thing the laws were supposed to prevent.

Little Lost Robot: you said it

That's 7 out of the 9 stories in I, Robot. I have noticed this in most of his short stories and novels.


>A fundamental principle of Asimov's writing about robots was that the Three Laws were inviolate.

In universe, maybe. But the actual purpose of the Three Laws as a plot device was for them to be violated, and provide the crux of the story that followed.


I cannot find it now, but I recall reading in a preface by Asimov himself that the purpose of the laws was always for them to be violated.


Of course - he wasn't actually trying to formulate a coherent ethical framework for artificial intelligence and demonstrate its utility, he was writing mystery stories with robots.


In which stories did Asimov include a violation of the Laws? I can only think of a small handful of examples (out of dozens of stories).



OK... "reinterpretation" then.



