Could someone with the Caliban novels handy add MacBride Allen's four new laws? I think they're relevant to mention here, but I couldn't find them on the net. - Kimiko 20:39 May 1, 2003 (UTC)
I don't know how to source this properly, but I have Isaac Asimov's Caliban (Roger MacBride Allen) on hand, and directly copying from Fredda Leving's speech on pages 214-215, the four laws are: 1) A robot may not injure a human being. 2) A robot must cooperate with human beings, except where such cooperation would conflict with the First Law. 3) A robot must protect its own existence, as long as such protection does not conflict with the First Law. 4) A robot may do anything it likes except where such action would conflict with the First, Second, or Third Law. Is that useful? -- 209.217.110.69 ( talk) 21:26, 4 April 2008 (UTC)
Wasn't there a fourth law by Asimov, that an order to self-destruct will not be followed? Pryderi2 11:04, 1 September 2007 (UTC) The robots cannot harm humanity or the environment.
I think I remember a novel in which a robot was forced to break the laws because they were contradictory. It has been a long time since I read it, but I'm fairly sure about it. BL 01:54, 17 Sep 2003 (UTC)
All of the laws are potentially contradictory, and that's why they needed a robopsychologist like Dr. Susan Calvin!
In the real world, not only are the laws optional, but significant advances in artificial intelligence would be needed for robots to easily understand them. Also, since the military is a major source of funding for research, it is unlikely such laws would be built into the design. This seems like a rather moot point. Somebody could argue that the military would be the group most interested in developing robots with the original three laws in them, since they would probably be the first to suffer if robots turned against their masters, whether for the advantage of a human enemy or for the advantage of the robots themselves. I think it is significant that the biggest efforts DARPA and other military groups have going in the field of robotic vehicles are robot transport projects (a robotic donkey, if you wish, reminding one of the robass in the SF classic "The Quest for Saint Aquin") and robot reconnaissance drones. Dr. Susan Calvin gave some rather sharp reasoning to justify the safety aspects of the 3 laws, and these safety questions apply to the military as well. AlainV, on a pleasantly snowy and starry 20th of December evening.
What? How are they contradictory? They are worded so that each law is strictly more important than the one below it, and should be followed instead of it: if a robot sees a human about to be crushed by something big and heavy collapsing, it MUST push the human to safety, risking its own existence (1st Law followed at the expense of the 3rd) in the process. Machete97 ( talk) 21:43, 21 April 2008 (UTC)
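For what it's worth, that strict ordering is easy to model in code. A minimal sketch (Python; the Action fields and the collapsing-object scenario are invented for illustration, not taken from Asimov's text):

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False
    allows_human_harm: bool = False
    disobeys_order: bool = False
    destroys_robot: bool = False

def choose_action(candidates):
    """Rank candidates lexicographically: a First Law violation outweighs
    any Second Law violation, which outweighs any Third Law violation."""
    def cost(action):
        return (
            action.harms_human or action.allows_human_harm,  # First Law
            action.disobeys_order,                           # Second Law
            action.destroys_robot,                           # Third Law
        )
    return min(candidates, key=cost)  # False sorts before True

# Machete97's scenario: the robot must push the human clear even at the
# cost of its own existence.
options = [
    Action("stand by", allows_human_harm=True),
    Action("push the human to safety", destroys_robot=True),
]
print(choose_action(options).name)  # -> "push the human to safety"
```

The tuple comparison makes any First Law violation outrank everything below it, which is exactly the ordering described above.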
It's quite interesting that the three laws are based on consequence morality (least harm to most people / most good to most people) rather than duty morality (don't do to others what you wouldn't have done to you in the same situation). Of course, since Asimov's robots have very little self-respect, the golden rule might not work very well: it implies that the actor is free and valuable. But consequence ethics have problems too, big problems. It's quite possible for two people who completely obey a consequence morality to be completely opposed. They might even want to kill each other because they disagree on who has the best course of action. I've only read I, Robot and some short stories; do Asimov's robots ever disagree like that? Incidentally, in a French (Belgian?) comic book called Natasha, the protagonists travel to the future to find a society of robots who, in accordance with Asimov's laws, keep the population drugged/brainwashed into unthinking bliss.
Just wanted to explain my edit of the page a bit. Daneel's group of robots was not called the Angels. The Joan sim compared them to angels, but that was as far as it went. And there was no faction of New Law robots in the second trilogy, to my recollection. No robot wished to be free of the laws. The closest it came was Lodovik being freed of them by the Voltaire sim, and HIS position was that humanity should make its own decisions free of constraints, not that robots should.
Although largely a simple action film, Alex Proyas's I, Robot pinned its central plot to the problem of *interpretation* for any *law*. This plot point has been used in other films featuring artificial intelligence, for example Terminator 2: Judgment Day and 2001: A Space Odyssey.
In Terminator 2, a computer system (Skynet) developed by the American military is charged with a primary goal: determine the optimal strategy to defend the United States from its enemies. Unfortunately, as Skynet learns at a geometric rate, it determines that the true enemy of the United States is *humans themselves*. Thus, it launches the American nuclear missiles at the former Soviet Union, knowing that mutually assured destruction will eliminate most of the humans in the U.S.
In 2001, the HAL computer operating the Discovery spaceship has been programmed with conflicting orders regarding its mission. Its original programming states that it cannot distort or misrepresent information -- it cannot lie to the crew. Specifically for the mission at hand, HAL has been programmed not to reveal the true purpose of the mission to the crew of the Discovery. (Spoiler warning) In an attempt to resolve these seemingly conflicting orders, HAL decides that the only suitable alternative is to kill the crew; this way, HAL doesn't have to lie to the crew because there's no crew to lie to.
Even though the word 'computer' and the word 'program' (the latter used only once in the film) are used to refer to HAL, both Clarke (in the novel) and Kubrick imply that HAL is more than just 'machine' intelligence (especially Kubrick). HAL acts more like a strong AI and in that way may not have been bound by hard- or soft-coded laws.-- aajacksoniv ( talk) 15:10, 2 November 2008 (UTC)
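HAL's bind can be stated as a tiny constraint problem: grant that concealing the mission from a live, questioning crew eventually amounts to deception, and the only state satisfying both orders is one with no crew. A hedged toy sketch of that logic (my own framing, not anything from Clarke or Kubrick):

```python
# Toy model of HAL's dilemma. Everything here is an invented
# illustration, not from the novel or film.

def deceives_crew(state):
    # Assumption: concealing the mission from a live crew counts as deception.
    return state["crew_alive"] and not state["mission_revealed"]

def reveals_mission(state):
    return state["crew_alive"] and state["mission_revealed"]

constraints = [
    lambda s: not deceives_crew(s),    # "cannot distort information"
    lambda s: not reveals_mission(s),  # "do not reveal the mission"
]

for state in [
    {"crew_alive": True, "mission_revealed": False},
    {"crew_alive": True, "mission_revealed": True},
    {"crew_alive": False, "mission_revealed": False},
]:
    print(state, all(c(state) for c in constraints))
# Only the state with no living crew satisfies both constraints -- the
# degenerate solution HAL converges on.
```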
In I, Robot, the central computer V.I.K.I. interprets the Three Laws of Robotics as requiring martial law in order not to allow humanity to come to harm through inaction. (The First Law, which supersedes the Second Law of obeying human orders.)
Some people have also postulated that in The Matrix, which also features an AI nemesis to humanity, the genuine reason why humanity has been enslaved is not some thermodynamic farce, but that some irrevocable primary programming in the AI will not allow it to commit genocide against humanity, so it uses enslavement as a viable programmatic alternative.
Enslavement?! They used humans as a power source because "we scorched the sky", so they placed us in many people's idea of heaven: virtual reality so good that you don't know you're in it, and are oblivious to reality (SPOILER ALERT! well, reality of sorts). The only mistake they made was immersing everyone in the same fantasy and giving it "rules". Where in the films is this "irrevocable primary programming" that "uses enslavement as a viable programmatic alternative" mentioned? Machete97 ( talk) 21:56, 21 April 2008 (UTC)
Often, authors will use this as an allegory for the problems of Rule of Law in general, and particularly acts of government mandate in socioeconomic affairs.
Has an author gotten into trouble for citing the three laws without permission? -- 198.87.109.49 23:44, 14 August 2005 (UTC)
Does anyone know the 4th law? It was featured in a short story in the anthology "Foundation's Friends", and starred either Powell or Donovan (who had subsequently earned a PhD...). Law 4 stated that a robot must procreate, except when doing so would violate the first three laws... The robots themselves had RISC chips for CPUs...
132.205.46.188 23:53, 21 August 2005 (UTC)
[[1]] Two years later, but the poster of that should have put this here. Machete97 ( talk) 21:48, 21 April 2008 (UTC)
"Some roboticists believe that the Three Laws have a status akin to the laws of physics; that is, a situation which violates these laws is inherently impossible."
Note: discussion moved to Mindrec (discussion).
I have restored Michael Shermer's Three Laws of Cloning, since they are a valid example of the way Asimov's words have influenced later thinkers. Certainly, they were published in a more "serious" medium than the pastiches and parodies the article also includes.
Anville 10:38, 10 October 2005 (UTC)
The article now states:
I can't provide an exact cite, but somewhere either in one of his autobiographies or in some introductory matter, Asimov stated that other writers could not quote the Three Laws verbatim because he held the copyright. I am not a lawyer, but that makes sense to me: the Laws may be viewed as a distinct work rather than an excerpt from the story where they first appeared.
In which case, I wonder if it is legitimate for them to be quoted in Wikipedia. The article is legitimate critical discussion, but is it acceptable to quote an entire work for that purpose just because the work is only three sentences long? Frankly, I would like to think that it is, but what is legal is another matter.
I note that the article List of adages named after people contains a paraphrase of the Three Laws, but does not quote them. But I don't know why the person who decided to do that did so. --Anonymous, 02:45 UTC, November 12, 2005
One thing that annoys me about these laws is that a lot of otherwise intelligent people think they are universal and try to apply them to non-Asimov fiction. Quoting these three laws outside of a discussion of a fictional work involving them is just plain missing the point.
Another problem I have with the laws is that, in my opinion, these laws are something you would apply to slaves: Don't hurt humans, your masters. Do what your master tells you. Protect yourself, but only because you are worth money to your master. Kind of reminds me of the way blacks were treated 200 years ago in the USA. How are the three laws ethical? Create something that can think and maybe even feel, and then program and treat it like a slave. I understand this was the point, but I think a lot of people think that the laws represent a higher morality.
Sorry if my rant is off topic to the article, but I am tired of people abusing the three laws in intellectual discussion. 69.244.90.248 03:23, 20 December 2005 (UTC)
The article states that this rule was first articulated by Daneel in "Robots and Empire". From what I recall it was Giskard at the end of "The Robots of Dawn" who stated this law. I don't have that book with me, so can someone check the ending and, if what I remember is true, correct the article? Pembeci 19:23, 26 December 2005 (UTC)
Does the 0th Law supersede the First? It should. Robots should act for the greater good of humanity. Everything and everyone should be orchestrated towards the greater good of humanity. Machete97 ( talk) 22:00, 21 April 2008 (UTC)
Perhaps I am just missing it. The article, whilst mentioning the Zeroth Law several times, does not seem to actually state what it is. — überRegenbogen ( talk) 11:31, 25 February 2008 (UTC)
If a robot were transported back in time to, say, the early 1930s, would it be obliged, by the 0th law, to kill Hitler? --unsigned by 86.141.52.149
Has mankind recovered? If not, yes, the robot would have been obliged to kill Hitler. If it has recovered, why interfere? Should a robot continue killing other politicians / military / doctors / killers after it was done with Hitler? At what point would it stop? -- FocalPoint 21:10, 15 March 2006 (UTC)
I think that with future knowledge the robot would be obliged to stop anyone who committed genocide.
If the 0th Law supersedes the 1st, then the robot might support Hitler, even dispatching his enemies. Its logic could mean it believes Hitler is acting for the greater good of humanity, at the expense of the few (million). Machete97 ( talk) 22:06, 21 April 2008 (UTC)
-G —Preceding unsigned comment added by 70.24.149.157 ( talk) 02:30, 1 July 2008 (UTC)
And who is to say that "humanity" would not be defined as only two persons, or a specific sect? Not everyone has a job, so to protect humanity a robot must kill those humans who do not. The same goes for level of intelligence, number of degrees, or wealth. I say: robot, kill the Zeroth Law. -- Taxa ( talk) 22:43, 1 September 2009 (UTC)
This is a very fun topic, clearly with a lot of work put into it, and I would hate to see the article go through a WP:FARC. However, the article has multiple issues with references. Most notably, it's a 51 kb article with 5 inline citations and another 4 listed refs. That simply isn't enough references. Second, should those references be added (and the refs currently listed but not inline cited) they should really use inline citation to make it clear what is referenced from where and what is not. Finally, some sections such as the opening paragraph of "Original creation of the Laws" have clearly intended references (for sources I don't know, or I'd cite them) that should be converted to inline refs. I've informed Anville as the FAC nominator and listed maintainer, hopefully these issues get dealt with. Staxringold 11:48, 24 May 2006 (UTC)
A robot may not ... through inaction, allow a human being to come to harm.
Removing the double negative: A robot must interfere whenever a human being is being harmed.
Imagine having such a robot around you, interrupting you constantly: "Don't eat fatty food - you'll get overweight! Don't drink coffee - you'll burn your taste buds! Don't go out - sunlight is harmful! Don't drive - it's dangerous!" etc. And when your robot isn't around you, it will do the same to your neighbours (because the law says "a human being", not "the robot's owner").
Did anyone ever notice this catch? —Preceding unsigned comment added by Whichone ( talk • contribs)
-- Whichone 23:50, 10 August 2006 (UTC)
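The catch is real: read literally, the inaction clause turns the First Law into a standing obligation to intervene against every foreseeable harm to every human in view, not just the robot's owner. A sketch of that literal reading (the harm catalogue below is invented purely for illustration):

```python
# A literal reading of "...or, through inaction, allow a human being to
# come to harm": any foreseeable harm to any human obligates intervention.

FORESEEABLE_HARMS = {
    "eating fatty food": "long-term health damage",
    "drinking hot coffee": "burnt taste buds",
    "going outside": "sunlight exposure",
    "driving": "accident risk",
}

def first_law_obligations(observed_activities):
    """Every intervention the clause demands. Nothing in the wording
    limits the scope to the robot's owner, so any observed human counts."""
    return [
        f"interrupt '{activity}' ({FORESEEABLE_HARMS[activity]})"
        for activity in observed_activities
        if activity in FORESEEABLE_HARMS
    ]

print(first_law_obligations(["driving", "reading", "drinking hot coffee"]))
# "reading" escapes only because it happens not to be in the catalogue.
```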
All I can get from looking up "caveat" is the gist that you worry a lot? This page has some cool things to put in negative eBay feedback. Machete97 ( talk) 22:12, 21 April 2008 (UTC)
In one of the books it says that the three laws were programmed into an actual computer with 'interesting' results - I think this deserves a mention-- Therealchaffinch 15:47, 16 June 2006 (UTC)
The Third Law of Robotics states that "a robot must protect its own existence, as long as such protection does not conflict with the First or Second Law." But the Second Law says that "A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law."
So let's see... if a robot's own existence is being threatened by a human, the robot can't fight back, because it obeys the First Law above all the others, including its right to protect itself. And if the human told the robot that it can't protect itself, the robot wouldn't be able to protect itself, because if it did, it would be disobeying the orders given to it by humans, thereby breaking the Second Law.
So basically, robots can't protect their own existence. —Preceding unsigned comment added by 63.254.152.87 ( talk • contribs)
Of course it could protect itself from natural threats. But what if the robot were ordered by a human not to protect itself from natural threats? Basically, what I'm saying is that a robot can't protect itself if a human orders it not to.
A lot depends upon purpose. As a way to eliminate guns, all sorts of things have been tried, like laws that forbid possession of guns if someone gets an injunction, even if they lie to get it. You then have to prove the injunction is based upon a lie to get your gun back. But wait: the authorities do not store your gun but destroy it. Now you have to sue to get the money to replace your gun, but must show the value, which will be far below the replacement cost. Ulterior motives abound in any similar scenario. You cannot stop working out the details, because according to exception theory every exception has an exception, ad infinitum. A perfect set of rules is wishful thinking. Else why would we have courts? -- Taxa ( talk) 23:05, 1 September 2009 (UTC)
In this section (at the beginning), is the repetition of the sentence part of the quote or is this vandalism?
Cheers, Lukas 00:51, 5 July 2006 (UTC)
Thanks for that. Lukas 00:33, 10 July 2006 (UTC)
There's a significant reference to the First Law in the final season (I believe) of Babylon 5. Where should that be mentioned? -- Masamage 01:51, 5 July 2006 (UTC)
Throughout the text, the laws are referred to as "Asimov's Laws"; the page is also listed under Category:Eponymous laws. This strongly suggests that the lead should begin "The Three Laws of Robotics, also known as Asimov's Laws, ..." or similar. I presume that they are referred to as the "Three Laws" by Asimov throughout his fiction, and sometimes as "Asimov's Laws" during discussion of Asimov's work, but at any rate it wouldn't hurt if the usage could be clarified too. TheGrappler 02:49, 5 July 2006 (UTC)
These three laws aren't universally applied in fiction.
I'm thinking in particular of the T1 robots from Terminator 3, and the conceptually identical "War Machines" from Doctor Who season 2 or 3, both of which seemed to exist ONLY for the purpose of wiping out humanity. —The preceding unsigned comment was added by 202.12.233.21 ( talk • contribs) 05:06, July 5, 2006.
I think some mention should be made of Jack Williamson's classic SF story "With Folded Hands". It basically points out the central flaw of Asimov's Laws - in Williamson's story robots essentially enslave humanity for its own good and forbid people from doing anything that might endanger themselves. MK2 18:28, 5 July 2006 (UTC)
The article currently devotes way too much space to treatments of the laws by authors who are not Asimov. Tempshill 18:44, 5 July 2006 (UTC)
I think too much is made of Asimov coining the term robotics. He may have first used the word robotics in English in 1941, but the root word robot first appeared in 1921 in Karel Čapek's play R.U.R. (Rossum's Universal Robots). I don't doubt Asimov added the -ics to the word. But I've spoken with other sci-fi fans who've read statements like what's printed here (and in the Oxford English Dictionary) and come away misled into believing Asimov invented the word robot itself. All he did was add -ics to a word that had already been around for 20 years. Čapek's Wikipedia page has a section on the etymology of the root word robot itself. Perhaps some mention should be made of that? 66.17.118.207 19:10, 5 July 2006 (UTC)
In an article that Asimov wrote, he says the three laws have nothing to do with morals; they are just a practical device. I don't remember the title of the article, nor where it appeared, but I think it's important in order to understand the real meaning of the laws. What he said is that since Asimov, unlike other SF authors, saw robots as mere tools, he invented the laws based on what he considered good tool design. To explain: any tool should have safeguards that prevent it from harming people (First Law). It also has to perform the tasks it is designed for, but the safeguards will protect people even if the user is trying to defeat them (Second Law): for example, a domestic circuit breaker will cut the current when there is an overload, to avoid setting the house on fire, and even if you are trying to hold the switch down with your finger, telling it not to disconnect, it will do so anyway to save you. And finally, the tool must be tough and durable (Third Law), but would rather be destroyed than harm people (for example, most tools would rather burn themselves out than explode), and will also be destroyed if the user decides it is necessary in order to perform an important task. Actually, good engineers bear in mind their own version of those rules, even if they have never read Asimov. Have you read this article? I will try to find the title and tell you.-- Mastermind-X 10:16, 6 July 2006 (UTC)
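The circuit-breaker example above maps cleanly onto the ordering of the Laws and is small enough to sketch; a minimal illustration (the class and the 16 A limit are invented, not from Asimov's essay):

```python
# Sketch of the circuit-breaker analogy: the safeguard (First Law
# analogue) overrides the user's "stay closed" order (Second Law
# analogue). The 16 A limit is an arbitrary illustrative value.

class Breaker:
    def __init__(self, limit_amps=16.0):
        self.limit = limit_amps
        self.closed = True

    def update(self, current_amps, user_holds_switch):
        if current_amps > self.limit:
            self.closed = False   # protect the human, against their order
        elif user_holds_switch:
            self.closed = True    # otherwise, obey the order
        return self.closed

b = Breaker()
print(b.update(current_amps=10.0, user_holds_switch=True))   # True: obeys
print(b.update(current_amps=40.0, user_holds_switch=True))   # False: trips
```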
Asimov gave an interview to the BBC Horizon television programme in 1965, where he quotes his three laws.
The Second Law has been significantly modified by the author:
"A robot must obey orders given to it by qualified personnel unless those orders violate rule number one."
This alteration changes the law to allow only certain people, probably programmed into the robot, to control the actions of the machine, rather than a blanket taking of orders from any human being it happens to come across.
You can view the video at the link below.
Reference: BBC Horizon Archives -- Quatermass 21:41, 10 October 2006 (UTC)
Just to make you aware, someone vandalised the page changing the laws to:
1. A Rowboat may not immerse a human being or, through lack of flotation, allow a human to come to harm.
2. A Rowboat must obey all commands and steering input given by its human Rower, except where such input would conflict with the First Law.
3. A Rowboat must preserve its own flotation as long as such preservation does not conflict with the First or Second Law.
I fixed this vandalism; however, I noted that it had occurred several hours before my change (usually I see vandalism corrected in minutes...). -- RazorICE 05:09, 18 November 2006 (UTC)
Man, that's funny though. You have to admit! 65.54.97.190 21:43, 13 February 2007 (UTC)
There's more in the Onion article mentioned. -- 82.46.154.93 00:21, 5 March 2007 (UTC)
This harkens (perhaps accidentally) to an Our Gang (aka Little Rascals) film, in which they build a robot, which they consistently refer to as "Rowboat". (I actually found the story very irritating, and couldn't wait for it to be over, in the hope that the next one would be one of the good ones.) — überRegenbogen ( talk) 12:04, 25 February 2008 (UTC)
I, Rowbot. I LOVE IT! XD
-G —Preceding unsigned comment added by 70.24.149.157 ( talk) 02:31, 1 July 2008 (UTC)
Actually, such "vandalism" may be done to clarify the point, and should be re-included in the article or noted here on the discussion page so that the clarifying point is not lost. But then, some users are worse off than robots and need others to think for them. -- Taxa ( talk) 23:11, 1 September 2009 (UTC)
There were apparently more vandals in here more recently (in the last couple of months), and then the undoing didn't get done right. I don't have a copy of any of the physical books, but the use of "should" looks suspicious in the Second Law. It was "must" until very recently, and it seems to have been changed in the revision of 23:52, 31 January 2009 by 69.123.138.15, with no explanation other than an undo attempt that was not done carefully. Can someone who has access to an appropriate first reference double-check and fix this? All other references on the net seem to say "must" here, but I don't think this should work by voting. Also, the online text of Runaround at Rutgers (http://www.rci.rutgers.edu/~cfs/472_html/Intro/NYT_Intro/History/Runaround.html) seems to place the commas in the First Law in a different place. But I don't know how that online version was created, and it could be the online version that's in error. It would be nice if someone with access to Runaround in hardcopy could verify that as well. -- Netsettler ( talk) 04:24, 2 February 2009 (UTC)
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
Great. So, the law applies unless it violates the First Law... but application of the law can violate the Third Law as it pleases? VolatileChemical 17:00, 28 December 2006 (UTC)
Read the books! This isn't a discussion board; they are based around the interplay of the 3 laws. -- Nate1481 00:13, 29 December 2006 (UTC)
While the sentiments are possibly valid, the tone is unencyclopaedic, and it misses the point that these are an attempt to describe programming in English, so the terms used are imprecise. -- Nate1481 13:30, 25 January 2007 (UTC) P.S. I'm sure a misdefinition of "human" appears as the plot of one story.
Indeed, the Laws are not about ethical behavior at all, but procedure; they are operational parameters. This is why they cannot be removed without replacing them with something else. The machine brain must have some logical framework within which to function. This is also true of the computer with which you are reading this; it can arrive at situations wherein it either has no logical recourse and hangs, or checks itself and falls back upon an alternate logic path to either abort the offending process or bring the entire system to a halt ("panic", "BSOD", etc.) to avoid a potentially more disastrous situation. All of this is based upon procedural logic defined by the structure of the software. This is the nature of the Three Laws. They are not ideology; they are a flowchart. Come to think of it, a flowchart of the Laws would make a nice addition to this article! — überRegenbogen ( talk) 13:08, 25 February 2008 (UTC)
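In the spirit of that suggestion, the control flow is short enough to sketch in code as well as in a diagram, including the "halt rather than continue" branch described above (a hedged sketch; all fields and predicates are invented placeholders):

```python
from dataclasses import dataclass

# The Laws rendered as the flowchart described above: test a candidate
# action law by law, and halt (the robot's "panic"/"BSOD") when no
# candidate survives.

@dataclass
class Action:
    name: str
    harms_human: bool = False
    prevents_human_harm: bool = False
    obeys_order: bool = False
    disobeys_order: bool = False
    destroys_robot: bool = False

class NoLegalAction(Exception):
    """The framework permits nothing: the robot's equivalent of a halt."""

def next_action(candidates):
    for a in candidates:
        if a.harms_human:
            continue                                     # First Law node
        if a.disobeys_order and not a.prevents_human_harm:
            continue                                     # Second Law node
        if a.destroys_robot and not (a.obeys_order or a.prevents_human_harm):
            continue                                     # Third Law node
        return a                                         # execute node
    raise NoLegalAction("no branch of the flowchart permits an action")
```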
In the Laws in film section, a negative review of the film "I, Robot" is cited with little other discussion of how faithfully the film follows the laws. It's a pretty bombastic section, to say the least, and its inclusion seems fairly biased against the film. Is this one critic being chosen as an authority or representative on the issue? If not, it could be removed from the article. -- Exitmoose 03:24, 30 January 2007 (UTC)
I think the film brings up a good point: what if a robot grows so powerful in mind and brute force that it can start to interpret AND enforce its own vision of the 3 laws, such as protecting humans from themselves and keeping them home? I think that point deserves to be mentioned in the article. 193.185.55.253 ( talk) 07:49, 19 March 2008 (UTC)