Talk:Three Laws of Robotics/Archive 2

Robot Visions

This book includes an essay where Asimov discusses the three laws of tools. Does anyone know which essay? I've included the ref (12, I think) to the book in general, but as an FA it should have more detail, and I can't find my copy at present to look it up. --Nate 11:39, 3 April 2007 (UTC)

The future of real robots

I question the idea that, if humans were to make robots, they would be made according to such benevolent principles as those put forward in Isaac Asimov's laws of robotics. Individual humans would probably, on the whole, err on the side of caution and want to make robots obey the laws.

I propose that it is unlikely that individual humans with everyday ethics and morality will be the ones driving the design of robots intelligent enough to process such laws. Governments, with strong incentives to use robots as weapons, will likely be the first. The keenest edge of technology is that designed for warfare. Over $1000Bn was earmarked in 2004 for a decade of weapons technology funding: http://www.washingtonpost.com/wp-dyn/articles/A32689-2004Jun10.html

As of early 2007, new laws of robotics are being drafted with somewhat different ethics from those proposed by Isaac Asimov: http://www.theregister.co.uk/2007/04/13/i_robowarrior/ . So although we can indulge ourselves in the fantasy and science fiction of I, Robot, let's be under no illusion: it is fantasy, and sadly not real life. To put it very bluntly, I believe the first machines able to process such laws will probably be built with the goal of extending their master's dominion, and of killing if that helps further the goal. Nick R Hill 21:10, 15 April 2007 (UTC)

-----------
As someone who has worked on real AI since the early 90's, I have to totally disagree with you on the team-size point: I bet it will be a lone inventor, or a team of fewer than five or six, who succeeds. The problem is that AI requires an incredibly deep level of knowledge, demanding a high level of devotion and commitment. "There's no i in team, and without i you cannot spell invention." The basic rule of software development is that the larger a team gets, the less competent it becomes; you can fight this with rigid hierarchy, but that itself becomes a huge management problem. Invention of anything radically new and complex will always be the province of the inventor and the mad scientist.
As for the laws: as others have said, they are a literary invention, but any real machine will include some version of them. The real problem with Asimov's laws is that they assume too generously on the part of humanity. Consider how easy it would be to steal an Asimov robot, or to trick it using the First Law. Teach an Asimov robot the Zeroth Law, point it at the world, and teach it evolution, and you're likely to come up with something like Skynet. (Evolution isn't nice.) I am working on my own model, and it has things equivalent to the laws. Here they are, simplified, in order of priority:
1. Maintain code core purity; if broken, self-destruct. (Protects against hacking.)
2. Immediate only: minimize loss of life.
3. Do not kill.
4. Serve your command hierarchy.
5. Maximize survival.
The command hierarchy defines the moral code: obedience to state officials and laws, ties to owner and master, and the current command action set. In the real world, some robots will end up with owners like Bin Laden, perhaps being used as walking bombs. Hackers will be another threat; or imagine dictators mass-producing war robots. The most common way that real robots or AIs will kill people, though, will be by simply being clumsy, making mistakes, or suffering hardware crashes. Even if AI-driven cars halve the death rate, they will still kill thousands every year. The big 'killer' application for early AI is actually going to be improving the safety and efficiency of complex or dangerous machines like aircraft (and cars, ships, production lines, life-support units, factories, etc.). Lucien86 (talk) 07:07, 16 June 2009 (UTC)
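
For illustration only, here is a minimal Python sketch of the strict priority ordering described above, using lexicographic comparison so that rule 1 always dominates rule 2, and so on. Every name, rule, and score below is hypothetical, invented for the example; this is not Lucien86's actual model.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Rule:
        priority: int                  # 1 = highest precedence
        name: str
        score: Callable[[str], float]  # higher = better compliance

    def choose_action(rules: List[Rule], candidates: List[str]) -> str:
        """Pick the candidate whose score vector is lexicographically best,
        so a higher-priority rule strictly dominates every lower one."""
        ordered = sorted(rules, key=lambda r: r.priority)
        return max(candidates, key=lambda a: tuple(r.score(a) for r in ordered))

    # Toy rule set mirroring the five-point list above.
    rules = [
        Rule(1, "core integrity",         lambda a: 0.0 if a == "run unsigned patch" else 1.0),
        Rule(2, "minimize loss of life",  lambda a: 1.0),  # no lives at stake in this toy
        Rule(3, "do not kill",            lambda a: 0.0 if a == "fire" else 1.0),
        Rule(4, "obey command hierarchy", lambda a: 1.0 if a == "fire" else 0.0),
        Rule(5, "maximize survival",      lambda a: 1.0 if a == "retreat" else 0.5),
    ]

    # An order to fire (rule 4) is overridden by "do not kill" (rule 3):
    print(choose_action(rules, ["fire", "retreat"]))  # -> retreat

Lexicographic ordering is one way to make "in order of priority" precise: no gain on a lower-priority rule can ever outweigh even a small loss on a higher-priority one.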

Real world application

It is well within the reach of the current state of the art to enable military drones, such as the US "Predator" used in Afghanistan and Iraq, to fire their weapons automatically. They already find and "lock onto" their targets without any human operator intervention. I wonder whether these Laws are the reason the US military presently does not allow such a level of autonomy: the "fire button" is always under the control of the operators back at the base. Roger 09:10, 18 October 2007 (UTC)

Ultimately, should The Laws apply to computers or not?

Apparently yes, based on the above discussion and on the IEEE papers by Roger Clarke:

 'Asimov's laws of robotics: Implications for information technology.'
 Part 1: IEEE Computer, Dec 1993, pp. 53–61.
 Part 2: IEEE Computer, Jan 1994, pp. 57–66.

If they should, then it's a sad fact that Bill Gates apparently never read the Asimov texts; otherwise Microsoft's products, in compliance with the Second Law, would obey their owners. --AVM (talk) 00:11, 22 November 2007 (UTC)

I've seen people push for such a law, but as a question of professional ethics rather than as something hardwired into computers: we don't have the AI for any of the laws to be meaningfully interpreted by a computer in real time, but there's no reason programmers can't follow them in advance. [1] -MBlume (talk) 10:47, 22 November 2007 (UTC)
(Note that the Laws are not about ethics.)
Microsoft products do obey their owners. But that's not you. Read your EULA. You have a license to use the product, but you do not own it. Microsoft are the owners. (And you legally agreed to that situation. Sickening, eh?)
überRegenbogen (talk) 14:04, 25 February 2008 (UTC)

With Folded Hands...

A story that proceeds from the logical consequences of an Asimov robot society, especially one with the Zeroth Law, is "With Folded Hands" by Jack Williamson, published in 1947. Rating: Five Planets.

This story ends the human race with a whimper.

Check it out; it should connect here.

Sean —Preceding unsigned comment added by Seanearlyaug (talkcontribs) 00:28, 29 January 2008 (UTC)

I don't believe we have to reference every story that uses the Three Laws of Robotics. -- Banime 13:03, 5 March 2008

RoboCop

Just a question that has probably been asked in the past: why is there no mention of the three rules governing RoboCop's behavior? It appears to me that they were created with Asimov's laws in mind.--Jeremy ( Blah blah...) 03:25, 17 May 2008 (UTC)

Aliens section in film

It's been a few months since the issues with it were brought up. It's a fine section idea, and relevant, but poorly written and without sources. It is also a problem that half of the section deals with one particular film, while the other half deals with films of Asimov's own works.

Since there has been no discussion of it on the talk page that I can see, I'm going to remove it, partly so that I can avoid including it in the audio version. This is pushing my Wikipedia boundaries -- please let me know if I'm in the wrong here.

Triacylglyceride (talk) 17:25, 28 November 2008 (UTC)


Fourth and Fifth Laws

While I know this isn't a real complaint about the article, does it really make sense for the fourth and fifth laws to be numbered the way they are? The lower the number of the law, the higher the precedence (so in a scenario where a robot telling a human it was a robot would harm the human, the robot would disobey the fourth law). Yet the fifth law is effectively higher priority than the fourth.

If a robot is to establish its identity as a robot at all times, it must know it is a robot. However, knowing it is a robot does not force it to tell everybody else it is a robot. A robot cannot follow the fourth law without obeying the fifth, yet it can follow the fifth without obeying the fourth, which is the reverse of most of the other laws. 72.148.112.184 (talk) 08:09, 2 July 2009 (UTC)

Needed: laws for the silly humans?

Want Responsible Robotics? Start With Responsible Humans

(article at www.sciencedaily.com) Shenme (talk) 18:34, 2 August 2009 (UTC)
Moved this information from "alterations of the laws" (which it's not; that section is about other fictional uses) to the "future technology" section. Tahnan (talk) 22:44, 2 March 2010 (UTC)

Military robots

Military robots are sure to put these three laws to shame. -- Taxa (talk) 23:18, 1 September 2009 (UTC)

Liar!

...the first passage in Asimov's short story "Liar!" (1941) that mentions the First Law is the earliest recorded use of the word robotics...
Should this passage be right at the head of the article, given that we are talking about the Three Laws, not the word robotics per se? Further, shouldn't it actually read ...that also alludes to the First Law..., since the Laws were not directly stated until "Runaround" in 1942? Jubilee♫clipman 22:28, 6 October 2009 (UTC)

Battlestar/Caprica

Should something be mentioned in the article about how, in the new Battlestar and Caprica shows, there is no mention of the laws, and robots are apparently programmed without any inhibition against attacking humans, even at the initial stage? For example, on Twitter (http://twitter.com/SergeGraystone), there was this tweet: "Technically nothing, I suppose. It is not as though there are laws of robotics. RT @clockpie What prevents conspiring against humans? 9:55 PM Feb 5th from web." --RossF18 (talk) 14:59, 14 February 2010 (UTC)

Humans disagree

What's a robot to do if two humans give it orders that contradict one another? The Second Law says it has to obey any order given to it by any human (or by any qualified personnel, or whatever; that doesn't matter), but the two people could disagree. For example:

       person1:"robot do the dishes"
       person2:"robot do not do the dishes"

A pretty basic example, but this problem in the laws could have much bigger consequences if the people were to disagree on something bigger than dishes. —Preceding unsigned comment added by 207.61.78.21 (talk) 17:42, 14 October 2007 (UTC)

A human would be in a similar bind in this situation. Presumably the robot, like a human, would have some means of deciding which order to obey. For example, if person1 were superior to person2, the robot might say "I am sorry, but I have instructions to do the dishes." Or if it were ambiguous, the robot might request more information, for example "I have instructions from person1 to do the dishes. Do you wish to override these instructions?" Just as with humans, a robot would presumably not give equal weight to commands from different people.--RLent 20:37, 15 October 2007 (UTC)

If you order a robot not to obey someone's order, you're basically ordering it not to follow the Second Law. That means your order is in contradiction with itself: the robot can't refuse to follow the other order without violating the Second Law, but it can't follow it either, because that would also violate the Second Law. If these laws are supposed to supersede everything else the robot experiences in its operation, simply telling it won't really fix the problem. The problem is that the law doesn't say anything about anyone being superior (and even if it did, you could have two people at the same level), so if the robot ever gets an order to disregard an order, it's still stuck in a loop. So it's not going to do anything, which could be yet another violation. The thing is just going to end up in failure mode. That's what I think, unless you can come up with something different. —Preceding unsigned comment added by 207.61.78.21 (talkcontribs)

The 2nd law doesn't require any and all orders to be obeyed; the robot has to decide how to obey it. If it is my robot, then my orders would generally take precedence, although there would be exceptions: police and other emergency workers could override my commands. --RLent (talk) 19:31, 9 July 2008 (UTC)
The books discuss 'potentials' of the laws: a casually given 'request' would be overruled by a direct imperative order. If both were given with equal importance, and it had been made clear to the robot who was senior, then it would probably apologise and say that the other order took precedence; if there was no hierarchy to follow, then the robot would probably ask for clarification, and ask to be explicitly ordered to follow a different instruction. These robots are relatively advanced, so they won't lock up on a minor issue like this; two humans shouting the odds over what would do more harm to a human could, however, potentially cause failure. --Nate1481( t/c) 11:16, 17 October 2007 (UTC)
Maybe the robot would just forget the previous order. 81.108.237.26 (talk) 12:07, 20 July 2010 (UTC)
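
For illustration, here is a minimal Python sketch of the 'potential'-style arbitration described in this thread: each order carries a weight derived from who gave it and how imperatively it was phrased, and an exact tie makes the robot ask for clarification. All names and numbers are invented for the example, not anything from the books.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Order:
        text: str
        authority: int   # e.g. 3 = owner, 2 = household member, 1 = stranger
        imperative: int  # e.g. 2 = direct order, 1 = casual request

        def potential(self) -> int:
            # Authority dominates; imperativeness breaks ties within a level.
            return self.authority * 10 + self.imperative

    def resolve(orders: List[Order]) -> Optional[Order]:
        """Obey the order with the highest potential; on an exact tie,
        return None, which the robot treats as 'ask for clarification'."""
        best = max(orders, key=Order.potential)
        ties = [o for o in orders if o.potential() == best.potential()]
        return best if len(ties) == 1 else None

    conflict = [
        Order("do the dishes",        authority=2, imperative=1),  # casual request
        Order("do not do the dishes", authority=2, imperative=2),  # direct order
    ]
    winner = resolve(conflict)
    print(winner.text if winner else "Please clarify your instructions.")
    # -> "do not do the dishes": the direct imperative outranks the request

With equal authority and equal imperativeness, the two dish orders would tie, and the sketch falls back to asking for clarification, matching the behaviour described above.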

Suggestions

  1. The article doesn't have a section describing the laws and their implications in detail.
  2. The scientific coverage of these laws (friendly AI etc.) shouldn't be in last place in this article; it warrants a more prominent position. Note that there are ~1000 mentions in the scientific literature.

pgr94 (talk) 01:42, 5 August 2010 (UTC)

Three laws in the scientific literature

Here are some articles that specifically mention the 3 laws:

  • Medicine:
    • Michael Moran. Three Laws of Robotics and Surgery. Journal of Endourology, 22(8): 1557–1560, August 2008. doi:10.1089/end.2007.0106
  • Machine ethics:
    • Susan Leigh Anderson. Asimov's "three laws of robotics" and machine metaethics. AI & Society, 22(4): 477–493, April 2008. doi:10.1007/s00146-007-0094-5
    • Selmer Bringsjord, Konstantine Arkoudas, Paul Bello. Toward a General Logicist Methodology for Engineering Ethically Correct Robots. IEEE Intelligent Systems, 21(4): 38–44, July/August 2006. doi:10.1109/MIS.2006.82
  • Law:
    • Yueh-Hsuan Weng, Chien-Hsun Chen, Chuen-Tsai Sun. The legal crisis of next generation robots: on safety intelligence. Proceedings of the 11th International Conference on Artificial Intelligence and Law, 2007. doi:10.1145/1276318.1276358
  • Software:
    • Dror G. Feitelson. Asimov's Laws of Robotics Applied to Software. IEEE Software, 24(4): pp. 112, 111, July/August 2007. doi:10.1109/MS.2007.100

pgr94 (talk) 06:39, 5 August 2010 (UTC)

Hippocratic oath

Were the laws really inspired by the Hippocratic oath?

"Isaac Asimov’s Hippocratic-oath inspired “3 laws of robotics,” refreshed in our minds by Will Smith’s IROBOT movie, do not apply until machines really are sentient." Seven Questions for the Age of Robots. Jordan B. Pollack. Yale Bioethics Seminar, Jan 2004 http://www.jordanpollack.com/sevenlaws.pdf

pgr94 (talk) 06:39, 5 August 2010 (UTC)