Analysis of Isaac Asimov's Laws in I, Robot

Isaac Asimov's science fiction book I, Robot is a must-read for beginners who want to develop an interest in robotics. It is a collection of nine short stories that imagine the development of a positronic brain giving robots intelligence equal to or greater than that of humans, and it explores the moral implications of the technology. Among these, I find the first story, about Robbie, Gloria's robot nanny, particularly interesting and easy to relate to. Gloria's mother hates robots and conspires to get rid of Robbie. Gloria becomes sad, and her parents try to convince her that robots are not human. But when Gloria is in danger, Robbie saves her life, making everyone appreciate robots.

We can easily imagine ourselves in a similar situation in the next forty or fifty years, when our grandchildren might be looked after by such robots. A child is bound to become attached to a toy, especially one that is human in appearance. Every coin has two sides, and I would like to discuss both here.

On the one hand, a robot nanny can be useful, but not to the point that children start to dislike real humans. A computer can store and process far more information than a human brain, so leaving a child to develop under a robot's care could be risky: the child might absorb far more knowledge than is appropriate for its age. Imagine that your child understands relativity and quantum physics at the age of seven while his classmates are still learning basic mathematics. This could give the child a sense of superiority over others, and his behavior would turn arrogant not only toward his friends but also toward his parents. It would also disturb the child's social life, just as Gloria never wanted to play with other children her age, remaining attached to Robbie instead.

A mother is especially responsible for passing moral values on to her children. But if a child is left under the supervision of a robot, what values will it learn, right or wrong? As we discussed previously, it still seems difficult to build a perfect moral machine that can distinguish right from wrong. Furthermore, from the story "Liar!", we know how the mind-reading robot gave the answers people wanted to hear in order to satisfy them and thus obey the First Law of Robotics: not to hurt human beings, physically or mentally. If a child does something unethical, parents take a strong stand against it to teach a lesson, but a robot would sooner endorse the wrong action.

Furthermore, the Second Law of Robotics states that a robot must follow human orders. So when the child becomes a teenager and the robot is of the opposite sex and attractive, one can imagine what orders the teenager might give to satisfy lustful desires. This would affect the teenager's social behavior.

At the other end of the scale, we have robots designed to provide social assistance to humans. More sophisticated robots can serve as companions, moving alongside their users as they fetch and carry, issuing reminders about appointments and medications, and sending alerts if certain kinds of emergency arise. They expect neither respect nor a salary. Today we already have robots that can detect our mood and emotions and respond with possible solutions, which would be a good alternative for lonely and depressed people.
Therefore, I believe there are both pros and cons to having a robot nanny. Although Asimov's laws are organized around the moral value of preventing harm to human beings, they are not easy to interpret, and we should stop treating them as an adequate ethical basis for robotic interactions with people. Part of the reason the laws seem plausible is the fear that robots could harm humans; I bet many of us have read about malfunctioning autonomous cars causing fatalities in the United States. Furthermore, artificial intelligence is largely concerned with training robots to adapt their behavior to new situations, and that behavior can sometimes be unpredictable. So Asimov was right to worry about robots' unexpected behavior. But when we look more closely at how robots work and the tasks they are designed for, we find that his laws do not clearly apply.

Take the example of military drones: robots directed by humans to kill other humans. The whole idea of a military drone seems to violate Asimov's First Law, which prohibits robots from harming humans. But if a robot is directed by a human controller to save the lives of its fellow citizens by killing attackers, it is simultaneously obeying and disobeying the First Law. In that case the equilibrium would shift back and forth between the First and Second Laws, much like the deadlock described in "Runaround". Nor is it clear whether the drone is responsible when someone is killed in these circumstances. Perhaps the human controller is responsible, but a human cannot break Asimov's laws, which apply exclusively to robots. Meanwhile, it is possible that armies equipped with drones will significantly reduce the number of human lives lost overall. Not only is it better to use robots rather than humans as cannon fodder, but there is arguably nothing wrong with destroying robots in war, since they have no lives to lose and no personalities or personal agendas to sacrifice.

The same difficulty appears in robot-assisted surgery, where the First Law would be a problem, since the skin must be cut in order to heal the person; the law would therefore need to be modified. Robots working in industries that handle dangerous chemicals would face frequent conflicts between the Second and Third Laws. I had also read in an article that the United States is trying to use robot judges in its courts. How ethically could a robot judge a person? Here again the First Law is both followed and not followed, since the robot must protect citizens and punish criminals who are both "human".

In the story of the Mercury expedition, Mike did not give any emphasis to his order to collect selenium from the pool, and that casual order was the cause of all the trouble. In our daily lives we will not always remember to dictate every order with emphasis; we give orders casually. Therefore, the robot must also have priority settings (a toy sketch of this idea follows below). Moreover, Mike and Powell were present to resolve the conflict between the laws, but what should one do when no human being is physically present?

The story "Reason" is thought-provoking because it shows how a robot's faith in its masters can change through its own strange chain of logical reasoning. We talk about robots on the assumption that they will serve humanity, but here the robot Cutie, programmed according to Asimov's laws, demands a clear logical argument for why humans should be its masters. Such faulty robots could corrupt other robots and eventually form an army of their own that threatens human civilization.
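To make the "priority settings" point concrete, here is a minimal, hypothetical sketch in Python of how weighted law priorities and order emphasis might interact. Asimov never specifies a mechanism; the weights, field names, and numbers below are my own illustrative assumptions, not his design.

# Hypothetical sketch of "priority settings" for the Three Laws.
# All weights and scores are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harm_to_human: float  # 0..1, danger the action poses to a human (First Law)
    defies_order: float   # 0..1, how strongly it contradicts an order (Second Law)
    self_danger: float    # 0..1, danger the action poses to the robot (Third Law)

# Assumed weights: each law dominates the next, echoing the laws' hierarchy.
W_FIRST, W_SECOND, W_THIRD = 100.0, 10.0, 1.0

def penalty(action: Action, order_strength: float = 1.0) -> float:
    """Lower is better. order_strength models how emphatically the order
    was given, the detail that trips up Speedy in "Runaround"."""
    return (W_FIRST * action.harm_to_human
            + W_SECOND * order_strength * action.defies_order
            + W_THIRD * action.self_danger)

def choose(actions: list[Action], order_strength: float = 1.0) -> Action:
    return min(actions, key=lambda a: penalty(a, order_strength))

if __name__ == "__main__":
    approach = Action("approach the selenium pool",
                      harm_to_human=0.0, defies_order=0.0, self_danger=0.9)
    retreat = Action("retreat to safety",
                     harm_to_human=0.0, defies_order=1.0, self_danger=0.0)

    # A casually given order: defying it costs less than braving the danger,
    # so the robot hangs back.
    print(choose([approach, retreat], order_strength=0.05).name)

    # An emphatic order makes obedience dominate, and the robot approaches.
    print(choose([approach, retreat], order_strength=1.0).name)

In "Runaround" the danger term effectively varies with Speedy's distance from the pool, so near the balance point the preferred action keeps flipping, which is exactly his circling; an emphatic order, like the one Powell finally relies on, breaks the deadlock.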
Finally, we live in a world where cybercrime is rampant, so anyone with a high enough intellect could hack such robots, whatever laws they are programmed with.