20081126

The Robots of Asimov

I plan to make a career for myself as a scientific researcher (I have already embarked on that endeavor), and the domain I have chosen is Artificial Intelligence. Many people have had interesting things to say to me about this choice; the disconcerting fact is that most of what they have said is negative. They did not insult me or do anything of that sort; rather, they expressed their fears (some of them quite justifiable) about empowering a machine with intelligence, citing movies like “Terminator”, “The Matrix” and “I, Robot” as examples of what could go wrong if we did. I myself can think of an example that is both better and easier to imagine than the ones they put forth.

The story of HAL, the computer from the Stanley Kubrick movie “2001: A Space Odyssey”, comes to mind: a computer in charge of a spaceship kills the crew because it calculates that they would be a hindrance to its mission. This is easier for me to imagine than armies of humanoid machines, because I do not see us being able to replicate the human body with mechanical devices in the near future.

When faced with these fears and the (silent) accusation that I might be working to bring about the “end of man” or the “extinction of humans”, I turn to the works of Isaac Asimov for solace. In his books, he formulated the Three Laws of Robotics to prevent confrontation between humans and robots. The three (slightly altered) laws are:

1. No robot shall ever harm a human being;

2. No robot shall, through action or inaction, allow a human being to come to harm, except where doing so conflicts with rule #1;

3. A robot shall protect itself from damage, except where doing so conflicts with rules #1 and #2.
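
The precedence structure of these laws, with each rule yielding to the ones above it, can be made concrete in code. What follows is a minimal Python sketch of the laws as an ordered veto check; the Action flags (harms_human and so on) are hypothetical labels invented purely for illustration, since no real system supplies a clean, machine-readable notion of “harm”.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_human: bool = False             # would directly harm a human (rule 1)
        leaves_human_in_danger: bool = False  # inaction letting a human come to harm (rule 2)
        protects_human: bool = False          # prevents harm to a human (rule 2)
        endangers_self: bool = False          # would damage the robot (rule 3)

    def permitted(action: Action) -> bool:
        """Check the laws in strict priority order."""
        # Rule 1: absolute veto; nothing overrides it.
        if action.harms_human:
            return False
        # Rule 2: forbidden to leave a human in harm's way
        # (the conflict with rule 1 was already handled above).
        if action.leaves_human_in_danger:
            return False
        # Rule 3: self-preservation applies only when rules 1 and 2 are
        # silent, so self-endangerment is still permitted if it protects a human.
        if action.endangers_self and not action.protects_human:
            return False
        return True

    rescue = Action("pull a human from a fire", endangers_self=True, protects_human=True)
    idle = Action("stand by while a human is in danger", leaves_human_in_danger=True)
    print(permitted(rescue))  # True: rule 2 outranks rule 3
    print(permitted(idle))    # False: inaction also violates rule 2

The real difficulty, of course, hides inside those boolean flags: deciding whether an action “harms” a human is the hard part, and the sketch simply assumes that decision has already been made. This is precisely the ambiguity I return to below.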

These three laws effectively prevent a robot from “hurting” its human “owners”. Asimov then explored their consequences in two other books. One was “Bicentennial Man”, in which he explores the possibility that an intelligent robot would acquire a level of sentience that allows it to experiment with the ideas of “freedom”, “family” and “society”. This is depicted at many points in the novel: Andrew Martin, the intelligent and sentient robot, tries to buy his freedom from his master, and in another scene he embarks on a mission to seek out others of his kind.

The other book by Asimov, “I, Robot”, explores the darker aspect of his Three Laws of Robotics. In the film adaptation, a supercomputer that controls all the intelligent robots uses them to stage a coup and “take over the world”. When confronted by the principal protagonist, the computer, “VIKI”, explains that “she” decided to stage the coup because that is where a series of logical deductions starting from the three laws ended. I do not remember her exact deductions, but I do remember the inference I drew from that movie: her deductions were possible because of the ambiguity of the word “harm” in rule #1. It seems that in his first rule, Asimov took “harm” to mean only physical harm, not emotional, psychological, mental and (maybe) spiritual harm. If all of these were taken into consideration, I believe the situation that arose in the story would not arise in real life.

Let us say that these three laws are enforced and hard-coded into the brain of each and every robot ever created. The truly pessimistic will still argue with “what if”, or stay just plain adamant that the events that unfolded in “Terminator” or “The Matrix” will also unfold in real life. To them I have the following to say: the hand that started that war would be human. Western philosophy is a philosophy of control, and according to it, what cannot be controlled should be feared; thanks to the proliferation of Western media in the form of movies, songs and the like, this ideology is spreading all over the world.

The act of empowering a machine (a robot or some other device) with intelligence is the result of an endeavor to take some responsibilities off the shoulders of humans. In other words, the creation of artificial intelligence is an act (either explicit or implicit, depending on your perspective) of relinquishing control.

Furthermore, the act of creating an AI is the result of an endeavor to create an entity equal in intelligence to us. And once we create it as our equal, expecting it to acknowledge us as its superiors would be immoral to the point of being hypocritical. The poet Khalil Gibran once said:

“Your children are not your children. They are the sons and daughters of life’s longing for itself. They come through you but not from you…”

Though he said this about the way parents should treat their children, it can be extended to AI if we humans are prepared to treat AI as the child of the human intellect. AI will be created whether we like it or not. We can either accept it and live with it (and I can assure you that life will be enriched beyond anyone’s wildest dreams), or we can reject it and live a sub-standard life. There is a good chance that AI will not drive us to extinction; no truly intelligent being would cause the extinction of a species. But if we choose rejection, our lives will be filled with so much regret, hatred, animosity and depression that they will simply not be worth living.

We humans definitely need a change in ideology. The concept of war is obsolete; weapons should cease being an instrument of war and start being an instrument of peace. The sooner we realize that, the better it will be for our whole race. AI is just the next step in our intellectual evolution; preventing its invention for fear of its consequences would be akin to advocating that a child commit suicide for fear of puberty and the complications it would introduce into life.

The ramifications of AI depend heavily, if not solely, on the way we humans perceive it and the extent to which we are prepared to accept it.