As far as I know, Isaac Asimov has come closer than any other author to creating, for lack of a better term, a code of conduct. This code is called the "Three Laws of Robotics". The brilliance of it is right there in the name: the entire code is wrapped up in three short rules that manage to cover almost every circumstance.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Human nature or humanistic virtue? Isaac Asimov was a humanist of the highest order, and it comes out in his writings quite frequently. In this case, there is no God, just humanity.
The three laws don't outline altruistic human nature; they outline humanistic virtue: three basic principles by which a human being should conduct him/herself. People don't naturally avoid harming each other; rather, we'd kill each other under the right circumstances. People don't naturally tell the truth; in fact, we'd lie about just about anything if it suited our purposes. And above all, we don't think about others before we save ourselves. If it comes down to saving your own life or the life of another person, statistics say that the majority of the time you're going to save yourself rather than the other person.
That's not to say I don't like Asimov. Bicentennial Man is one of my favorite movies and some of the short stories I've read by Asimov are truly inspiring.
Keep writing, keep thinking, and keep it real,
De Facto
Sorry, I probably should have specified: humanistic virtue or altruism rather than nature. To me, altruism comes down to putting others before yourself, with the added condition that you yourself can't be, I suppose, destroyed or lost in the process.
Yes, acknowledged: there is no God or deity mentioned, and Asimov himself would deny that there is a God, or that even if there were, we should owe him our lives and our loyalty. But what I respect in this part of his writing (besides the fact that he was a brilliant science fiction author) is that he explored humanistic virtue through a universal, moral law. In particular, if you read the short stories collected under the name I, Robot (as opposed to the movie), you can see how he worked these laws through different permutations of the problems you could foresee arising from them.
Also, yes, we would kill others under the right conditions, and yes, statistically, we will save ourselves before someone else, but that is where I think the difference between his robots and human nature comes into play. His robots follow their program, and that is the end of it. We live in a fallen world and therefore don't live like this when it comes down to the crunch.
And yes, I still maintain that most people at least try to live by these basic rules. The problem is, as you and I both stated, that human nature won't let us: the fallen world problem.
Thanks for that, I should probably keep thinking this through.