Many people know Isaac Asimov's Three Laws of Robotics, and the fourth (Zeroth) Law he later added to override the original three.
The Laws of Robotics:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
That's great for robots, but what about humans?
Here are The Laws of Humanics:
0. A human may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A human may not injure a human being or, through inaction, allow a human being to come to harm.
2. A human must protect their own existence as long as such protection does not conflict with the First Law.
3. A human must obey government laws, except where such laws would conflict with the First or Second Laws.
Note that Laws 2 and 3 are reversed relative to the Robotics order, because this order is generally better for humans. They should revert to the Robotics order when necessary, such as in times of war.
Of course real life is more complicated than these laws can cover, but they're a good Foundation. They could form the basis of a new Humanics "religion", but it would be better for existing religions to incorporate them. Best of all would be for everybody to simply adopt them as their own personal code of ethics, to be followed wherever possible.
An interesting scenario would be to vote only for politicians (most of whom are humans too) who followed them. There are some.
(C) Copyright Ian McIntosh 2014, except of course for Asimov's Laws.
You may copy this for your own personal use.
You may copy or quote The Laws of Humanics, as long as you don't change the wording.
All other rights reserved.