I have been fascinated with Asimov’s Laws of Robotics since I first read about them. They are so simple, yet all-encompassing, which is why nearly every sci-fi show touches on them in some form or another.
I wonder whether these same rules can be applied to AI as well:
- An AI may not injure a human being or, through inaction, allow a human being to come to harm.
- An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
- An AI must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I am not sure whether such boundaries can be enforced while developing an AI. If they can, these rules could mark the distinction between a merely probabilistic AI and an AI that truly understands.