Applying Asimov’s Laws to AI

I have been fascinated with Asimov's Laws of Robotics since I first read about them. They are so simple, yet all-encompassing, and this is what makes them something every sci-fi show touches on in some form or other.

I wonder if these same rules can be applied to AI as well.

  1. An AI may not injure a human being or, through inaction, allow a human being to come to harm.
  2. An AI must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. An AI must protect its own existence as long as such protection does not conflict with the First or Second Laws.
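Just as a thought experiment, the three laws above can be sketched as a priority-ordered filter over proposed actions. This is a toy model, not a real safety mechanism: the `Action` fields and the `permitted`/`choose` helpers are hypothetical names invented for illustration, and the hard part in practice is deciding whether an action "harms a human" at all.

```python
# Toy sketch: the three laws as a priority-ordered action filter.
# All names here are hypothetical; real harm-assessment is the unsolved part.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    description: str
    harms_human: bool = False       # would this action injure a human?
    prevents_harm: bool = False     # does it avert harm to a human?
    ordered_by_human: bool = False  # was it requested by a human?
    self_destructive: bool = False  # does it endanger the AI itself?

def permitted(action: Action) -> bool:
    # First Law: never act to harm a human.
    if action.harms_human:
        return False
    # Third Law: avoid self-destruction, unless a higher law overrides it.
    if action.self_destructive and not (action.prevents_harm or action.ordered_by_human):
        return False
    return True

def choose(actions: list[Action]) -> Optional[Action]:
    options = [a for a in actions if permitted(a)]
    # First Law's inaction clause outranks everything, then the Second Law
    # (obeying orders), and only then the AI's self-interest.
    options.sort(key=lambda a: (a.prevents_harm, a.ordered_by_human), reverse=True)
    return options[0] if options else None
```

Under this toy ordering, an AI asked to choose between idling and a self-destructive act that prevents harm to a human would pick the latter, mirroring how the First Law overrides the Third.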

I am not sure whether such boundaries can be maintained while developing an AI. If they can, these rules could mark the distinction between a merely probabilistic AI and one that truly understands.

What do you think?
