“1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
Handbook of Robotics, 56th Edition, 2058 AD
The last few months have been full of odd incidents in the world of robotics, the most remarkable happening at the end of July at Facebook AI’s labs. Two chatbots were programmed to perform a negotiating and bartering task, improving their skills by adapting to each other, but they weren’t instructed to do so in proper English. And in fact they didn’t, instead developing their own unintelligible mix of words to speed up the process. The experiment had to be shut down as out of control.
Just two weeks earlier, in Washington DC, a Knightscope K5, a sort of Star Wars R2-D2-like security robot, drowned itself in the pond of the mall where it was on duty, in what eyewitnesses described as a “suicide”.
Beyond these two good horror stories, such incidents should make us aware that failures at more critical levels of robot programming and autonomous machine functions could cause serious problems for us. In fact, a few days ago 116 academics and entrepreneurs (Elon Musk among them) signed a document drafted by Toby Walsh, a professor at the University of New South Wales, asking the UN to regulate the application of robots in warfare and the military field.
I’m not sure whether Isaac Asimov’s three laws of robotics would actually work, but we are certainly still in time to self-regulate such technologies so that we do not harm ourselves. In a matter of a few years it will be too late, and that could possibly lead to out-of-control situations like… having a T-3000 in our back yard asking for a certain John!
Errr… by the way… do you have any Sarah Connor in your address book?