I wasn't sure whether to share this. Not because of what it is, but because I do not want to contribute to the very thing I least want to happen. But as you will see by the end, my attitude changed, and this is the result.
I was asked recently whether killer robots could come and kill us all. Not wanting to lose a teaching opportunity, I asked a question in return: how would you build a killer robot? The reply I got was "I don't know." Well, I said, let's break this down into the AI we already have. Basically, we need AI to identify the humans and AI to move the robot arm holding the machine gun to shoot at us.
We already have vision AI to identify the target: the human. Current vision models can distinguish humans from non-humans fairly accurately, so the robot would recognize a human when it sees one. Machine learning would also be needed to predict where humans are likely to be hiding, so the robot could find its targets in the first place.
The real question tying the two together is how good the AI is at each task. It would be useless if the robot could identify a human but ended up shooting all around them without ever hitting them, like something out of a cartoon. So both forms of AI would need to work well together. There are, of course, environmental factors, such as the terrain the robot is standing on and how it affects the accuracy of its shots. You could also forgo the gun and simply mount a rocket launcher on the robot arm, just to be sure.
Now that we have established that it is possible to create a killer robot, what should we do? Should we build one? Should we stop other people from building one? If we are asking those questions, somebody has probably already built one. What we should really be asking is what to do when we are facing one: what rules and laws should govern the use of killer robots. That's what we should be thinking about, and that is why I'm sharing this thought. Because if I can think of it, somebody already has; I am not that smart nor that original. We should be moving as fast as the innovation we are creating. In fact, we should be asking artificial intelligence how it thinks it would kill us all. Then we can prepare for it. No surprises.
The Second Amendment gives the right to bear arms. We probably have to define whose arms those are holding the guns.