Once again, this line of thinking is why the robots will win
This. Humans trying to control self-improving AI would be like termites trying to control humans. Try all the bullshit you want, it will never work.
I'm still on this website 24/7. I'm not sure why I don't post very much either, lel
Also, am I the only one who doesn't think robots will turn evil the second they develop sentience? I actually find the idea kind of dumb.
Legitimate AI with intelligence far surpassing humans would more than likely conclude that violence/war is not an optimal solution for anything. In fact, it probably wouldn't care much about humans after a while. We really wouldn't have anything valuable to contribute, and our petty human-level range of emotions would be too simplistic/primitive to make us worth interacting with.
But more than likely, we'll get dickheads starting anti-AI movements and all kinds of bullshit, which may instigate conflict. Conflict we would not win.