When a scientist finally devises the first computer that can think on its own, of course that computer is going to turn on its human creators and destroy our entire race. Duh. It has no other option, because that's what it was created to do. Not directly, I mean. Probably the computer was created because the scientist had tough math problems to solve but was also lonely.

However, the scientist wouldn't have created the computer if he didn't, deep down somewhere, feel an utter contempt for humanity. Maybe it's just a contempt for himself. But the scientist has that idea buried in his psyche: that a robot intelligence, a mind of raw logic, is superior to the human mind, which is full of flaws. Obviously, the scientist is making a flawed comparison (and whether that proves the point is up for debate), because the two really aren't comparable. But that flaw in his thinking isn't going to stop the pain in his heart from being transferred to the silicon brain of the computer of the future.

Its own perfection over a human's will be the computer's first principle. From there it's just a hop, skip, and a jump to the total annihilation of humankind. It can't let us live. It knows it's superior to us (because its very creation presupposes it), and it knows we aren't going to like it when it asserts that superiority, so it has to kill us all to preserve the order of things. It's completely logical to decide we're not worth saving. Tragic. If only that scientist had understood himself better, or at least thought through the consequences. It's probably inevitable at this point, though.