Please excuse me if I'm not too concerned that the machines that exist today don't have ethics: robots that can't function outside their intended environment, systems that don't really learn outside of "evolutionary" algorithms that throw away massive amounts of information and are suited only to the task they're trained for, and so on.
I think the main worry comes from the idea that, as time goes on, we develop technology that basically allows this sprawling mass of interconnected devices to "wake up". Being intelligent beyond anything a human could manage, I'd imagine it would quickly learn how to commandeer technology were it given access to a network. But this is science fiction at this point. There are far too many variables to form a real idea of what a hyperintelligent AI would be capable of. The best we can do is accept that, at some point, there has to be a computer that is, for all intents and purposes, a superior intelligence to humans, right?
Well, complex brains didn't come about just because there was enough nerve tissue in the skull; they were the result of evolutionary pressure over millions of years. Compare that to the internet, which faces no real pressure to become self-aware or self-preserving and is, essentially, just a blob of nerve tissue. I think it's currently more comparable to a slime mold: a self-organizing structure with no real intelligence.