My interest in Bostrom's book has decreased greatly in the last two weeks - partly because of this thread, partly because I read further - because I am less and less convinced that it describes an actual problem.
Referring to Yudkowsky was one part of his argument. He also outlined a more practical step-by-step plan, the gist of which was that the AI would convince someone to build something like a nanobot for it, which it could then use to build a Von Neumann probe. So the chain would be: 'AI becomes super smart', 'AI decides it needs more resources / physical influence', 'AI develops new tech for deployable bots', 'sociopathic AI convinces someone - through blackmail, Craigslist or whatever - to build it for it', '(nano)bots then take over the handiwork', 'T2000'.
As you mentioned, the key issue here is control. Giving it control would be dumb, so its best chance is to take control. Ignoring the "it'll be so smart, you guys" scenario, the only way for the AI to take control is a sudden intelligence jump that surprises its creators (large enough to overcome safety measures) AND a fast realization that a plan like the above is necessary to achieve its goal. In combination, very unlikely.
I stopped reading the book when I realized that the argument he was building towards was 'we need to focus on controlling the AI, or it might produce unforeseen consequences' - a.k.a. common sense for everyone but the most optimistic technocrats.