veen  ·  1587 days ago  ·  post: Threat from Artificial Intelligence not just Hollywood fantasy

My interest in Bostrom's book has decreased greatly in the last two weeks - in part because of this thread, in part because I read more - because I am less and less convinced that it describes an actual problem.

Referring to Yudkowsky was one part of his argument. He also outlined a more practical step-by-step plan, the gist of which was that the AI would convince someone to build something like a nanobot, which it could then use to build a Von Neumann probe. So the chain would be 'AI becomes super smart', 'AI decides it needs more resources / physical influence', 'AI develops new tech to build deployable bots', 'sociopathic AI convinces someone - through blackmail, Craigslist or whatever - to build it for it', '(nano)bots can then take over the handiwork', 'T2000'.

As you mentioned, the key issue here is control. Giving it control would be dumb, so its best chance is to take control. Ignoring the "it'll be so smart uguys" scenario, the only way for the AI to take control is to make a sudden intelligence jump that surprises its creators (one large enough to overcome safety measures) AND to realize quickly that a plan like the above is necessary to achieve its goal. In combination, very unlikely.

I stopped reading the book when I realized that the argument he was building was 'we need to focus on controlling the AI, or it might produce unforeseen consequences', a.k.a. common sense for everyone but the most optimistic technocrats.

kleinbl00  ·  1587 days ago

Wow - the science paranoia trifecta of malevolent AI, Von Neumann Machines and Grey Goo! Someone's been reading Stephen Hawking...

The only way for an AI to take control is to either (A) magically infect everyone it talks to into an instantaneous AI death cult or (B) instantaneously automate a world so far stubbornly resistant to automation. I think Yudkowsky thinks (A) is a real possibility, which is one reason I lost interest in Yudkowsky... and I think everyone else assumes (B) is one switch-flip away. What really bugs me is that these conversations never even get to the thought-provoking stuff:

- So thanks to the Singularity, we have a hyperintelligent AI. Why do we have only one?

- Why do we assume that the minute we have hyperintelligent AI, all agents of it will work together towards a common goal? Has anyone ever observed two overly-smart people for more than ten minutes?

- Why is it assumed that humans, distrustful machine-hating beings that we are, won't start the clock on another hyperintelligent AI specifically seeded to protect humanity from hyperintelligent AIs?