ecib · 3702 days ago · on: I'm an artificial intelligence researcher and I'm not afraid. Here's why.

There has been a lot of "advanced AI is a threat to mankind in a singularity sort of way" talk floating around futurology and VC/tech circles recently (I think because Elon Musk made a remark to that effect).

What I don't understand is why it follows that this would be the case.

I think it's a leap to suggest that very smart AI = self-awareness, and another that it = a self-preservation instinct (which assumes a need to replicate/dominate human systems). I don't think they are huge leaps necessarily, but I think they are assumptions nonetheless.

At any rate, the assumption is that a self-aware AI will certainly want to preserve itself and will dominate humanity and human systems to ensure this happens. THAT is the massive leap I have trouble with. Could it happen? I guess, but why would it have to be that way? Why wouldn't benevolence be the most likely outcome of an advanced AI with a mission of self-preservation?

Think about it. So much of the ill humanity visits on its varied tribes stems from resource grabs in the name of self-preservation (on a very high level). If the AI in question is so advanced, wouldn't it follow that it would be faaaaaaaaaaaaar better at global resource distribution than humans are? Wouldn't it be able to more tidily see that it gets what it needs to perpetuate itself, while also arranging for humanity's resources to be distributed more efficiently, because it sees a way to do that which isn't mutually exclusive with its own existence? And why wouldn't it feel benevolence toward its creators?

I understand the doomsday arguments, but I feel that the premises behind the most fantastic doomsday scenarios can just as easily suggest a better outcome.