Interesting article on the future of legal rights of an artificial intelligence. Wonder what you guys think.

kleinbl00:

The Sherry Turkle book linked in the article is a much more nuanced, much more scientific, much more researched look at the discussions in the article itself. She's not saying "there’s something immoral about tricking people into thinking that there’s someone there, even in something less creepy than sex, like elder care robots." Alone Together makes the point that we don't need very many signals for "human" and that we'll substitute the rest from our own impression of the situation - just like online, anything I don't know about you I'll substitute in from my own experience. My impression of you is about 95% made up of "me" and "my experiences" and when 95% of our interactions with robots are made up of us and our experiences, that means a 5% "human" robot will pass for fully human.

The implications of this are far-reaching: you can easily see how a perspective like "we will allow robots to own property" and silly flights of fancy like that get started. People will essentially grant human-acting machines human-like agency without emotionally internalizing that those machines aren't human at all. This is the basis of Turkle's argument: we need to understand that things that look and act alive to us do so because we give them one hell of a benefit of the doubt.

I haven't read Kaplan's book and I probably won't. I've yet to find a pop-sci treatment of artificial intelligence that didn't annoy me. I'll say this: Sherry Turkle spent two or three chapters dismantling the similar book Love and Sex with Robots, which looks to be the better book overall. Turkle's objection was essentially that human society will suffer if we come to accept that 5% humanity from our devices when the people around us are offering 100% by default.

And Turkle's book doesn't include statements like

    If I didn’t tip, the robot might shoot a laser at me. Or hack my credit records.
