user-inactivated  ·  3357 days ago  ·  link  ·    ·  parent  ·  post: How long until a robot cries?

Rosalind Picard's group at the Media Lab has been working on figuring out what people are feeling for 20 years, and they've gotten very good at it. You don't need to know what boredom is to recognize bored faces. Kismet proved you don't need to do much to make the right expression in response. Being able to fake it well enough is as good as it gets in AI. I go back and forth on whether faking it well enough is really very different from the real thing. I fake the right facial expression all the time, after all. I'm pretty sure everyone does.
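
A minimal sketch of that point, with invented feature names and numbers (nothing to do with Picard's actual models): a classifier can attach the label "bored" to a face purely by comparing measurements against stored averages, with no concept of boredom anywhere in it.

    # Toy sketch only: hypothetical facial measurements, nearest-centroid matching.
    # Nothing in here "knows" what boredom is; it just compares numbers.
    import math

    # Invented average feature vectors: (brow_lower, eyelid_droop, mouth_corner_pull)
    CENTROIDS = {
        "bored":      (0.7, 0.8, 0.1),
        "interested": (0.2, 0.1, 0.6),
        "neutral":    (0.4, 0.3, 0.3),
    }

    def classify(face):
        """Return the label whose stored average is closest to the measured face."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label], face))

    print(classify((0.65, 0.75, 0.15)))  # prints "bored", by proximity alone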

rezzeJ  ·  3357 days ago  ·  link  ·  

Could it be argued that, with regard to human interaction, faking the right facial expression could still be seen as somewhat genuine? To fake the right expression, one has to have at least partly empathised with the other party. So even though the expression isn't a knee-jerk response, conscious processing of the situation and consideration of the correct response have still taken place.

To provide an analogy: imagine you were asked a question. To answer it, you searched Google, deduced the correct response and gave it to the asker. Even though you didn't intrinsically have that knowledge, you still displayed knowledge and tact in relaying the correct response. That answer isn't any less genuine than if you'd known it off the top of your head.

kleinbl00  ·  3357 days ago  ·  link  ·  

This is kind of the crux of the issue. David Levy says yes. Sherry Turkle says no. Levy's argument is "good enough is good enough" while Turkle's argument is "a human response does not imply a human motivation yet that's what we're banking on and it's gonna fuck us, here's a giant ream of data proving so."

To your analogy: you ask me a question because you want my answer, not because you want a Google search. If I have no idea and tell you I have no idea, you have my answer ("I have no idea") and that is a datapoint in our relationship. If I have no idea and tell you I'll look it up, you have a different answer ("I don't know, but I like you enough to put in some research"). If I have no idea, look it up surreptitiously and then tell you, you have still a different answer ("I know everything"). It doesn't take too much extrapolation to see that these three responses cover a whole bunch of relational nuance and provide a wide fan of outcomes... yet from a machine intelligence standpoint, you only get the 2nd... and you're mistaking "IF A THEN B" for "I like you enough to put in some research".
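
A rough sketch of that gap, with hypothetical functions standing in for the person and the machine: the human reply depends on knowledge, willingness and motive, while the machine has exactly one rule, so the "I like you enough to put in some research" reading is something the asker projects onto it.

    # Hypothetical sketch: the three human answers carry three different
    # relational meanings; the machine has only one branch: IF asked THEN search.
    def human_reply(knows_answer, willing_to_research, wants_to_seem_omniscient):
        if knows_answer:
            return "Here's the answer."
        if wants_to_seem_omniscient:
            return "Oh, it's X."  # looked it up secretly; reads as "I know everything"
        if willing_to_research:
            return "I don't know, but I'll look it up for you."  # effort signals care
        return "I have no idea."  # honesty is a datapoint in the relationship too

    def machine_reply():
        # The only rule it has; there is no motivation behind it.
        return "I don't know, but I'll look it up for you."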

rezzeJ  ·  3357 days ago  ·  link  ·  

Thanks for the books. I really must motivate myself to set aside some time to read instead of mindlessly trawling the web as much as I do. I've succeeded in reading a good few articles I never would have bothered with over the last few days, so hopefully that's on the up. I've been reading 'Our Mathematical Universe' for too long; I need to finish that.

I agree with your deconstruction of my analogy. I have no counter for now.

user-inactivated  ·  3357 days ago  ·  link  ·  

    You don't need to know what boredom is to recognize bored faces.

I thought the article's example of a robot noticing when someone's throat tightens was very good.