kleinbl00 · 368 days ago · post: You Are Not a Parrot

Not in the slightest.

You watch that and you think "ramp, ramp, ramp, ramp, box, leap onto platform, leap onto ground", etc.

Atlas goes "lidar pulse, spatial data, G-code, lidar pulse, spatial data, G-code, lidar pulse, spatial data, G-code" and that's just the first few billionths of a second because Atlas is a pile of NVidia Jetson components running at around 2.2 GHz.
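(Boston Dynamics' actual control stack is proprietary, so treat this as a toy Python sketch of the shape of that loop - every function and number in it is made up - but it shows the point: floats in, floats out, nowhere for "ramp" to live.)

```python
# Hypothetical sense-plan-act loop with no semantics anywhere in it.
# Nothing here is Atlas' real code; it just illustrates the argument.
import numpy as np

def lidar_pulse(rng):
    """Fake sensor: return a point cloud as raw (x, y, z) floats."""
    return rng.normal(size=(1024, 3))

def plan_footstep(points):
    """Fit a plane to the points ahead and pick a foot target on it.
    A ramp, a stair, and a box are all just least-squares coefficients."""
    A = np.c_[points[:, :2], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    target = np.array([0.3, 0.0])                    # 30 cm ahead
    z = coeffs[0] * target[0] + coeffs[1] * target[1] + coeffs[2]
    return np.array([target[0], target[1], z])

def to_motor_commands(foot_target):
    """Stand-in for inverse kinematics / G-code-like actuator setpoints."""
    return {"hip": foot_target[0] * 2.0, "knee": -foot_target[2] * 3.0}

rng = np.random.default_rng(0)
for _ in range(3):                                   # pulse, plan, actuate, repeat
    points = lidar_pulse(rng)
    print(to_motor_commands(plan_footstep(points)))
```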

THERE IS NO PART OF ATLAS' PROGRAMMING THAT UNDERSTANDS "RAMP."

NOTHING Boston Dynamics does has any contextualization in it whatsoever. It has no concept of "ramp" or "table" because there is absolutely no aspect of Boston Dynamics' programming that requires or benefits from semantics. Now - the interface we use to speak to it? It's got semantics, I'm certain of it. But that's for our convenience, not the device's.

You say "something like this" but you don't really mean that. You mean "something shaped like me." You're going "it's shaped like me, therefore it must be like me" because anthropomorphism is what we do. Study after study after study shows that humans can't tell the difference between an anthropomorphic mannequin responding to a random number generator and an anthropomorphic mannequin responding to their faces.

Riddle me this - would you have asked "wouldn't this change once the AI is embedded in something like this?" if "something like this" is an airsoft gun hooked up to a motion detector?

It's the same programming. It's the same feedback loops. One is more complex than the other, that's all.
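(Again, a made-up toy, not anyone's real firmware, but it shows what "same feedback loops, different complexity" means: one generic sense-decide-act loop, two different payloads plugged into it.)

```python
# Hypothetical illustration: one generic feedback loop, two different payloads.
# The loop has no idea whether it is aiming an airsoft gun or placing a foot.
def feedback_loop(sense, decide, act, steps=3):
    for _ in range(steps):
        act(decide(sense()))

# "Airsoft gun on a motion detector": one bit in, one bit out.
feedback_loop(
    sense=lambda: 1,                       # motion detector tripped
    decide=lambda motion: motion > 0,      # fire if anything moved
    act=lambda fire: print("pull trigger" if fire else "hold"),
)

# "Humanoid robot": more numbers in, more numbers out, same structure.
feedback_loop(
    sense=lambda: [0.2, 0.0, 0.05],        # fake lidar-derived foot target
    decide=lambda target: {"hip": target[0], "knee": target[2]},
    act=lambda joints: print("move joints", joints),
)
```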

More importantly, LLMs such as ChatGPT deliberately operate without context. Turns out context slows them down. It's all pattern recognition - "do this when you see lots of white" is hella faster than "lots of white equals snow, do this when you see snow". Do you get that? There's no snow. Snow doesn't exist. Snow is a concept us humans have made up for our own convenience; as far as the LLMs are concerned, the only thing that matters is adherence to the pattern.
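(Toy illustration, not how any actual LLM is built: the "pattern" route and the "concept" route below do the same thing, except one of them never has to know the word "snow" exists.)

```python
# Hypothetical toy contrast: pattern matching vs. a semantic detour.
# Both produce the same behavior; one never names "snow" at all.
def act_on_pattern(pixels):
    """Pattern route: react directly to the statistics of the input."""
    if sum(pixels) / len(pixels) > 0.8:    # "lots of white"
        return "engage traction control"
    return "drive normally"

def act_on_concept(pixels):
    """Semantic route: map the input to a human concept first, then act on it."""
    concept = "snow" if sum(pixels) / len(pixels) > 0.8 else "pavement"
    if concept == "snow":
        return "engage traction control"
    return "drive normally"

frame = [0.9] * 100                        # a mostly-white fake image
print(act_on_pattern(frame))               # -> engage traction control
print(act_on_concept(frame))               # -> same output, extra label in the middle
```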

All models are wrong; some are useful. - George Box

The object of the article is to determine the usefulness of the model, given that all models are wrong. Your argument is "won't the models get less wrong?"

No.

They never will.

That's not how they work.

If you try to make them work that way, they break.

The model will get better at fitting curves where it has lots of data, and exactly no better where it doesn't have data, and "context" is data that it will never, ever have.
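(Here's the curve-fitting point as a toy numpy example - made-up data, not any real model: the fit is good where the training data lives and garbage where it doesn't.)

```python
# Toy illustration of "better where it has data, no better where it doesn't":
# a polynomial fit of sin(x) on [0, 3] is fine in-sample and useless outside it.
import numpy as np

rng = np.random.default_rng(1)
x_train = rng.uniform(0, 3, 200)                 # lots of data here
y_train = np.sin(x_train) + rng.normal(0, 0.05, 200)

coeffs = np.polyfit(x_train, y_train, deg=5)     # fit the curve where the data lives

x_in  = np.linspace(0, 3, 5)                     # inside the data
x_out = np.linspace(6, 9, 5)                     # no data was ever seen here
print("in-range error:    ", np.abs(np.polyval(coeffs, x_in)  - np.sin(x_in)).max())
print("out-of-range error:", np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)).max())
```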

    A few minutes into our conversation, he reminded me that not long ago I would not have been considered a full person. “As recently as 50 years ago, you couldn’t have opened a bank account without your husband signing,” he said. Then he proposed a thought experiment: “Let’s say you have a life-size RealDoll in the shape of Carrie Fisher.” To clarify, a RealDoll is a sex doll. “It’s technologically trivial to insert a chatbot. Just put this inside of that.”

    Lemoine paused and, like a good guy, said, “Sorry if this is getting triggering.”

    I said it was okay.

    He said, “What happens when the doll says no? Is that rape?”

    I said, “What happens when the doll says no, and it’s not rape, and you get used to that?”

When we have this discussion about video games, it's just good clean masculine fun. The Effective Altruists among us can go "it's just a video game" because we can all see it's just a video game. But since the Effective Altruists among us don't actually believe all humans are worth being treated as human, they go "but see look if it's human-like enough to trigger your pareidolia, then obviously machines should have more rights than black people."

You're effectively asking "but once we're all fooled into thinking it's alive, shouldn't we treat it as if it were?" And that's exactly the point the article is arguing against.