kleinbl00 · 368 days ago · post: You Are Not a Parrot

    I agree with a lot of what you're saying, but wonder about the claim that we think "ramp, ramp, box" watching this video.

The context is "box jump." You can google that; it's a cross-training thing. The way humans codify "jump" is "high jump," "long jump," "rope jump," etc. "Jump" is the act of using your legs to leave the ground temporarily. We use it as a noun ("he didn't make the jump"), as a verb ("he jumped rope"), and as a metaphor ("that's quite a logical chasm to jump").

Logically and semantically, we parse the actions performed by the robot in terms of discrete tasks. Those tasks are assigned names. Those names have contexts.

The robot, on the other hand, "thinks"

    ; G1 = coordinated linear move: X/Y target (mm), Z height, E = extruder feed, F = feedrate (mm/min)
    G1 X50 Y55 Z0.4 E0.06838 F500
    G1 X51.2914 Y54.8196 Z0.416667 E0.06824 F500
    G1 X52.4896 Y54.3121 Z0.433333 E0.0681 F500
    G1 X53.5134 Y53.5134 Z0.45 E0.06795 F500
    G1 X54.294 Y52.4792 Z0.466667 E0.06781 F500
    G1 X54.7793 Y51.2806 Z0.483333 E0.06767 F500
    G1 X54.9375 Y50 Z0.5 E0.06753 F500
    G1 X54.7592 Y48.7248 Z0.516667 E0.06738 F500
    G1 X54.258 Y47.5417 Z0.533333 E0.06724 F500
    G1 X53.4692 Y46.5308 Z0.55 E0.0671 F500
    G1 X52.4479 Y45.7601 Z0.566667 E0.06696 F500
    G1 X51.2644 Y45.281 Z0.583333 E0.06681 F500
    G1 X50 Y45.125 Z0.6 E0.06667 F500
    G1 X48.741 Y45.3012 Z0.616667 E0.06653 F500
    G1 X47.5729 Y45.7962 Z0.633333 E0.06639 F500
    G1 X46.575 Y46.575 Z0.65 E0.06625 F500
    G1 X45.8142 Y47.5833 Z0.666667 E0.0661 F500
    G1 X45.3414 Y48.7517 Z0.683333 E0.06596 F500
    G1 X45.1875 Y50 Z0.7 E0.06582 F500
    G1 X45.3615 Y51.2429 Z0.716667 E0.06568 F500
    G1 X45.8503 Y52.3958 Z0.733333 E0.06553 F500
    G1 X46.6191 Y53.3809 Z0.75 E0.06539 F500
    G1 X47.6146 Y54.1317 Z0.766667 E0.06525 F500
    G1 X48.7679 Y54.5982 Z0.783333 E0.06511 F500
    G1 X50 Y54.75 Z0.8 E0.06496 F500
    G1 X51.2267 Y54.5781 Z0.816667 E0.06482 F500
    G1 X52.3646 Y54.0956 Z0.833333 E0.06468 F500
    G1 X53.3367 Y53.3367 Z0.85 E0.06454 F500
    G1 X54.0775 Y52.3542 Z0.866667 E0.06439 F500
    G1 X54.5378 Y51.2159 Z0.883333 E0.06425 F500
    G1 X54.6875 Y50 Z0.9 E0.06411 F500
    G1 X54.5177 Y48.7895 Z0.916667 E0.06397 F500
    G1 X54.0415 Y47.6667 Z0.933333 E0.06382 F500
    G1 X53.2925 Y46.7075 Z0.95 E0.06368 F500
    G1 X52.3229 Y45.9766 Z0.966667 E0.06354 F500
    G1 X51.1997 Y45.5225 Z0.983333 E0.0634 F500
    G1 X50 Y45.375 Z1 E0.06325 F500
    G1 X48.8057 Y45.5427 Z1.01667 E0.06311 F500
    G1 X47.6979 Y46.0127 Z1.03333 E0.06297 F500
    G1 X46.7517 Y46.7517 Z1.05 E0.06283 F500
    G1 X46.0307 Y47.7083 Z1.06667 E0.06268 F500
    G1 X45.5829 Y48.8164 Z1.08333 E0.06254 F500
    G1 X45.4375 Y50 Z1.1 E0.0624 F500
    G1 X45.603 Y51.1782 Z1.11667 E0.06226 F500
    G1 X46.0668 Y52.2708 Z1.13333 E0.06211 F500
    G1 X46.7959 Y53.2041 Z1.15 E0.06197 F500
    G1 X47.7396 Y53.9152 Z1.16667 E0.06183 F500
    G1 X48.8326 Y54.3567 Z1.18333 E0.06169 F500
    G1 X50 Y54.5 Z1.2 E0.06154 F500
    G1 X51.162 Y54.3366 Z1.21667 E0.0614 F500
    G1 X52.2396 Y53.8791 Z1.23333 E0.06126 F500
    G1 X53.1599 Y53.1599 Z1.25 E0.06112 F500
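For what it's worth, that listing looks like slicer output: circles around a fixed center with Z climbing and the radius tapering. A toy Python generator of the same shape is below - the center, radius, taper, and extrusion figure are stand-ins, not values recovered from the listing, and E is held constant as a simplification:

    import math

    # Emit G1 moves tracing a tapering circle while Z climbs
    # (spiral "vase mode"). All parameters are illustrative stand-ins.
    def spiral_gcode(cx=50.0, cy=50.0, r0=5.0, z0=0.4,
                     segs_per_rev=24, revs=2, z_per_rev=0.4,
                     taper_per_rev=0.25, e=0.068, feed=500):
        lines = []
        for i in range(revs * segs_per_rev + 1):
            frac = i / segs_per_rev                   # revolutions completed
            angle = math.pi / 2 - 2 * math.pi * frac  # clockwise from the top
            r = r0 - taper_per_rev * frac             # radius shrinks as we climb
            x = cx + r * math.cos(angle)
            y = cy + r * math.sin(angle)
            z = z0 + z_per_rev * frac
            lines.append(f"G1 X{x:.4f} Y{y:.4f} Z{z:.6f} E{e:.5f} F{feed}")
        return lines

    print("\n".join(spiral_gcode()))

Twenty lines of arithmetic in, a wall of coordinates out. Nothing in between has any idea what a circle is.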

    I think that discussions about what qualifies as artificial intelligence benefit from more careful consideration of what we consider to be our own intelligence.

I don't.

The difference is, I work with g-code. I work with servos. I work with stepper motors. I work with sensors. I work with feedback. I work with the building blocks that allow Boston Dynamics to do its magic - and it's not magic, and it certainly isn't thought. It's just code.
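To make "feedback" concrete: a position servo loop is a few lines of arithmetic run a few hundred times a second. Here's a toy Python sketch - the gains and the unit-mass physics are invented for illustration, nothing here is from Atlas or any real controller:

    # One tick of a proportional-derivative position loop on a simulated
    # unit mass. Every number is made up.
    def pd_step(position, velocity, setpoint, kp=40.0, kd=8.0, dt=0.01):
        error = setpoint - position         # how far off are we?
        force = kp * error - kd * velocity  # push harder the further off, damped by velocity
        velocity += force * dt              # integrate F = ma with m = 1
        position += velocity * dt
        return position, velocity

    pos, vel = 0.0, 0.0
    for _ in range(500):                    # 5 simulated seconds at 100 Hz
        pos, vel = pd_step(pos, vel, setpoint=1.0)
    print(f"position after 5 s: {pos:.4f}") # settles near the setpoint

That's the whole trick: sense, compare, correct, repeat. Stack enough of those loops and you get parkour; none of them knows what a box is.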

I know enough to know I don't know much about artificial intelligence. But I also know more than most any counter-party at this point for the simple fact that I understand machines. Most people don't, most people don't want to. What you're doing here is taking my arguments, disregarding them because you don't understand them, and operating from the assumption that nobody else does, either. And here's the thing: this stuff isn't hard to learn. It's not hidden knowledge. There's nothing esoteric about it. Every researcher in machine intelligence will tell you that machines aren't intelligent, and every chin-stroking pseudointellectual will go "but enough about your facts, let's talk about my feelings." And that is exactly the trap that everyone of any knowledge in the situation has been screaming about since Joseph Weizenbaum argued ELIZA wasn't alive in 1964.

It's a fishing lure. It's about the same size as a caddis fly, it's about the same shape, and it shimmers in a similar fashion? But it's never ever ever going to do more than catch fish, and woe be unto you if you are a trout rather than an angler.

    We have a heuristic for ramp developed over a similar feedback loop, like gently correcting your kid when they call a raccoon "kitty".

No, we don't. We have a semantic language that contains logical and contextual agreement about the concept of "ramp." It breaks down, too - is a hot dog a sandwich? Thing is, our days don't come apart when we disagree about the characterization of "sandwich." We correct our kid when they call raccoons "kitties" because the raccoon is a lot more likely to bite them if they try to pet it - and if it's a pet raccoon, we're less likely to correct them, too. When Boo calls Sully "Kitty" in Monsters, Inc. we laugh because he's obviously not a kitty, but Boo's use of the descriptor "kitty" gives us Boo's context for Sully: a cute fuzzy thing to love, as opposed to a scary monster. More than that, as humans we have inborn, evolved responses that we're just starting to understand - the "code" can be modified by the environment, but the starter set contains a whole bunch of modules we rely on without knowing it.

Atlas sees position, velocity and force. That's it.

    What if an extension of LLMs was developed that made use of long term memory and the ability to generate new categories?

Then it wouldn't be an LLM.

It's a look-up table. That's all it is. That's all it can be. You give it a set of points in space, and it will synthesize any interpolated point in between its known points. It has no "categories." It is not a classifier. It cannot be a classifier. If you make it operate as a classifier it will break down entirely. It will never hand a squashed grasshopper to Ally Sheedy and say "reassemble, Stephanie." What you're asking for is intuitive leaps, and the code cannot, will not, is not designed to, and is not capable of doing that.
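To see the shape of that claim, here's a toy inverse-distance interpolator in Python - not how a transformer is implemented, just a minimal picture of "synthesizing between known points." Every data point is made up:

    import math

    # A toy of the look-up-table claim: given known (input -> output) pairs,
    # the "model" can only blend what it has stored.
    known = [((0.0, 0.0), 1.0), ((1.0, 0.0), 2.0), ((0.0, 1.0), 3.0)]

    def interpolate(query, power=2.0):
        """Inverse-distance-weighted average of the stored outputs."""
        num = den = 0.0
        for point, value in known:
            d = math.dist(query, point)
            if d == 0.0:
                return value         # exact hit: pure table lookup
            w = 1.0 / d ** power     # closer points count for more
            num += w * value
            den += w
        return num / den

    print(interpolate((0.5, 0.5)))   # always lands between the stored outputs

Its output is always a blend of its stored values: it can land anywhere between them and nowhere outside them.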

    Why do you think that they'll never be able to work that way?

Because literally everyone who works with Markov chains says so? You're commenting on an article that spends several thousand words answering exactly this question. That's literally what it's about: researchers saying "it can't do this" and the general public going "but it sure looks like it does, clearly you're wrong because I want this to be true."
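For anyone who hasn't built one, a word-level Markov chain fits in a dozen lines. A toy sketch (the training string is invented) makes the point: the output can look fluent while nothing in the loop knows anything:

    import random
    from collections import defaultdict

    # Next word is drawn from whatever followed the current word in the
    # training text. The corpus below is made up.
    corpus = "the robot sees position the robot sees velocity the robot sees force"

    table = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        table[current].append(following)   # just counting what followed what

    def babble(start="the", length=10):
        out = [start]
        for _ in range(length):
            followers = table.get(out[-1])
            if not followers:              # dead end: no observed successor
                break
            out.append(random.choice(followers))
        return " ".join(out)

    print(babble())  # fluent-looking, comprehension-free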

    And moreover, what is the model trying and failing to replicate in your mind?

None of this "in my mind" nonsense - this isn't my opinion, or anyone else's opinion. This is the basis by which LLMs work: they don't think, they don't feel, they don't intuit. They synthesize between data points within a known corpus of data. That data is utterly bereft of the contextualization that makes up the whole of human interaction. There's no breathing life into it, there's no reaching out and touching David's finger, there's no "and then a miracle occurs." The difference between Cleverbot and GPT-4 is the difference between Tic Tac Toe and Chess - the rules are more sophisticated, the board is larger, and it's all just data and rules.

A chess board is six types of pieces on 64 squares. It does not think. But suppose it were suddenly six squidillion types of pieces on 64 gazillion squares - would it think?
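You can even put a loose number on the thought experiment. Plain chess already allows at most 13^64 raw placements - each square empty or holding one of twelve piece types - and nobody thinks the board is pondering:

    # Loose upper bound on raw chess-board placements: 64 squares, each
    # empty or one of 12 piece types (6 types x 2 colors). It ignores every
    # rule of chess, but the point stands: the number is already
    # astronomical, and the board still doesn't think.
    print(13 ** 64)            # about 2.0e71
    print(len(str(13 ** 64)))  # 72 digits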

Complexity does not equal intelligence. It never will.