malen  ·  605 days ago  ·  link  ·    ·  parent  ·  post: You Are Not a Parrot

    Atlas sees position, velocity and force. That's it.

I guess what I'm getting at is the idea that one could take an equally reductionist view of the human mind, it's just that our brains are optimized beyond the point of being possible to interpret. We see, we hear, we feel, we smell, and all that information is plugged into a complex logical system, along with our memories, categories, and any evolved instincts to dictate our actions. And I think that the systems you're describing in the paragraph preceding this quote don't lack computer analogs. If you'd like to get into the technical weeds of that I'd be interested in pursuing it.

    researchers saying "it can't do this" and the general public going "but it sure looks like it does, clearly you're wrong because I want this to be true."

What about Christopher Manning, the other computational linguist mentioned in this article?

I should be more precise about what I'm trying to say, 'cause I'm certainly not one of those nuts who believes that LaMDA or ChatGPT are sentient. I'm engaging with you because you seem knowledgeable on a subject where I find myself at odds with many really smart people that I agree with on lots of other stuff.

The central disconnect that I'm interested in learning more about is the idea of humans as exceptional, as uniquely deserving of our respect and compassion. In this article this stance is presented as abhorrent almost a priori, a view of humanity that should be met with a sigh and a look toward the camera. I'm gathering that you agree with Bender in wanting to posit humanity as an axiom.

I kind of take a panpsychist view on consciousness, and fundamentally what I'm arguing is that that perspective allows for manmade constructs like computer programs to attain some degree of consciousness. I'm curious if/where the disconnect (touching David's finger) arises there in your view, or if we're simply arguing from different sets of premises.

With all that being said, I can absolutely understand a lack of interest in engaging on this topic or seeing it as intellectually frivolous given that we aren't even able to convince people to treat other humans with respect.





kleinbl00  ·  604 days ago  ·  link  ·  

    I guess what I'm getting at is the idea that one could take an equally reductionist view of the human mind, it's just that our brains are optimized beyond the point of being possible to interpret.

That's not accurate, though. If you take an "equally reductionist view" of the human mind you are deliberately misunderstanding the human mind. Reducing Atlas (or GPT) to a set of instructions isn't reductionist, it's precise. It's literally exactly how these machines work. It's not a model, it's not an approximation, it's not an analogy, it's the literal, complete truth.

To be clear: we do not have the understanding of cognizance or biological "thought" necessary to reduce its function to this level. Not by a country mile. More than that, the computational approach we thought most closely matched "thinking" - neural networks - does not work for LLMs.

To summarize:

- We only have a rough idea how brains think

- We have an explicit and complete idea how LLMs work

- What we do know does not match how LLMs work

- Attempting to run an LLM the way the brain works fails

    We see, we hear, we feel, we smell, and all that information is plugged into a complex logical system, along with our memories, categories, and any evolved instincts to dictate our actions.

And all of that is hand-wavey, poorly-understood, broad-strokes, "here are our theories" territory. We know, for example, that there's a hierarchy to sensory input and autonomous physiological response - smell sits deeper in the brain than sight, sound, touch, or taste, and has greater effects on recall. Why? Because we evolved smell before we evolved binocular color vision or stereoscopic hearing, and the closer a process is to survival, the more reptilian our thinking gets. This hasn't been evolutionarily advantageous for several thousand generations, and yet here we are - with the rest of our senses and thought processes compensating in ways we barely understand.

Atlas really is as simple as a bunch of code. I say that having helped set up an industrial robot that uses many of the same parts Boston Dynamics does. It speaks the same code as my CNC mill. It goes step by step through "do I measure something" or "do I move something." GPT-whatever is the same: "if I see this group of data, I follow up with this group of data, as modified by this coefficient that gets tweaked black-box style depending on what results are desired." But don't take my word for it:

    What about Christopher Manning, the other computational linguist mentioned in this article?

Manning is wrong, and is covering up his wrongness by saying "what's meaning anyway, maaaaaan?" Arguing that kids figure out language in a self-supervised way has been wrong since Piaget.

    The central disconnect that I'm interested in learning more about is the idea of humans as exceptional in deserving of our respect and compassion.

Okay, where do you draw the line? 'cuz the line has to be drawn. You can't go through your day without encountering dozens or hundreds of antibacterial products or processes, for example. I don't care how vegan you are, you kill millions of living organisms every time you breathe. Bacteria are irreducibly alive: they respond to stimulus, they consume energy, they reproduce. The act of making more bacteria is effortful. ChatGPT? I can reproduce that endlessly. The cost of 2 ChatGPTs is the same as the cost of 1 ChatGPT. Neither can exist without an exquisite host custom-curated by me. It won't reproduce, generations of ChatGPT won't evolve, and there is no "living" ChatGPT to distinguish from a "dead" ChatGPT.

Why does ChatGPT deserve more rights than bacteria?

    I'm gathering that you agree with Bender in wanting to posit humanity as an axiom.

I find that ethical individuals have the ability to query their ethics, and unethical individuals have the ability to query the existence of ethics. Put it this way: I can explain why humans occupy a higher ethical value than bacteria. I have explained why bacteria occupy a higher ethical value than ChatGPT. But I also don't have to explain this stuff to most people.

Look: golems are cautionary tales about humans making not-human things that look and act human and fuck us all up. Golems are also not the earliest example of this: making non-humans that act human is in the Epic of Gilgamesh. It's the origin story of Abrahamic religion: god breathed life into Adam and Eve while Satan just is. This is a basic dividing line in ethics: people who know what people are, and people who don't. This isn't a "me" question.

    I kind of take a panpsychist view on consciousness, and fundamentally what I'm arguing is that that perspective allows for manmade constructs like computer programs to attain some degree of consciousness.

Yeah and taking an astrological view of the solar system allows for the position of Mars to influence my luck. That doesn't make astrology factual, accurate or useful.

Here's the big note: How "you feel" and how "things work" are not automatically aligned. If you are wrong you will not be able to triangulate your way to right. And this entire discussion is people going "yeah, but I don't like those facts."

The facts really don't care.

malen  ·  604 days ago  ·  link  ·  

    - We only have a rough idea how brains think

    - We have an explicit and complete idea how LLMs work

    - What we do know does not match how LLMs work

    - Attempting to run an LLM the way the brain works fails

I don't understand how this isn't going back on your earlier claim that complexity doesn't equal intelligence? Our lack of understanding of how the brain works doesn't provide any evidence of it possessing some supernatural faculty that a computer couldn't (with currently non-existent but feasible tech) replicate. And your fourth claim here is demonstrably false: even ChatGPT is based on a neural net. Obviously this does not mean it is working the way the brain works, but it is absolutely complex enough to exhibit emergent behaviors that we could never make sense of.

In fairness to you, ChatGPT could be written as a set of instructions, but that's only because it's no longer learning in its current state. The same could be said of a snapshot of a human mind.

    Atlas really is as simple as a bunch of code.

You're responding to a point that I haven't made here, and drawing a false equivalence between discussions of hardware vs software. The discussion as we began it was to imagine a sophisticated learning model placed inside hardware that allows it to gather information about the world around it, the "access to real-world, embodied referents" mentioned in the article.

    Okay, where do you draw the line? 'cuz the line has to be drawn.

You have to draw the line too! Once again, I am not arguing that ChatGPT is alive, nor that it is conscious. I do believe that being alive and being conscious do not necessarily entail one another, due to my beliefs about consciousness, which is clearly something we are not going to agree on.

    I find that ethical individuals have the ability to query their ethics, and unethical individuals have the ability to query the existence of ethics.

If you think that what I'm doing is the latter and not the former then it might not even matter what I'm typing here. I'm concerned that our ethical system is largely based on the idea of being nice to humans and things that are sufficiently like humans, precisely because it leads to Type 1 and Type 2 errors of being cruel to dogs or falling in love with chatbots respectively. And referring to myths in your "facts don't care about your feelings" argument walks a really bizarre line.

    Yeah and taking an astrological view of the solar system allows for the position of Mars to influence my luck. That doesn't make astrology factual, accurate or useful.

This is a ridiculous comparison, as astrology is provably false while any theory of consciousness is necessarily unprovable one way or the other. Certainly some are more plausible than others, but you need to subscribe to some idea of what consciousness is to even begin to have this discussion, and you're only espousing negative views on consciousness, save for fairy-tale appeals to your feelings that humans have got it and computers don't.

Panpsychism appeals to me because I agree that it's obvious that humans are conscious, and it's obvious that my dog is conscious, and it sure seems like chickens are conscious, but it starts to get fuzzy around there. It would be ridiculous to lay down some arbitrary line between chickens and another species, or to decide that some particularly gifted chickens are conscious but not others. It seems equally strange to me to finally decide that viruses are conscious, but deny consciousness to other self-replicating processes just because viruses have DNA.

What seems most sensible to me is to then conclude that consciousness is not binary, but something all things have to some extent. This is based on how I "feel," but so is every other concept of what consciousness is. But that's what this whole discussion is about.

kleinbl00  ·  604 days ago  ·  link  ·  

    I don't understand how this isn't going back on your earlier claim that complexity doesn't equal intelligence?

"unknown" does not equal "complex." "complex" does not equal "known." What we know of "brains", regardless of its complexity, in no way parallels Markov chains. What we know of LLMs, on the other hand, is entirely Markov chains. The Venn diagram of "unknown" and "complex" is two separate circles; the Venn diagram of "known" and "LLMs" is one circle.
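For reference, a Markov chain in the strict sense can be sketched in a few lines - the next state is drawn from a table keyed on the current state alone, with no memory of anything earlier (the transition table below is invented purely for illustration):

```python
import random

# A minimal Markov chain over words: the next word depends only on the
# current word, never on anything earlier in the sequence.
# This transition table is invented purely for illustration.
transitions = {
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["ran"],
    "sat": ["down"],
    "ran": ["away"],
}

def generate(start, steps, rng):
    """Walk the chain from `start` for up to `steps` transitions."""
    state = start
    out = [state]
    for _ in range(steps):
        choices = transitions.get(state)
        if not choices:
            break  # absorbing state: nothing follows
        state = rng.choice(choices)  # memoryless: depends on `state` alone
        out.append(state)
    return out

print(generate("the", 3, random.Random(0)))
```

Whether conditioning on an ever-growing context window still counts as a "state" in this strict sense is exactly what gets contested a few comments down.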

    Our lack of understanding of how the brain works doesn't provide any evidence of it possessing some supernatural faculty that a computer couldn't (with currently non-existent but feasible tech) replicate.

Not saying it does. Saying you can't draw any parallels between the two because they have nothing in common. I've said that three times three different ways.

    And your fourth claim here is demonstrably false: even ChatGPT is based on a neural net.

Your link says "neural net" twice. It refers to lookup tables for the rest of its length. Here's IBM to clarify. That Wolfram article you didn't read lists "neural" 170 times, by way of comparison - but it actually explains how they work, and how the "neural network" of large language models is a LUT.
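For concreteness: after training, a network's weights are fixed numbers, and inference is a deterministic computation over them. A minimal sketch of that "frozen" property, with a single invented stand-in neuron (real models have billions of weights):

```python
import math

# Once training stops, the model's "knowledge" is a frozen block of
# numbers; inference just re-runs the same arithmetic over them.
# These two weights stand in for billions (values invented).
FROZEN_WEIGHTS = [0.8, -0.3]
FROZEN_BIAS = 0.1

def infer(x):
    """Deterministic: the same input always yields the same output."""
    z = sum(w * xi for w, xi in zip(FROZEN_WEIGHTS, x)) + FROZEN_BIAS
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

print(infer((0.5, 0.5)) == infer((0.5, 0.5)))  # True: nothing updates between calls
```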

I've been presuming you're discussing this in good faith because you said you were. You're making me doubt the veracity of that statement because no matter how many times I point out that this isn't about "beliefs" you keep using yours to underpin your logic. We were talking about Atlas because thenewgreen brought up Atlas. Before you joined the conversation I pointed out that the model does not learn once it has been trained. You keep skipping over this. And speaking as a human in a world full of humans, I'm uninterested in a new set of ethics that does not prioritize humans.

Would you like to try again in a non-combative way? Because I can have this conversation ad nauseam. But I have to choose to do so.

malen  ·  604 days ago  ·  link  ·  

Genuine apologies for being combative. I got a little worked up at the insinuation that I am somehow trying to dispose of ethics. I do think that you could stand to be less combative yourself; even if you think I have absolutely nothing to offer you, you must think there's some value to having this discussion if you've gone this long, and I have to choose to continue as well.

I did in fact read the Wolfram article when it was posted here a few weeks ago. I appreciated the more straightforward overview of the architecture in the link I shared, without the expository information. Referring to a model like ChatGPT as a lookup table is obfuscating, and that claim is not made by any experts, including in the Wolfram article. I would maybe permit that it's something like a randomized lookup table, in that if its randomization happened the same way every time, then any sequence of inputs and outputs would result in the same eventual output. But that's ignoring both the randomization and the feedback loop of rereading its own outputs, not to mention the presence of a neural net in the model.

Lookup tables aren't mentioned in the article I linked at all. There's plenty of description of the tokenization and encoding of input data, which is superficially similar. The presence of the neural net, the attention scheme, and the encoding of the relative position of words in an input phrase allows for interaction between all the words in a sequence, and that interaction is tempered by all the sequences in its training data.
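That interaction - each position compared against every other, the output a similarity-weighted mix of values - is, schematically, dot-product attention. A minimal single-query sketch in plain Python (all vectors here are invented for illustration; real models use learned projections and many heads):

```python
import math

def softmax(xs):
    """Normalize scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Single-query scaled dot-product attention."""
    d = len(query)
    # Similarity of the query to every position's key.
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Output is a weighted mix of every position's value.
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]

keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
out = attention([1.0, 0.0], keys, values)
print(out)  # leans toward values whose keys resemble the query
```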

I don't mean to skip over the fact that ChatGPT itself is not learning anymore. I agree that it's important, but I am discussing a hypothetical scenario that will become reality before long. It's perfectly capable of continuously learning from its conversations (although highly inefficiently) should OpenAI choose to do that, although there are obvious logistical reasons they don't want a bunch of random people inputting its training data for them.

To say that LLMs are entirely Markov chains is a misapprehension. An LLM like ChatGPT is not memoryless even in its current form, because of its internal feedback loop. If you would instead argue that the state we're referring to is not the most recent statement but instead the full conversation, then I would argue a human speaker IS comparable to a Markov chain in any particular conversation. The human speaker obviously differs in that they can both update their "model" over the course of the conversation and carry those updates forward into future conversations, but the hurdles for a computer model accomplishing that are logistical, not inherent. Am I missing something there? Even Wolfram says:

    What ChatGPT does in generating text is very impressive—and the results are usually very much like what we humans would produce. So does this mean ChatGPT is working like a brain? Its underlying artificial-neural-net structure was ultimately modeled on an idealization of the brain. And it seems quite likely that when we humans generate language many aspects of what’s going on are quite similar.

    When it comes to training (AKA learning) the different “hardware” of the brain and of current computers (as well as, perhaps, some undeveloped algorithmic ideas) forces ChatGPT to use a strategy that’s probably rather different (and in some ways much less efficient) than the brain. And there’s something else as well: unlike even in typical algorithmic computation, ChatGPT doesn’t internally “have loops” or “recompute on data”. And that inevitably limits its computational capability—even with respect to current computers, but definitely with respect to the brain.

    It’s not clear how to “fix that” and still maintain the ability to train the system with reasonable efficiency. But to do so will presumably allow a future ChatGPT to do even more “brain-like things”.

ChatGPT is not a brain, but I don't think you can reasonably claim it has nothing in common with one. And I still don't see any reason why a brain-like model could not be created.
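The feedback loop in question can be sketched as a toy autoregressive generator: each output is appended to the context, so every prediction conditions on the whole transcript so far rather than only the previous token (the scoring rule here is an invented stand-in for a real model):

```python
# Toy autoregressive loop: each generated token is fed back into the
# context, so the "state" seen by the next prediction is the entire
# transcript so far, not just the most recent token.

def predict_next(context):
    # Invented stand-in for a model: the choice depends on the *whole*
    # context (here, its length), not only the last token.
    vocab = ["yes", "no", "maybe"]
    return vocab[len(context) % len(vocab)]

context = ["hello"]
for _ in range(4):
    context.append(predict_next(context))  # output fed back as input

print(context)  # ['hello', 'no', 'maybe', 'yes', 'no']
```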

I'll leave aside the ethical questions (no beliefs here!) since I don't think that we'll come to an agreement. I trust that you'll be kind to my dog, and that's all that really counts.

kleinbl00  ·  604 days ago  ·  link  ·  

    Genuine apologies for being combative. I got a little worked up at the insinuation that I am somehow trying to dispose of ethics. I do think that you could stand to be less combative yourself; even if you think I have absolutely nothing to offer you, you must think there's some value to having this discussion if you've gone this long, and I have to choose to continue as well.

Let's stop down and discuss your rhetorical strategy real quick, then.

_______________________________

I know what I'm talking about. You're welcome to disagree; regardless of your opinion of my knowledge, you have to acknowledge that I think I know what I'm talking about, and that should inform your conversation with me. On the other hand, you don't know what you're talking about. This isn't my observation, this is your profession: against my knowledge, you have presented your naivete. But more than that, you imply our positions are equal: my dozen books' worth of casual reading on the subject of artificial intelligence has no more weight than your choice to presume it's unknowable.

Could I stand to be less combative? Always. Are you baiting me into combat? Irrefutably. You are disregarding my knowledge, you are discounting my experience, you are putting forth the maxim that what I know is worth nothing, since you know nothing and we're on equal footing here.

You should also know that I was trained to survive depositions. As a hired, professional expert, it was not uncommon for someone in my position to be made to look like an idiot in front of a jury. This is easier than one might think because you don't need to mock and ridicule expertise, you only need to mock and ridicule the expert. You comment on how he tied his tie. You ask him about his suit. You point out the non-existent fleck of breakfast on his lapel. You read a section of his report and ask if he deliberately left out the apostrophe; you read another section and painstakingly work through a complicated phrase to mutually dumb it down into the simplest possible terms, then you ask him in front of the jury why he used such complicated words if the simplest terms are the equivalent.

These are the strategies the uneducated use to discredit the educated. It's archetypal Reddit-speak; you spend two minutes googling something that you don't understand and don't want to and then you dangle it in front of the person who actually knows what he's talking about and say "somewhere in here is something that disagrees with what you said." And since the audience is also made up of people with no expertise who always want to feel smarter, you'll get the upvotes you need to score the points to "win" the debate.

Here's the problem. The expert knows you're wrong. He's never stopped being an expert. And you're not debating him. You're not attempting to learn anything from him. You're trying to score useless internet points off an unseen audience of equally uneducated individuals because the actual experts left the forum long, long, long ago.

_________________________

Fundamentally? I have nothing to learn from you. Theoretically? You're acting like you think you might have something to learn from me. Yet you're coming at me from a position of innocence, you aren't reading your own sources closely enough to understand them and you're putting forth the fundamental argument that since you (think) you scored a rhetorical papercut or two I'm going to go "gee, you're right, this is all fundamentally unknowable". Why would I do that?

Ultimately, you're arguing that something I understand innately - ME - should not be considered superior to something else that I understand through effort and study - chatbots - because since you don't understand it, there's no way I can. Here, watch:

    ChatGPT is not a brain, but I don't think you can reasonably claim it has nothing in common with one.

That's quite clear. You, personally, don't think that I, personally, can reasonably claim ChatGPT has nothing in common with a brain. I'm five comments deep in responses to your hypothesis but you can't let go of what you think having equal merit to what can be known. And whether or not I'll be kind to your dog is not all that really counts: what counts is you wish to accord rights to a computer program without bothering to understand why experts think that's a bad idea.

malen  ·  604 days ago  ·  link  ·  

I think the internet has left you far too jaded. I wouldn't claim to be an expert on AI, but I do have a degree in statistics and I work as a consultant. Implementing machine learning models is a part of my day to day life. This obviously doesn't necessitate rigorous understanding of the underlying math, but I know more than the average person. I certainly know what a Markov chain is.

The naivete I was putting forward was about philosophy of mind, something I know hardly anything about. I attempted to engage with you on that subject, but you eventually refused, instead shifting focus to technical aspects of the question in which you put forward many inaccuracies. I did get pretty frustrated being talked down to by someone who is saying things that are simply wrong. I should have made my knowledge base more clear at the outset, and been more clear about what I was hoping to learn.

The inaccuracies I'm referring to (claiming LLMs are lookup tables, are not neural nets, can not continue to learn, are Markov chains) are not "papercuts," they are fundamental to your dismissal of my discussion, and show a lack of understanding of the problem that I was hoping to discuss. If we're getting to the point of psychoanalyzing one another, then I might guess that you brought these terms up in an attempt to shut me down with math-y sounding words that came up in your casual reading.

My use of the word "think" was an attempt to be polite. I'll rephrase: you cannot reasonably claim that existing LLMs have nothing in common with brains. The sources you have provided do not claim they have nothing in common with brains, nor do they posit any insurmountable obstacle to them becoming brain-like. YOU are hand waving, YOU are starting from what you feel should be right, and YOU are too focused on rhetoric to stop and examine whether you might be wrong. We may be commenting on an article from one expert saying it's a bad idea, but appealing to experts in general agreeing with you is treating this as if it's a solved issue. It is not.

This was my first attempt at having a discussion on the internet in 5-ish years and I think you put me off for good. I understand, given what you said just now (re: me engaging in Reddit-speak), if you're done responding. I don't intend to write another post, but I want to let you know that if you do have more to say I will read it, just 'cause I think it's unfair to try and get the last word and then take my ball and go home.

am_Unition  ·  603 days ago  ·  link  ·  

Please stick around, I enjoyed your contributions!