malen  ·  367 days ago  ·  post: You Are Not a Parrot

Genuine apologies for being combative. I got a little worked up at the insinuation that I am somehow trying to dispose of ethics. I do think that you could stand to be less combative yourself; even if you think I have absolutely nothing to offer you, you must see some value in having this discussion if you've kept it going this long, and the same is true of my choosing to continue.

I did in fact read the Wolfram article when it was posted here a few weeks ago. I appreciated that the link I shared gives a more straightforward overview of the architecture, without the expository material. Referring to a model like ChatGPT as a lookup table is obfuscating, and that claim is not made by any experts, including in the Wolfram article. I would maybe permit that it's something like a randomized lookup table, in the sense that if its randomization happened the same way every time, then the same sequence of inputs would always produce the same eventual output. But that ignores both the randomization and the feedback loop of rereading its own outputs, not to mention the presence of a neural net in the model.
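
To make that concrete, here's a toy sketch (my own illustration, not from either article): the model maps a context to a distribution over next tokens, and the only nondeterminism is the sampling step. Fix the seed and the whole "randomized lookup" becomes reproducible.

    import random

    # Hypothetical toy distribution, nothing like ChatGPT's actual scale:
    # context -> weighted choices for the next token.
    NEXT_TOKEN_PROBS = {
        ("the", "dog"): [("barked", 0.6), ("slept", 0.3), ("spoke", 0.1)],
    }

    def sample_next(context, rng):
        tokens, weights = zip(*NEXT_TOKEN_PROBS[context])
        return rng.choices(tokens, weights=weights, k=1)[0]

    rng = random.Random(42)                    # same seed -> same output every run
    print(sample_next(("the", "dog"), rng))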

Lookup tables aren't mentioned in the article I linked at all. There's plenty of description of the tokenization and encoding of input data, which is superficially similar to a table lookup. But the presence of the neural net, the attention scheme, and the encoding of the relative positions of words in an input phrase allow for interaction between all the words in a sequence, and that interaction is tempered by all the sequences in its training data.
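
Here's roughly what I mean by interaction, as a minimal single-head self-attention sketch (illustrative only; the real architecture stacks many heads and layers, and learns its positional encodings):

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])    # pairwise scores: every word meets every word
        scores -= scores.max(axis=-1, keepdims=True)
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V                         # each output row mixes all positions

    rng = np.random.default_rng(0)
    seq_len, d = 4, 8
    X = rng.normal(size=(seq_len, d))
    X += np.sin(np.arange(seq_len))[:, None]       # crude stand-in for position information
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)     # (4, 8): whole sequence interacted

No lookup table does that: the output for each word depends on every other word it arrived with.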

I don't mean to skip over the fact that ChatGPT itself is not learning anymore. I agree that it's important, but I am discussing a hypothetical scenario that will become reality before long. It's perfectly capable of continuously learning from its conversations (although highly inefficiently) should OpenAI choose to allow that, although there are obvious logistical reasons they don't want a bunch of random people supplying its training data for them.
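
The hypothetical loop is mundane; a hedged sketch is below, where `model` and `tokenize` are stand-ins rather than anything OpenAI actually runs:

    import torch

    # One gradient step of next-token prediction per finished conversation.
    # Possible in principle; costly and risky (data poisoning) at scale.
    def update_on_conversation(model, optimizer, conversation, tokenize):
        ids = tokenize(conversation)                  # token ids, shape (1, T)
        inputs, targets = ids[:, :-1], ids[:, 1:]     # predict each next token
        logits = model(inputs)                        # (1, T-1, vocab_size)
        loss = torch.nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()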

To say that LLMs are entirely Markov chains is a misapprehension. An LLM like ChatGPT is not memoryless even in its current form, because of its internal feedback loop. If you would instead argue that the relevant state is not the most recent statement but the full conversation, then I would argue a human speaker IS comparable to a Markov chain in any particular conversation. The human speaker obviously differs in that they can both update their "model" over the course of the conversation and carry those updates forward into future conversations, but the hurdles to a computer model accomplishing that are logistical, not inherent. Am I missing something there? (I'll sketch the contrast in a toy example after the quote below.) Even Wolfram says:

    What ChatGPT does in generating text is very impressive—and the results are usually very much like what we humans would produce. So does this mean ChatGPT is working like a brain? Its underlying artificial-neural-net structure was ultimately modeled on an idealization of the brain. And it seems quite likely that when we humans generate language many aspects of what’s going on are quite similar.

    When it comes to training (AKA learning) the different “hardware” of the brain and of current computers (as well as, perhaps, some undeveloped algorithmic ideas) forces ChatGPT to use a strategy that’s probably rather different (and in some ways much less efficient) than the brain. And there’s something else as well: unlike even in typical algorithmic computation, ChatGPT doesn’t internally “have loops” or “recompute on data”. And that inevitably limits its computational capability—even with respect to current computers, but definitely with respect to the brain.

    It’s not clear how to “fix that” and still maintain the ability to train the system with reasonable efficiency. But to do so will presumably allow a future ChatGPT to do even more “brain-like things”.
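
Here's the toy contrast I promised (my own illustration): an order-1 Markov chain picks the next word from the previous word alone, history discarded, whereas an LLM conditions on its entire context window. That's exactly why "the state is the full conversation" is the only way to force an LLM into the Markov framing.

    import random

    transitions = {
        "the": ["dog", "cat"],
        "dog": ["barked"],
        "cat": ["slept"],
    }

    def markov_step(word, rng):
        return rng.choice(transitions[word])    # memoryless: only `word` matters

    rng = random.Random(0)
    word, text = "the", ["the"]
    for _ in range(2):
        word = markov_step(word, rng)
        text.append(word)
    print(" ".join(text))                       # e.g. "the cat slept"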

ChatGPT is not a brain, but I don't think you can reasonably claim it has nothing in common with one. And I still don't see any reason why a brain-like model could not be created.

I'll leave aside the ethical questions (no beliefs here!) since I don't think that we'll come to an agreement. I trust that you'll be kind to my dog, and that's all that really counts.

kleinbl00  ·  367 days ago

    Genuine apologies for being combative. I got a little worked up at the insinuation that I am somehow trying to dispose of ethics. I do think that you could stand to be less combative yourself; even if you think I have absolutely nothing to offer you, you must see some value in having this discussion if you've kept it going this long, and the same is true of my choosing to continue.

Let's stop and discuss your rhetorical strategy real quick, then.

_______________________________

I know what I'm talking about. You're welcome to disagree; regardless of your opinion of my knowledge, you have to acknowledge that I think I know what I'm talking about, and that should inform your conversation with me. On the other hand, you don't know what you're talking about. This isn't my observation, this is your profession: against my knowledge, you have presented your naivete. But more than that, you imply our positions are equal: my dozen books' worth of casual reading on the subject of artificial intelligence has no more weight than your choice to presume it's unknowable.

Could I stand to be less combative? Always. Are you baiting me into combat? Irrefutably. You are disregarding my knowledge, you are discounting my experience, you are putting forth the maxim that what I know is worth nothing, since you know nothing and we're on equal footing here.

You should also know that I was trained to survive depositions. As a hired, professional expert, it was not uncommon for someone in my position to be made to look like an idiot in front of a jury. This is easier than one might think because you don't need to mock and ridicule expertise, you only need to mock and ridicule the expert. You comment on how he tied his tie. You ask him about his suit. You point out the non-existent fleck of breakfast on his lapel. You read a section of his report and ask if he deliberately left out the apostrophe; you read another section and painstakingly work through a complicated phrase to mutually dumb it down into the simplest possible terms, then you ask him in front of the jury why he used such complicated words if the simplest terms are the equivalent.

These are the strategies the uneducated use to discredit the educated. It's archetypal Reddit-speak; you spend two minutes googling something that you don't understand and don't want to understand, and then you dangle it in front of the person who actually knows what he's talking about and say "somewhere in here is something that disagrees with what you said." And since the audience is also made up of people with no expertise who always want to feel smarter, you'll get the upvotes you need to score the points to "win" the debate.

Here's the problem. The expert knows you're wrong. He's never stopped being an expert. And you're not debating him. You're not attempting to learn anything from him. You're trying to score useless internet points off an unseen audience of equally uneducated individuals because the actual experts left the forum long, long, long ago.

_________________________

Fundamentally? I have nothing to learn from you. Theoretically? You're acting like you think you might have something to learn from me. Yet you're coming at me from a position of innocence, you aren't reading your own sources closely enough to understand them, and you're putting forth the fundamental argument that since you (think you) scored a rhetorical papercut or two, I'm going to go "gee, you're right, this is all fundamentally unknowable." Why would I do that?

Ultimately, you're arguing that something I understand innately - ME - should not be considered superior to something else that I understand through effort and study - chatbots - because since you don't understand it, there's no way I can. Here, watch:

    ChatGPT is not a brain, but I don't think you can reasonably claim it has nothing in common with one.

That's quite clear. You, personally, don't think that I, personally, can reasonably claim ChatGPT has nothing in common with a brain. I'm five comments deep in responses to your hypothesis, but you can't let go of the idea that what you think has equal merit to what can be known. And whether or not I'll be kind to your dog is not all that really counts: what counts is that you wish to accord rights to a computer program without bothering to understand why experts think that's a bad idea.

malen  ·  366 days ago

I think the internet has left you far too jaded. I wouldn't claim to be an expert on AI, but I do have a degree in statistics and I work as a consultant. Implementing machine learning models is a part of my day-to-day life. This obviously doesn't necessitate a rigorous understanding of the underlying math, but I know more than the average person. I certainly know what a Markov chain is.

The naivete I was putting forward was about philosophy of mind, something I know hardly anything about. I attempted to engage with you on that subject, but you eventually refused, instead shifting focus to technical aspects of the question in which you put forward many inaccuracies. I did get pretty frustrated being talked down to by someone who is saying things that are simply wrong. I should have made my knowledge base more clear at the outset, and been more clear about what I was hoping to learn.

The inaccuracies I'm referring to (claiming LLMs are lookup tables, are not neural nets, cannot continue to learn, are Markov chains) are not "papercuts"; they are fundamental to your dismissal of my discussion, and they show a lack of understanding of the problem that I was hoping to discuss. If we're getting to the point of psychoanalyzing one another, then I might guess that you brought these terms up in an attempt to shut me down with math-y-sounding words that came up in your casual reading.

My use of the word "think" was an attempt to be polite. I'll rephrase: you cannot reasonably claim that existing LLMs have nothing in common with brains. The sources you have provided do not claim they have nothing in common with brains, nor do they posit any insurmountable obstacle to them becoming brain-like. YOU are hand-waving, YOU are starting from what you feel should be right, and YOU are too focused on rhetoric to stop and examine whether you might be wrong. We may be commenting on an article from one expert saying it's a bad idea, but appealing to experts in general agreeing with you is treating this as if it's a solved issue. It is not.

This was my first attempt at having a discussion on the internet in five-ish years, and I think you put me off for good. I understand, given what you said just now (re: me engaging in Reddit-speak), if you're done responding. I don't intend to write another post, but I want to let you know that if you do have more to say, I will read it, just because I think it's unfair to try and get the last word and then take my ball and go home.

am_Unition  ·  366 days ago

Please stick around, I enjoyed your contributions!