That's not accurate, though. If you take an "equally reductionist view" of the human mind, you are deliberately misunderstanding the human mind. Reducing Atlas (or GPT) to a set of instructions isn't reductionist, it's precise. It's literally exactly how these machines work. It's not a model, it's not an approximation, it's not an analogy, it's the literal, complete truth.

To be clear: we do not have the understanding of cognizance or biological "thought" necessary to reduce its function to this level. Not by a country mile. More than that, the computational approach we thought most closely matched "thinking" - neural networks - does not work for LLMs.

To summarize:
- We only have a rough idea how brains think
- We have an explicit and complete idea how LLMs work
- What we do know does not match how LLMs work
- Attempting to run an LLM the way the brain works fails

And all of that is hand-wavey, poorly understood, broad-strokes "here are our theories" territory. We know, for example, that there's a hierarchy to sensory input and autonomic physiological response - smell is deeper in the brain than sight, sound, touch, or taste, and has greater effects on recall. Why? Because we evolved smell before we evolved binocular color vision or stereoscopic hearing, and the closer to survival, the more reptilian our thought processes go. This hasn't been evolutionarily advantageous for several thousand generations, and yet here we are - with the rest of our senses and thought processes compensating in various ways that we barely understand.

Atlas really is as simple as a bunch of code. I say that having helped set up an industrial robot that uses many of the same parts as Boston Dynamics does. It speaks the same code as my CNC mill. It goes step by step through "do I measure something" or "do I move something." GPT-whatever is the same: "if I see this group of data, I follow up with this group of data, as modified by this coefficient that gets tweaked black-box style depending on what results are desired."
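To make that "group of data, modified by coefficients" description concrete, here's a minimal sketch of next-token prediction. The vocabulary and weights are invented for illustration; a real LLM learns billions of such coefficients from training data instead of six hand-coded words, but the mechanical shape is the same: score every candidate continuation, then pick one.

```python
import math
import random

# Toy next-token predictor: a bigram table. Given the token seen so
# far, score every possible next token with stored coefficients, then
# sample from the scores. The words and weights below are made up.
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

# WEIGHTS[i][j] = score for VOCAB[j] following VOCAB[i]. In a real
# model these coefficients get "tweaked black-box style" in training
# until the outputs look right.
WEIGHTS = [
    # the   cat   sat   on    mat   .
    [0.1,  2.0,  0.1,  0.1,  1.0,  0.1],  # after "the"
    [0.1,  0.1,  2.0,  0.1,  0.1,  0.1],  # after "cat"
    [0.1,  0.1,  0.1,  2.0,  0.1,  0.1],  # after "sat"
    [2.0,  0.1,  0.1,  0.1,  0.1,  0.1],  # after "on"
    [0.1,  0.1,  0.1,  0.1,  0.1,  2.0],  # after "mat"
    [2.0,  0.1,  0.1,  0.1,  0.1,  0.1],  # after "."
]

def softmax(scores):
    """Turn raw scores into probabilities."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(token):
    """Sample the next token given only the current one."""
    row = WEIGHTS[VOCAB.index(token)]
    return random.choices(VOCAB, weights=softmax(row), k=1)[0]

def generate(start="the", max_len=10):
    out = [start]
    while out[-1] != "." and len(out) < max_len:
        out.append(next_token(out[-1]))
    return " ".join(out)

# e.g. "the cat sat on the mat ." (or a repeating loop - toy
# bigram models happily babble, which is rather the point)
print(generate())
```

No step in that loop means anything to the program; it is lookup, multiply, and emit, all the way down.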
But don't take my word for it: Manning is wrong, and is covering up his wrongness by saying "what's meaning anyway, maaaaaan?" Arguing that kids figure out language in a self-supervised way has been wrong since Piaget.

Okay, where do you draw the line? 'cuz the line has to be drawn. You can't go through your day without encountering dozens or hundreds of antibacterial products or processes, for example. I don't care how vegan you are, you kill millions of living organisms every time you breathe. Bacteria are irreducibly alive: they respond to stimuli, they consume energy, they reproduce. The act of making more bacteria is effortful. ChatGPT? I can reproduce that endlessly. The cost of 2 ChatGPTs is the same as the cost of 1 ChatGPT. Neither can exist without an exquisite host custom-curated by me. It won't reproduce, generations of ChatGPT won't evolve, and there is no "living" ChatGPT to distinguish from a "dead" ChatGPT. Why does ChatGPT deserve more rights than bacteria?

I find that ethical individuals have the ability to query their ethics, and unethical individuals have the ability to query the existence of ethics. Put it this way: I can explain why humans occupy a higher ethical value than bacteria. I have explained why bacteria occupy a higher ethical value than ChatGPT. But I also don't have to explain this stuff to most people.

Look: golems are cautionary tales about humans making not-human things that look and act human and fuck us all up. Golems are also not the earliest example of this: making non-humans that act human is in the Epic of Gilgamesh. It's the origin story of Abrahamic religion: God breathed life into Adam and Eve, while Satan just is. This is a basic dividing line in ethics: people who know what people are, and people who don't. This isn't a "me" question.

Yeah, and taking an astrological view of the solar system allows for the position of Mars to influence my luck. That doesn't make astrology factual, accurate, or useful. Here's the big note: how "you feel" and how "things work" are not automatically aligned. If you are wrong, you will not be able to triangulate your way to right. And this entire discussion is people going "yeah, but I don't like those facts." The facts really don't care.

I guess what I'm getting at is the idea that one could take an equally reductionist view of the human mind; it's just that our brains are optimized beyond the point of being interpretable.
We see, we hear, we feel, we smell, and all of that information is plugged into a complex logical system, along with our memories, categories, and evolved instincts, to dictate our actions.
What about Christopher Manning, the other computational linguist mentioned in this article?
The central disconnect that I'm interested in learning more about is the idea of humans as exceptional in deserving our respect and compassion.
I'm gathering that you agree with Bender in wanting to posit humanity as an axiom.
I kind of take a panpsychist view on consciousness, and fundamentally what I'm arguing is that this perspective allows for manmade constructs like computer programs to attain some degree of consciousness.