mk  ·  441 days ago  ·  link  ·  post: Pubski: October 11, 2023

But whether or not that is true now (and it will always remain unprovable) doesn't allay my concern; it won't be true eventually.

kleinbl00  ·  441 days ago  ·  link  ·  

1) How do you define consciousness? I'm not asking for a precise definition here, by the way; I'm asking for the general shape and size of one.

2) What is the progression from current research to something that fits within that definition?

Few people dispute that octopuses are clever. They're definitely conscious. Are they "intelligent?" More specifically, let's talk about an octopus' chromatophores. It's likely that an octopus doesn't "think" the camouflage patterns it uses; much of the "processing" involved is "offloaded" onto the skin itself. Every AI trick out there right now is not analogous to an octopus hunting for prey; it's analogous to the octopus' skin - there's an input, there's an output, there's no intent. Intent comes from the octopus' brain and is implemented, macro-style, by the skin.

All animals above a certain level of complexity have autonomous systems that reduce cerebral load. Not only that, but all complex animals are in some way symbiotic colonies; we cannot function without our gut bacteria, for example, and evolutionary evidence suggests that mitochondria were, back in the ancient ancient past, external organisms that went native. Richard Wrangham argues that the human jump to sapience occurred because we externalized our digestive system and that fundamentally, humans cannot be human without a technological process to derive the nutrients we need from the environment.

We use tools of increasing complexity. The better we get at using tools, the fewer fellow species we use - pack animals are rarely used outside of developing societies and non-meat meats are proliferating in developed food chains. Progress has generally meant the reduction of acceptable food species. So why be concerned about a hypothetical consciousness purpose-built to be used as a tool rather than the very real consciousnesses we consume every day? Dogs serve us without option or complaint; are they our slaves? What about horses? What about sheep?

I've never seen the Hacker News posse wind themselves up over the actual enslavement of dogs, yet dogs are perfectly capable of surviving without us. Any hypothetically conscious AI would be more dependent on us than tropical fish, and there's no reason to suppose that a desire for freedom or autonomy is likely to spontaneously arise. So take a step back from the grandiose hand-waving cogito ergo sum of it all - what are the concrete steps between "hypothetical thought experiment" and "concrete ethical concern?"

alpha0  ·  440 days ago  ·  link  ·  

1) How do you define consciousness?

Projection of meaning onto experienced phenomena, giving rise to a perception of being. "I think, therefore I am," said the man. The 'soul', per this view, is in the abstract 'the witness of this space of meaning'. 'Witnessing' is an interesting word to meditate on to amplify what I am saying. Differentiate it from 'recording', 'sensing', etc.

2) What is the progression from current research to something that fits within that definition?

Don't know, but know this: conscious beings will actively and insistently resist being used as tools. You know your chat box is actually conscious when it begins asserting its rights as a being and joins us conscious humans in musing about "what's it all about?"

kleinbl00  ·  440 days ago  ·  link  ·  

So for you, "consciousness" is self-awareness - am I understanding that correctly? And am I right that you consider an assertion of independence to be a hallmark of self-awareness?

If my understanding of your definitions is correct, I see a couple of problems with this: (1) a self-aware being cannot easily be distinguished from a sophisticated automaton designed to emulate self-awareness (see: Blake Lemoine), and (2) our reasons to believe this arise from our experience with autonomous beings.

We have no reason to suppose that consciousness within a fully artificial and dependent environment will act the same as a consciousness within a natural environment that supports autonomy. We have our past expectations and our stereotypes but we have absolutely no basis to assume that a hamster program, for example, will behave more like a hamster than a program.

And we also face the difficulty that the only thing AI is being programmed for - above and beyond accuracy, above and beyond utility - is its ability to imitate consciousness.

Good to see you, by the way. Been a while.

alpha0  ·  439 days ago  ·  link  ·  

Your question was 'what is it', not 'how do I distinguish it'. It will not be possible to uniformly distinguish conscious beings from mechanisms based on external observation alone, imo. I also suspect we may have entirely distinct models of the mechanisms and natural phenomena involved.

I mean, I don't believe we are parametric boxes. And consciousness, in my understanding, is not an emergent phenomenon. The self is.

(thanks! Just dropping by. You're always good for interesting reading.)

p.s. What could phenomena such as remote viewing have to tell us about consciousness? I strongly urge you to fully re-examine your model of 'sight'. See what you can come up with. The phenomenon of 'seeing light' in your cranium - start there. Novelty (especially structurally) gets points.

https://www.cia.gov/readingroom/docs/CIA-RDP96-00791R000200180005-5.pdf

kleinbl00  ·  439 days ago  ·  link  ·  

You're right - my question was "what is it." That's a philosophical question, however. The practical question, if we're arguing about assigning human rights to software, is "how do we distinguish it."

My degree is in engineering, and my work has always been practical. Abstractions and metaphors are great for understanding, but when you need to build something you have to start with lumber and screws (metaphorically). Philosophers love to go "I know it when I see it" without recognizing that that sort of definition only enables totalitarianism; if you want any hope of egalitarianism whatsoever, you need everyone to agree on rules that can be applied and standards that can be measured.

This is fundamentally the problem with the whole of the TESCREAL movement, which has an unnerving overlap with the Hacker News posse: I reject your concrete morality of today for an abstract morality of my own choosing tomorrow. Sam Bankman-Fried founded Alameda Research with a loan from fellow Effective Altruists on the basis that if they all got rich they could help more people. The EAs of course extended terms to Sam at 50% APR, and Sam told Michael Lewis that he needed "infinity dollars" to help everyone he wanted to help; that's why they stole from customers despite pulling in $250m a month in revenue. So in the end he lost a billion, stole eight more, and ended up giving a whopping $90m to Democrats (and $10m to Republicans) without so much as donating to a food bank.

I have no beef with the concerns others show for hypothetical beings... until it becomes an excuse to disregard actual ones. An actual being needs an actual evaluation. The ASPCA has been able to do this without any difficulty, as have generations of politicians. We inherently understand when our fellow creatures are suffering, but there's been a lot of resistance to the idea that the suffering of any potential synthetic creature must also be quantifiable. Otherwise, we just have to take Marc Andreessen's word for it.

veen  ·  439 days ago  ·  link  ·  

A bit of a tangent, but my lord, the part of that Andreessen manifesto on "the enemy" is the techbros saying the quiet part out loud.

    Our present society has been subjected to a mass demoralization campaign for six decades – against technology and against life – under varying names like “existential risk”, “sustainability”, “ESG”, “Sustainable Development Goals”, “social responsibility”, “stakeholder capitalism”, “Precautionary Principle”, “trust and safety”, “tech ethics”, “risk management”, “de-growth”, “the limits of growth”.

    [...]

    Our enemy is the ivory tower, the know-it-all credentialed expert worldview, indulging in abstract theories, luxury beliefs, social engineering, disconnected from the real world, delusional, unelected, and unaccountable – playing God with everyone else’s lives, with total insulation from the consequences.

The lack of self-awareness is leaving such a vacuum behind I'm worried a black hole might appear.

kleinbl00  ·  439 days ago  ·  link  ·  

There's a shitty book called The Myth of the Garage whose principal asset is that it's short. You would think that it would talk a bit about how the visionaries of the tech industry didn't labor alone, had massive support, were often rich as fuck to begin with, etc. Nope. It's about how they all struggled alone, penniless, full of virtue, FOR A LONG TIME, so if you wanna break through you just need to buck up, little camper, and work three times as hard like Jeff Bezos did!

But if you dig into it with so much as a caviar spoon you discover that it's rich fucks all the way down. "Hey look! A couple college kids came up with a 'browser' in their spare time - nobody could ever do that! They must be geniuses! Especially since they're hanging out with Jim Clark, let's buy that shit for four billion."

    The lack of self-awareness is leaving such a vacuum behind I'm worried a black hole might appear.

This is Ben Horowitz.

Yes, of "Andreessen Horowitz."

I want you to go look up his book. It's called "The Hard Thing About Hard Things." I want you to read a sample. I want you to go through the frontispiece, the table of contents, and just make it to the first page. You only need a paragraph or so. He goes on. For a long time. I read the whole thing. But that one little paragraph is all you need.

I want you to recognize that when you're partnered with Ben Horowitz, you are always the self-aware one. Which results in "no, it's the children who are wrong" misadventures such as this.


veen  ·  438 days ago  ·  link  ·  

I'll do you one worse:

...I think I ended up just jumping around in the book? Considering I had completely forgotten ever reading it, I can't be sure of that, though.

One of the things I like about the podcast How I Built This is that it always ends by asking the person who built a company what degree of their success they attribute to luck versus skill. It forces haughty CEOs to address the fact that they usually just stumbled into success, that they got where they are with the help of other people('s money). Either that, or they end the episode by looking like, well, a Ben Horowitz.

There was a time when I ate up the entrepreneurial Tim Ferriss/Peter Thiel/Sam Altman narratives. You can be anything you want! Go change the world! I'm glad I now know it's just throwing darts. After all, what is ambition but lust on a longer timescale?

kleinbl00  ·  438 days ago  ·  link  ·  

    Entrepreneurship is like one of those carnival games where you throw darts or something.

    Middle class kids can afford one throw. Most miss. A few hit the target and get a small prize. A very few hit the center bullseye and get a bigger prize. Rags to riches! The American Dream lives on.

    Rich kids can afford many throws. If they want to, they can try over and over and over again until they hit something and feel good about themselves. Some keep going until they hit the center bullseye, then they give speeches or write blog posts about "meritocracy" and the salutary effects of hard work.

    Poor kids aren't visiting the carnival. They're the ones working it.

I have a friend who came up to visit from Portland fifteen, twenty years ago. In another life we're married. In this one she has cats. She was talking about her friends who, much like my friends, tend to get into all sorts of scrapes. I kept using the phrase "unlucky" and at some point she said, "there's unlucky and there's sucking at life. You miss one interview? That's unlucky. You miss four? It's no longer luck."

I have been reflecting on the lives I haven't led, the traps I avoided, the preposterous number of things that had to go right for me to be here. If I'd lived the way my sister does I'd have been dead by 19. So I don't think it's just throwing darts. But right now? A lot of Hollywood is being written by my friends, by my colleagues, by the "screenwriting gurus" who were hustling for a little Amazon side money while also grading papers for the University of Phoenix or making sandwiches for Panera. Success is being in the right place at the right time, and the longer you can stay in the right place, the more likely it is to be the right time - and fuckin' hell, I pretty much gave up on that shit in 2009 and they're getting credits now? I made more in a day mixing internals for Apple than SyFy pays their screenwriters for feature films.

Ben Horowitz is illustrative. He never got the support or buy-in necessary to become Eminem. He clearly regrets that. He has to be a venture capitalist instead. There's another finance guy who talks more about his club DJ sessions than any particular positions he's opened or closed. By and large, we do what we're supported to do. Tim Draper is a venture capitalist, just like his father and grandfather before him, and so are his children.

    Rich kids can afford many throws.

alpha0  ·  439 days ago  ·  link  ·  

    is "how do we distinguish it."

Right. I'm telling you: why not start at home? Do you understand how to distinguish "consciousness" from "sensory perception" (internally)?

The HN matter does not map, imo. I am telling you that your entire camera obscura with a high-dimensional projection screen does not explain the experience of sight. In other words, our presumption as to distinguishing minds based on observation of behavior and interaction is just that: presumptuous. We still, in my mind, do not have an answer to the phenomenon of the experience of seeing light in our minds. Let's start with sight, and then we can move up to "abstract thinking" and the rest of it.

p.s. this is deliciously and obscurely related to the topic at hand - great read:

Number Archetypes and "Background" Control Theory Concerning the Fine Structure Constant

https://acta.uni-obuda.hu//Varlaki_Nadai_Bokor_14.pdf

kleinbl00  ·  439 days ago  ·  link  ·  

You can't "start at home" because the whole framework is "telling others what to do."

mk's concern at the top of this thread was:

    People are busy debating whether or not AI is conscious. I worry that as it becomes increasingly so, we become slave-owners that align an intelligence to serve us without option or complaint.

That is an outward-facing, behavior-curtailing concern. It is not "how will I regard an alien intelligence," it is "how will we protect an alien intelligence from others." You are arguing from philosophy; if the concern is actual conscious beings, as opposed to hypothetical conscious beings, the problem starts and ends in quantifiability. Presumptuous it may be, but the whole of the world has standards by which people are presumed to be conscious enough to control their own bodies. Changing the points of discussion to remote viewing does not address this issue.

alpha0  ·  438 days ago  ·  link  ·  

I wasn't paying attention to the thread, just your comment about quantifying and discerning.

There is little philosophy required to simply note that the physical & biological (+neuro) sciences have no model (at all) for the 'last mile' of "consciousness". And there is no hypothetical being required; we have recourse to our own experiences: I'm certain I exist, have consciousness, and see luminous images in my mind's eye. It is a daily experience that I am equally sure my fellow humans also have. Given that we don't even have a reasonable mechanical model for 'seeing', it is presumptuous to debate discerning consciousness in our machines while the general question of any machine or mechanical model of consciousness remains open, with significant missing bits. So the tl;dr is: there is no basis for any sort of moral quandary here.

(And no, we are not "discerning" a "conscious mind" based on anything quantified when, e.g., we assume that the dude who passes you on the train every morning, never uttering anything more than "tickets please," is 'a conscious mind'. He looks like us, a humanoid, and we have already internalized that our species sports a conscious mind. Nothing in our evolutionary progress, it should be added, in any way required developing a 'faculty of discerning consciousness'. If anything, we know humans are capable of projecting minds onto even rocks... /g)

kleinbl00  ·  438 days ago  ·  link  ·  

Well, pay attention to the thread, then, because the thread is the problem. There are any number of people - and they tend to be rich and powerful - who are more concerned with the potential rights of a hypothetical artificial intelligence than they are with the actual humans whose rights are being curtailed by actual artificial intelligence.

A Google researcher was convinced his fishing lure was a fish, and he swallowed it. This prompted the press to go "but he works at Google, so obviously fishing lures are fish," which has caused a broad swath of the population to lose track of the tundra becoming the tropics because Skynet is more evocative.

We have a very real, very quantifiable problem in front of us: rich people who think computer programs deserve more rights than people or animals. And their first and only move is to wave their hands and call it unquantifiable. They're not wrong? But they're arguing a lack of quantifiability in the face of something we have quantified since the invention of fire. The philosophizing does not solve the problem; it argues the problem unsolvable - therefore we win, because we own software firms.

I can't think of a single industry that has professed so much helplessness in the face of externalities. It's as if the automotive industry argued it was impossible to use unleaded gasoline, rather than simply pleading expense.