alpha0  ·  430 days ago  ·  post: Pubski: October 11, 2023

> is "how do we distinguish it."

Right. I'm asking: why not start at home? Do you understand how to distinguish "consciousness" from "sensory perception" (internally)?

The HN matter does not map, imo. I am telling you that your entire camera obscura with a high-dimensional projection screen does not explain the experience of sight. In other words, our presumption that we can distinguish minds based on observation of behavior and interaction is just that: a presumption. We still, in my mind, do not have an answer to the phenomenon of the experience of seeing light in our minds. Let's start with sight, and then we can move up to "abstract thinking" and the rest of it.

p.s. this is deliciously and obscurely related to the topic at hand - great read:

Number Archetypes and “Background” Control Theory Concerning the Fine Structure Constant

https://acta.uni-obuda.hu//Varlaki_Nadai_Bokor_14.pdf

kleinbl00  ·  430 days ago

You can't "start at home" because the whole framework is "telling others what to do."

mk's concern at the top of this thread was:

    People are busy debating whether or not AI is conscious. I worry that as it becomes increasingly so, we become slave-owners that align an intelligence to serve us without option or complaint.

That is an outward-facing, behavior-curtailing concern. It is not "how will I regard an alien intelligence," it is "how will we protect an alien intelligence from others." You are arguing from philosophy; if the concern is actual conscious beings, as opposed to hypothetical conscious beings, the problem starts and ends in quantifiability. Presumptuous it may be, but the whole of the world has standards by which people are presumed to be conscious enough to control their own bodies. Changing the points of discussion to remote viewing does not address this issue.

alpha0  ·  429 days ago

I wasn't paying attention to the thread, just your comment about quantifying and discerning.

There is little philosophy required to simply note that the physical and biological (and neuro) sciences have no model at all for the 'last mile' of "consciousness". And no hypothetical being is required; we have recourse to our own experience: I am certain that I exist, have consciousness, and see luminous images in my mind's eye. It is a daily experience that I am equally sure my fellow humans also have. Given that we don't even have a reasonable mechanical model for 'seeing', it is presumptuous to debate discerning consciousness in our machines while the general question of any machine or mechanical model of consciousness remains open, with significant pieces missing. So the tl;dr is: there is no basis for any sort of moral quandary here.

(And no, we are not "discerning" a "conscious mind" based on anything quantified when, e.g., we assume that the dude who passes you on the train every morning, never uttering anything more than "tickets please," is 'a conscious mind'. He looks like us, a humanoid, and we have already internalized that our species sports a conscious mind. Nothing in our evolutionary history, it should be added, in any way required developing a 'faculty of discerning consciousness'. If anything, we know humans are capable of projecting one onto even rocks. /g)

kleinbl00  ·  429 days ago

Well, pay attention to the thread, then, because the thread is the problem. There are any number of people - and they tend to be rich and powerful - who are more concerned with the potential rights of a hypothetical artificial intelligence than they are with the actual humans whose rights are being curtailed by actual artificial intelligence.

A Google researcher was convinced his fishing lure was a fish, and he swallowed it. This caused the press to run with "but he works at Google, obviously fishing lures are fish," which has caused a broad swath of the population to lose track of the tundra becoming the tropics, because Skynet is more evocative.

We have a very real, very quantifiable problem in front of us: rich people who think computer programs deserve more rights than people or animals. And their first and only move is to wave their hands and call it unquantifiable. They're not wrong? But they're arguing a lack of quantifiability in the face of something we have quantified since the invention of fire. The philosophizing does not solve the problem, it argues the problem unsolvable, therefore we win because we own software firms.

I can't think of a single industry that has professed so much helplessness in the face of externalities. It's as if the automotive industry argued it was impossible to use unleaded gasoline, rather than simply pleading expense.