alpha0  ·  613 days ago  ·  link  ·    ·  parent  ·  post: AutoGPT

I've thought about Turing's idea more critically since the public advent of GPT and have reached some contrary conclusions.

First, let's assume that the notion of 'learning by observing and interacting' is understood in its technical sense as promoted by the AGI (sic) camp: a machine, like a human, achieves thought and consciousness, becomes a mind, via learning mechanism(s). So whatever it is that we humans mentally experience is engendered by a learning process fully mediated by the sensory apparatus.

Now an interesting question comes up: why are we certain that a random humanoid we meet (whose birth we did not witness, and whose provenance is thus unknown), regardless of their apparent level of intelligence, is a conscious being? The sensory apparatus mediating our learning regimen from infancy has only ever conveyed (superficially) measurable information. So it is purely an 'image'. And we project meaning onto images. This is what we do.

The only reason one assumes that the other person is conscious is that we assume they are like us: "It's just like me. I am conscious, so they must be too." I think our friend René's formulation may see something of a philosophical resurgence. That is the -only- reason we unquestioningly accept that the other humanoid is conscious as well.

If you're with me so far, then you may agree that Turing's idea is fundamentally flawed. Until and unless we can nail down consciousness definitively, we will never be able to test for it via information exchange (interaction). Because our minds, we know, have been 'trained' on only superficial evidence, we are by definition unlettered in the art of determining the existence of minds in objects.