alpha0  ·  435 days ago  ·  link  ·    ·  parent  ·  post: AutoGPT

Why fascinating?

The relationships between structural terms are learned, so e.g. author -> paper pairings can be correctly generated. Subject domains also have natural patterns and semantic similarity. So right off the bat you have both form and content patterns.

What it cannot learn in training -- the actual notion of existence, as opposed to the mere textual expression of the concept -- allows for creating believable, superficially credible fantasies. You may find that it mostly gets the authors right while the papers are made up. It will rarely, if ever, misplace domain experts: you will not get a response listing a biologist writing a physics paper. But the paper itself is completely up for grabs. All it needs is a credible title and possibly a date. (Both of these will map to tight clusters in some semantic vector space.)
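A toy sketch of the "tight clusters" point, using hypothetical hand-made 3-d topic vectors in place of real embeddings: a fabricated physics title sits almost on top of a real one in the vector space, while a biology title sits far away. That is exactly why invented papers read as credible while domain experts are rarely misplaced.

```python
import math

def cosine(a, b):
    # cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical 3-d "topic" vectors: (physics-ness, biology-ness, recency).
# Real embeddings have thousands of dimensions; the geometry is the same idea.
real_physics_title = (0.9, 0.1, 0.5)
fabricated_physics_title = (0.85, 0.15, 0.6)  # invented paper, same cluster
real_biology_title = (0.1, 0.9, 0.5)

print(cosine(real_physics_title, fabricated_physics_title))  # ~0.99, same cluster
print(cosine(real_physics_title, real_biology_title))        # ~0.40, different domain
```

The vectors and dimensions here are invented for illustration only; the point is that "credible title" is just "short distance to the real cluster", which a model can hit without the paper existing.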

What is interesting is what this 'long tail of lies and misunderstandings' will mean in economic terms. Consider the human-replacement proposal: for every arc in a processing/operational graph -- resource -> (intelligent) processing -> product -- if the AI replacement is not 100% reliable, the pre-existing setup must be maintained as a fallback. So if the arc is highly complex, the value proposition is less attractive. The key, imo, is decomposing these processing/operating graphs into the simplest of transitions, so that the slow path can either be trivially (re)created or addressed with a specialised ML box handling that simpler task. After that, the main remaining question is energy costs.
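The decomposition argument can be sketched as a back-of-the-envelope cost model. All numbers below are made-up illustrations, not data: the point is that a complex arc forces you to carry a large standing fallback cost, while simple arcs make the fallback nearly free, so the same per-arc reliability yields a much better total.

```python
def expected_cost(ai_cost, human_cost, reliability, fallback_maintenance):
    # The AI step succeeds with probability `reliability`; the failing
    # fraction still goes through the human path, and the human fallback
    # carries a standing maintenance cost whether or not it is used.
    return ai_cost + (1 - reliability) * human_cost + fallback_maintenance

# One complex arc at 95% reliability: the old setup must be kept intact,
# so fallback maintenance dominates.  (All costs are arbitrary units.)
complex_arc = expected_cost(ai_cost=1.0, human_cost=20.0,
                            reliability=0.95, fallback_maintenance=10.0)

# The same work decomposed into five simple transitions, each trivially
# (re)creatable, so the standing fallback cost per arc is near zero.
simple_arcs = sum(expected_cost(ai_cost=0.2, human_cost=2.0,
                                reliability=0.95, fallback_maintenance=0.5)
                  for _ in range(5))

print(complex_arc, simple_arcs)  # 12.0 vs 4.0 in these illustrative units
```

Same reliability per arc, but decomposition cuts the expected cost by two thirds in this toy setup, because the fallback for a trivial transition is trivial to keep around.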

+ (We -know- it is not conscious; there is no always-on runtime -- at best a sporadic zombie. The more reasonable question is whether it is sentient. I think not, but that is opinion.)