veen · 36 days ago · post: What’ll happen if we spend nearly $3tn on data centres no one needs?

    I'm dealing with the possibility that I am prepared for exactly 0% of the current and near-future reality of business and the ability to make meaningful money, because of the AI revolution.

This is such a good question though, because it feels like we're still in the Fucking Around stage and are slowly entering the Finding Out stage. Despite regularly getting starry-eyed about AI, I do fundamentally agree with kleinbl00 that AI will never get down to the last 20%, 10%, 5% of what we'd like it to do or be.

The strongest argument for being able to get down there came from an MIT prof I listened to on a podcast a while ago, who believes the "reasoning" paradigm is the fundamental discovery we need to get to some kind of jobs-market-destroying AGI. IF you can eventually draw the right conclusions by "reasoning" well enough (e.g. what happened with the IMO recently), AND we can significantly improve this reasoning process, which was only really invented months ago, over the next decade, then we might be able to get pretty far.

However - it's not really reasoning, is it? It's spitting out words it has seen before in that pattern. The IMO result was achieved through a Deep-Research-esque train of thought, but instead of ten minutes it thought for hours, producing (multiple?) books' worth of text to eventually reach the right conclusion.
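For a sense of what "that pattern" means at its crudest: a bigram model, the toy ancestor of the idea. Real LLMs are incomparably more sophisticated, but the core move - emit whatever tended to follow in the training text - is recognizable. A minimal sketch (the corpus and names are mine, purely for illustration):

```python
import random
from collections import defaultdict

# Toy corpus standing in for "the internet".
corpus = "the model writes the proof and the model writes the post".split()

# Record which word follows which: the crudest possible
# "pattern of words it has seen before".
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def babble(start, length=8):
    """Emit words by sampling what followed them in the corpus.
    Nothing is reasoned about; seen patterns are replayed."""
    out = [start]
    while len(out) < length and follows[out[-1]]:
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

print(babble("the"))  # e.g. "the model writes the proof and the model"
```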

But what people need to understand about the IMO is that you're not solving some complex math equation; you're almost always writing a proof about something. It's the math equivalent of writing a case for a trial. Which means that the LLMs never actually need to calculate anything; they can just talk their way through it, which is something they can excel at. But it does not mean they can do Fourier transforms too. Or any other calculation for that matter, which is one of the points of that Apple paper.
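To make "actually calculate" concrete: even a small Fourier transform is exact complex arithmetic, not plausible-sounding sentences. A minimal sketch of the kind of grind I mean (the function and the toy signal are mine, purely for illustration):

```python
import cmath
import math

def dft(signal):
    """Discrete Fourier transform done the long way: each output
    bin is an exact sum of complex products. You cannot talk your
    way to these numbers; you have to do the arithmetic."""
    n = len(signal)
    return [
        sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        for k in range(n)
    ]

# Toy input: one full cycle of a sine wave over 8 samples.
samples = [math.sin(2 * math.pi * t / 8) for t in range(8)]

# Bins 1 and 7 (a conjugate pair) should hold all the energy.
for k, coeff in enumerate(dft(samples)):
    print(f"bin {k}: magnitude {abs(coeff):.3f}")
```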

So the solution space for LLMs is jagged and disjointed, to such a degree that it puts your most gerrymandered voting district to shame. If you have a straightforward set of problems that are mediated mostly or entirely through text and data, an LLM can do remarkably well. If you just straight up give it the text it needs to mangle, because some LLMs can now handle up to a million tokens of context, it can consistently produce hallucination-free results.

But if you want to deviate into uncharted territory... well, good luck. Part of the reason LLMs are jagged/disjointed is that every LLM is a lossy JPEG of the internet. It knows approximately what things are, and "things" here includes language and the structure of data. It will not be able to produce much, if anything, outside the bounds of a blurry version of all the words we've dumped on the Web.
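The mechanism behind that analogy is easy to demo: transform, throw away most of the coefficients, transform back. The broad strokes survive; the fine detail is just gone. A minimal sketch with numpy (the signal and the keep-10% cutoff are arbitrary picks of mine):

```python
import numpy as np

# A "detailed" signal: a smooth trend plus fine wiggles.
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.2 * np.sin(2 * np.pi * 40 * t)

# JPEG-style lossy compression: go to the frequency domain and
# keep only the lowest 10% of the coefficients.
spectrum = np.fft.rfft(signal)
cutoff = int(len(spectrum) * 0.1)
spectrum[cutoff:] = 0
blurry = np.fft.irfft(spectrum, n=len(signal))

# The broad trend is reconstructed almost perfectly...
print(np.allclose(blurry, np.sin(2 * np.pi * 3 * t)))    # True
# ...but the fine wiggles have vanished entirely.
print(round(float(np.max(np.abs(signal - blurry))), 3))  # ~0.2
```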

What AI is really good at is "a B+ version of a thing you could also find on the web / in a book". So yeah, it'll write a website for you, it'll write a LinkedIn post, it'll do your homework. But again: a "thing" here also includes language and the structure of data, so it's also really good at summarizing, at rewriting text, at finding related words and questions. If the way you add value is solely or mostly by spitting words back at other people, based on what they say plus a knowledge base (e.g. call centers and translators, but also consultants, coaches, teachers), I'd be worried. But expertise that isn't purely mediated through screens or data will be fine for the foreseeable future, if you ask me.