Had this same thought yesterday and I find it infinitely interesting as I was just a wee lad during the dot-com bubble. On a basic human level, risk management is out the window when possibilities start getting invested in en masse and taken at face value. Friend of mine just got a $9mn check from a very well-known name in AI for the absolute dumbest piece of shit AI app I have ever witnessed.

On the other hand, taking a look at big-picture credit risk, I wonder if the collaborative effort of some of these leading companies (investing in infrastructure, integrating into the grid, investing in nuclear, consolidation also on the lender's side) is a result of lessons learned from dot-com? In other words, the leading AI companies are practically sharing everything except the models and the physical data center buildings themselves, no? I wonder if this ecosystem is a sufficient hedge when the fat gets cut, or is it significantly more dangerous than dot-com, all else equal?
So there was the dot-com bubble, whereby it made sense for AOL to buy Time-Warner, and there was the telecom bubble, whereby Enron pivoted to calling themselves an "information company" and gajillions of miles of fiber were pulled without any consideration as to the last-mile connection to it. The dot-com bubble was all about "someday it will make sense to buy dogfood on the internet and that time is now" while the telecom bubble was all about "someday it will make sense to stream content on demand and that time is now." The dogfood didn't make sense until Republicans had crushed labor protections to the point where "the gig economy" was aspirational rather than dystopian, but the "stream everything everywhere" canard was... well, I mean, watch:

The telecom bubble happened when everybody realized that it doesn't matter if you have a massive fiber backbone everywhere; you need to connect everyone's houses to it, and that didn't happen until massive investment in coax codecs allowed a big bunch of sideband stuff to happen on Comcast etc. fifteen years later. Verizon pulled a bunch of fiber-to-the-home back in 2005-2006 to connect to that had-been-dead-for-ten-years fiber, but they lost money on it. There was a screamingly obvious use case for FTTH but Netflix had just started and they were exclusively sending DVDs through the mail. The whole reason everyone pushed so hard into 5G is it's effectively the last mile without having to pull fiber or cable, which (A) means "fuck you Comcast" and (B) was thirty fucking years into the future when Global Crossing and Worldcom started selling their scifi future.

The pie-in-the-sky vision for AI is "like Siri but slightly better." There was no part of AT&T's vision for the future that required 22 Hoover Dams' worth of additional grid capacity. That's 90 billion kilowatt-hours, by the way, 600W of solar power generation for every household in America, to generate entirely un-dank memes. The cost-benefit analysis here is fucking dumb. Fuckin' Grok and his buds are projected to increase the power consumption of the entire country by ten percent in order to... what? Cheat on high school English tests better? Using assets that decrease in value by 30 percent per year? Everyone's use case for AI is "answer this question for me" and it fucking sucks at that. It's mostly useful for people who don't know how to use a search engine and aren't savvy enough to know when AI is lying to them. How the fuck is this investment going to amortize?
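If you want to check the order of magnitude on those numbers yourself, here's a rough back-of-envelope in Python. The inputs are my assumptions, not figures from this thread: Hoover Dam at roughly 4 TWh/year, about 131 million US households, and a ~17% capacity factor for residential solar.

```python
# Rough sanity check of the "22 Hoover Dams / 600W of solar per household" framing.
# All constants below are assumptions, not numbers taken from the comment above.

HOOVER_DAM_TWH_PER_YEAR = 4.0    # typical annual generation, ~4 billion kWh
US_HOUSEHOLDS = 131_000_000      # rough count of US households
SOLAR_CAPACITY_FACTOR = 0.17     # rough average for residential panels

claimed_twh = 90.0               # "90 billion kilowatt-hours" = 90 TWh per year

hoover_dams = claimed_twh / HOOVER_DAM_TWH_PER_YEAR

# Nameplate solar wattage per household needed to produce 90 TWh/year:
# convert TWh/year to average watts, then divide out capacity factor and households.
watts_per_household = claimed_twh * 1e12 / 8760 / SOLAR_CAPACITY_FACTOR / US_HOUSEHOLDS

print(f"{hoover_dams:.0f} Hoover Dams")                        # ~22
print(f"{watts_per_household:.0f} W of solar per household")   # ~460 W
```

With those assumptions it comes out to roughly 460 W of nameplate solar per household; use a lower capacity factor and you land right around the 600 W figure. Either way, same ballpark.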
Very cool background, thanks for the history. Mainly labor and consumer marketing, right? Dramatically reduced need for skilled labor to build and ship a product, or deliver a payment-worthy service at scale. Dramatically increased impact and faster "rpm" of efforts to capture consumer spending or attention.

Edit: An anecdote to explain where my POV comes from: I just left the Army a month ago. I'm dealing with the possibility that I am prepared for exactly 0% of the current and near-future reality of business and the ability to make meaningful money, because of the AI revolution. I know zero things about computer science. This weekend I completed putting together about 50% of the tech stack required for a piece of software I had an idea about while smoking my celebratory post-Army joint, entirely using Claude Code and then ChatGPT to guide me through the rest. Most of my peers that got out of the Army are starting their MBA programs right now, and I wouldn't be surprised if I surpass their marketability in the private sector by the time they graduate.
This is such a good question though, because it feels like we're still in the Fucking Around stage and are slowly entering the Finding Out stage. Despite getting regularly starry-eyed at times around AI, I do fundamentally agree with kleinbl00 that AI will never get down to the last 20%, 10%, 5% of what we'd like it to do or be.

The strongest argument for being able to get down there came from an MIT prof I listened to on a podcast a while ago, who believes the "reasoning" paradigm is the fundamental discovery we need to get to some kind of jobs-market-destroying AGI. IF you can draw the right conclusions eventually by "reasoning" well enough, e.g. what happened with the IMO recently, AND we can significantly improve this reasoning process (which was only really invented months ago) over the next decade, then we might be able to get pretty far.

However - it's not really reasoning, is it? It's spitting out words it has seen before in that pattern. The IMO result was achieved through a Deep-Research-esque train of thought, but instead of 10 minutes it thought for hours, producing (multiple?) book-length texts to eventually reach the right conclusion. But what people need to understand about the IMO is that you're not doing some complex math equation; you're almost always writing a proof about something. It's the math equivalent of writing a case for a trial. Which means that the LLMs never actually need to calculate anything; they can just talk their way through it, which is something they can excel at. But it does not mean they can do Fourier transforms too. Or any other calculation for that matter, which is one of the points of that Apple paper.

So the solution space for LLMs is jagged and disjointed, to such a degree that it puts your most gerrymandered voting district to shame. If you have a straightforward set of problems that are mediated mostly or entirely through text and data, it can do remarkably well. If you just straight up give it the text it needs to mangle, because some LLMs now can handle up to a million tokens, it can consistently produce hallucination-free results. But if you want to deviate into uncharted territory... well, good luck.

Part of the reason LLMs are jagged/disjointed is that every LLM is a lossy JPEG of the internet. It knows approximately what things are, and "things" here includes language and the structure of data. It will not be able to produce much if anything outside of the bounds of a blurry version of all the words we've dumped on the Web. What AI is really good at is "a B+ version of a thing you could also find on the web / in a book". So yeah, it'll write a website for you, it'll write a LinkedIn post, it'll do your homework. But again: a "thing" here also includes language and the structure of data, so it's also really good at summarizing, at changing text, at finding related words and questions.

If the way you add value is solely or mostly by spitting back words at other people based on what they say and a knowledge base (e.g. call centers, translators, but also consultants, coaches, teachers), I'd be worried. But expertise that isn't purely mediated through screens or data will be fine for the foreseeable future if you ask me.
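For what "give it the text it needs to mangle" looks like in practice, here's a minimal sketch using the OpenAI Python client as one example; the model name and file path are placeholders of mine, not anything from this thread, and other vendors' APIs work the same way.

```python
# Sketch: grounding an LLM by pasting the source document into the prompt,
# rather than asking it to recall facts from its training data.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set,
# and "report.txt" stands in for whatever document you want worked over.
from openai import OpenAI

client = OpenAI()

with open("report.txt", encoding="utf-8") as f:
    source_text = f.read()  # long-context models will accept very large documents here

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Answer using only the document provided. If the document "
                    "doesn't contain the answer, say so."},
        {"role": "user",
         "content": f"Document:\n{source_text}\n\nSummarize the key findings."},
    ],
)

print(response.choices[0].message.content)
```

The point of the pattern is that the model is rewriting text it was handed, which is the thing it's genuinely good at, rather than reconstructing facts from its blurry compression of the web.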
That "reasoning" paradigm used to be called "AI." It's recently shifted to being called "symbolic AI." And the shitty thing about "symbolic AI" is it's been five or ten years away since ELIZA. It doesn't scale. Throwing chips at it doesn't make it better. Throwing clock cycles at it doesn't make it faster. Throwing money at it doesn't make it more useful. This is why it was tossed aside for LLMs - when we were way back on the asymptote they appeared to scale like gangbusters! Even now you can get impressive results out of tiny models if you're willing to wait. And are willing to put up with them making shit up.

I realized recently that my primary disappointment with AI is how fucking boring it is. What the fuck am I looking at? I honestly don't know. And that's the best part. AI used to surprise you. It used to show you things that had never been there before, made from things that were always there. There was a time when the question was "where's it going to go." It's never been anything but an extraordinarily fancy index, but for a while there, truly interesting shit could happen by interrogating that index in places it had never been parsed. That wasn't useful to sell for $5.99/mo Chinese test-taker API calls, though, so it got wallpapered over in fits and starts. Everywhere AI can be interesting has been cut out of the index in favor of everywhere AI can be eHow. LLMs as originally implemented could give you insight into the uncharted territory between well-trod paths. LLMs as they are sold now have all that space wallpapered over with content calls.

I mean, sure. Fuck yeah, math olympiad. But if it can't fuckin' figure out the Tower of Hanoi, who fucking cares? And I mean... we had the Internet. We could look up solutions to the Tower of Hanoi. We could look up solutions to Math Olympiad proofs if we needed them. Now we can't even look that shit up anymore because it's all been poisoned with AI bullshit.

And that was the point, really. There was no way to crack Google's search dominance without turning the Internet into a Tower of fucking Babel. So Microsoft hired OpenAI to poison the shit out of it, OpenAI fundraised like a mutherfucker on the idea that search engines were people too, fuckin' Musk thinks the solution to all the angry incels is to build a waifu into his social network, and the mufukkin' economy had chunks carved off of it because someone asked "hey siri give me a tariff platform."
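For context on why "can't figure out the Tower of Hanoi" stings: the puzzle has had a textbook recursive solution for decades, the kind of thing a few lines of ordinary code knocks out without any reasoning tokens at all. A minimal sketch (the function name is mine):

```python
def hanoi(n, source, target, spare):
    """Print the moves that transfer n disks from source to target.

    Classic recursion: move n-1 disks out of the way, move the largest
    disk, then move the n-1 disks back on top. Takes 2**n - 1 moves.
    """
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)

hanoi(3, "A", "C", "B")  # prints the 7 moves for three disks
```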


Your POV is invaluable and I ain't gonna begin to question it. What I will do is drag this out into the light: You're talking software now. And what you're talking about is the ability to blue-sky something that didn't exist before. And as veen and I have discussed at length, if you aren't any good at coding, AI is probably better than you. And if you're willing to settle for AI's approximation of your wants and needs, you saved a lot of time and effort. But what if you actually know what the fuck you're doing?

Story time. So I used to mix Big Brother. I mixed Big Brother for like ten years? Something ridiculous like 16 seasons. And 80% of that show is "match the fader to the face." The other 20% can be excruciatingly tricky, and everyone in that room was fucking top of their game. We had a guy come out to mix the live show. Hot shit. Name of Klaus. And Klaus wasn't interested in debriefing with us; he knew if you had a lot of mics you put it on the automix, and who the fuck did we think we were? So Klaus, in between changes and other shit, just put the houseguests on automix, not knowing that the time he's doing set changes and stuff is when everyone on the show realizes they may be about to spend 20 hours hanging from a carrot or some shit. So while he's got that shit on automix, and he's featuring the evicted houseguest and the I-shit-you-not WIFE of the CHAIRMAN of PARAMOUNT, somebody breaks off from the pack and pees like a racehorse. Which drowns out the conversation the evicted guy and the WIFE of the CHAIRMAN of PARAMOUNT were having. On live TV. The censors had to bleep 30 straight seconds of audio before Klaus figured out what the fuck just happened to his career.

'cuz here's the thing - yeah, you can fumblefuck your way to some mediocre app through liberal applications of ChatGPT. But you'll never hit anything mission critical and, more than that, you'll never learn anything mission critical, and you'll never be exposed to anything mission critical, and you will sit there like a lump of shit for the rest of your life, "prompt engineering" your way through the Peter Principle.

_____________________________________________

AI is a solid B-minus at everything it does. If you don't really need it, it won't really give you any reason to complain. It'll make mid-grade memes, it'll write mid-grade essays, it'll come up with mid-grade recipes for your leftovers without even mixing bleach and ammonia anymore. It's eHow for anything you ask of it. And sometimes you need eHow! But not very often! Because in general you can get to 80% without putting a whole lot of effort into it anyway! It's that last 20% that kicks your ass and AI will never get there. Never ever. Never ever ever ever ever; it's been asymptotic for two years now.

What's an MBA for, anyway? I know several. They learn some business cases. You know business cases. You're not going anywhere in investment banking without nepotism anyway, and your best case with an MBA is what, Bain? It sure isn't McKinsey. Either way, you're basically learning how to be a suckup, because suckups are necessary when people need to make hard decisions and need to blame someone else. The whole goal of an MBA program is to rub your shoulders against people who will pay you someday, and frankly the curriculum could be replaced with advanced martini-drinking and nothing of value would be lost.

I'm a jobs creator at this point. I think we're at like 11 employees?
And I can divide them into two classes: people who do what they're told and are good at it, and people who solve problems and are good at it.

The do-what-they're-told crew are mine forever, or until they move. We're a good place to work, we treat people with respect, people like working for us. Until their life circumstances change they will gladly exchange their skilled labor for my money.

The problem solvers are gone as soon as we run out of problems for them to solve. Because it doesn't matter what their degree is in, it doesn't matter what their training is: they're absorbing the world, figuring out what needs to be done, and offering suggestions and a course of action along with information about the problem they've encountered. We miss them when they're gone but they never belonged to us; they were just under our roof while they fledged their wings. And that's fine too. We don't have enough room for them to grow as much as they should, and we try very hard to ensure they leave with more skills than they came in with.

AI will give you answers. It won't teach you how to ask questions. If you can focus on learning how to learn, you have exactly fuckall to fear from AI.