What they did was demonstrate that making the gadget train itself algorithmically was pretty much as effective as running the gadget through a gauntlet of fine-tuning and tweaking, with certain important caveats.

https://x.com/koltregaskes/status/1881446103180062872

Much of the wypepo AI kerfuffle has been about "how badly does it fuck up when you give it stuff you know it's going to fuck up," with all the AI boosters constantly asserting "I'm sure it's just a glitch." All the buzz around DeepSeek is about the fact that if you aren't even vaguely testing for fuckups, China got there hella faster and cheaper than anybody else. "Created a model" is a great button to put on it - "all models are flawed, some models are useful."

I will again remind everyone that AI came for my job first. Dugan has been at it since 1974. Sabine Feedback Destroyers started showing up in the '90s. Izotope introduced "total mix" in 2010. And hey - a lot of DAWs will fuckin' transcribe now.

But you don't know how to use a feedback destroyer, and if you try using it without understanding it, your room will sound like shit. I watched a $1500/hr sound mixer lose his job for trusting a Dugan card over his own ears. Totalmix was such a catastrophe that Izotope burned it off the internet. And let's talk about those transcriptions, shall we?

I've been watching a lot of Hoarders lately, which is probably bad for me, but I noticed that a lot of the transcription team were guys I know I've worked with before. And I twigged to the fact that now that I can do it in the box, all those guys are going to be typing a lot less. NOT A FUCKING ONE OF THEM IS GOING TO LOSE THEIR JOB because if you need transcription, you need accurate transcription.

Lemme pull that out because I'm going to refer back to it:

If you need transcription, you need ACCURATE transcription.

See, I can now transcribe in the box, which means I can get transcriptions where I couldn't before. They're pretty close but they sure aren't ready to send; I need to tweak them. I can tweak, clearly. That extends my reach. Likewise, the transcription agencies are no doubt jumping all over AI transcription because it allows them to do more with less, lower their prices, increase their customer base and generally provide more for many - it's a job of terrible scutwork and experience-derived skillsets and they are CONSTANTLY looking for workers. And sure - there are outfits that are going to just use the AI without a transcription service, but they weren't using the transcription service before. It's an added bonus for them. Because if you NEED transcription, you need ACCURATE transcription.

You and I had an adventure whereby you recorded your fiancee's musical performance in a church. It didn't occur to me to say "by the way, if you hear any annoying squeaks you want to eliminate them right away or they will absolutely dominate the performance" because I assumed you'd give that sucker a listen and flatten out anything obviously horrible. Thing is? You aren't a sound mixer - you aren't an expert - so you don't know what's easily fixable and what isn't.

Each and every one of us has had a discussion about the fact that nobody can understand the TV anymore. Much ink and many pixels have been spilled as to why - nobody wants to say "sound mixers aren't being hired anymore because the media companies are too cheap." It'd take me an hour to fix most bad television; trust me, I watch it. But the shredditors sitting in the hot seat have no fucking idea how to do sound; they didn't train for that.
They know everyone at home is just going to turn on the subtitles. Which are transcribed. By humans. Because if you need transcription, you need ACCURATE transcription, and you know what? Transcribers make hella less than me.

People love to make fun of closed captioning. What they don't realize is that's usually a volunteer position. It's some nice old lady down at the station, typing in shorthand real quick. She's likely to lose her job; she's mostly there for local content anyway, and we all know that shit's gone. The transcription that doesn't need to be accurate is gonna be an AI extravaganza in a couple years because the local TV station doesn't care about being memed. Warner Discovery?

NVidia was hyped to shit because all the techbros refuse to acknowledge that if you need transcription, you need ACCURATE transcription, and since they (in general) understand exactly fuckall, including the big words in the prospectii they don't read, they're absolutely convinced they're nine months from having a robot girlfriend. It's not entirely their fault - for twenty years, advancement in professional tools has occurred because of the massive financial investment in consumer electronics. If you need a better CMOS chip for seventy million phones, it's more likely to have a bigger R&D budget than a better CMOS chip for seven thousand ENG cameras, QED. A well-trained AI would argue that better consumer goods can be predicted to filter down to better professional goods. But since none of the techbros understand the professional goods, how they work, how they're used or who uses them, they're utterly unprepared to evaluate a situation where prior trendlines don't extend. Much like a well-trained AI.

So the AI techbros have a choice: they can point out that DeepSeek sucks at reasoning and deal with the blowback over the fact that all their shit sucks at reasoning only slightly less, or they can pivot to "this isn't ackshully an improvement," which, true, but what it lays bare is that the whole "training" thing is a fucking sham. And there goes the market.

No shade - if a decent recording of your fiancee's musical performance was important, you would have hired out to get it done right. As it was, it was fun, and if it didn't work the biggest blowback would have been disappointment. You came at it like a dilettante with a toy, which was entirely appropriate. A professional with a tool would have solved those problems immediately, and - if you were to try again - you would, too.

There's a human pipeline between "dilettante with a toy" and "professional with a tool." We all know it, we all recognize it, and the AI techbros have been big on leaning on "training" to convince us all that there's an AI pipeline, too. It's bullshit. It's STRAIGHT bullshit. There's "mistakes it's going to make over and over" and there's "mistakes that have been spackled over a piece at a time so that Tay doesn't start spouting Nazi slogans within twelve hours of meeting Twitter." What the DeepSeek paper says is that everyone else's "training" is, in fact, spackling, and if you're willing to utterly disregard Tay and the Nazis you can have a model in minutes. We knew that in 2016, but Sam Altman figured he could WeWork it. And here we are.

Right now? AI is, for all intents and purposes, at the "toys for dilettantes" phase. DeepSeek demonstrates that as toys for dilettantes go, Chinese crap is always going to be cheaper than American techbro nonsense.
But more than that, it demonstrates that any aspect of AI that isn't "toys for dilettantes" is hand-applied spackle. And if your spackler isn't as good at your professional's job as the professional is, the toy will NEVER be a tool.

Sidenote: unless I’m mistaken, the whole “they created a model on the cheap for 90% less training costs!” aspect of this is a bit misleading.