I had a discussion with an old buddy about LLMs yesterday. He's writing fiction and is using ChatGPT like a rented mule. He's got a character who's modeled on Andrew Tate, but he wants him to be annoying, not a villain, so he'll type "give me ten things a sexist asshole would say about women that aren't awful." He's got a character who's a vampire, so he'll type "give me a list of insults a vampire would use against townsfolk." Or he'll be analyzing plot points and he'll say "give me a list of movie scenes that would radically change the movie if they were absent." In each one he goes through and picks what he likes. In the last one he argues with it.

I pointed out that he's basically using ChatGPT like an extended thesaurus and he agreed. I also pointed out that if you ask an LLM "give me the stochastic mean of this vector through a set of points" you are using the LLM as it was intended to be used - it will give you the mediocrity every time and, because it's basically a hyperadvanced Magic 8 Ball, every now and then it will be brilliant. But - I pointed out - when you ask it for an opinion it will fall down every time, because it has absolutely no handles on any of its inputs and outputs. You can't ask it to tell you what scenes are crucial because it has no understanding of any of the concepts underneath. What it has is a diet of forum posts that it will never give you straight.

Shall we play "how can ChatGPT do my job?" 'cuz they've been trying to AI-automate my job forever. See this guy? They were about $1500 back in '94. And what they do is analyze the audio signal passing through them looking for feedback, and then they drop one of eight filters on it. You can adjust the sensitivity to feedback, you can adjust the latch, you can adjust the release, you can adjust the aggressiveness.
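Roughly what a box like that does under the hood, sketched with a naive persistent-peak heuristic - the frame sizes, thresholds, and names here are mine for illustration, not anything Sabine shipped:

```python
import numpy as np

def detect_feedback(signal, sr, frame=4096, hop=2048, persist=6, ratio=10.0):
    """Flag frequencies whose spectral peak dominates and persists across
    consecutive frames -- the classic signature of a feedback loop.
    Returns a list of candidate feedback frequencies in Hz."""
    counts = {}
    flagged = []
    for start in range(0, len(signal) - frame, hop):
        chunk = signal[start:start + frame] * np.hanning(frame)
        mag = np.abs(np.fft.rfft(chunk))
        k = int(np.argmax(mag))
        # A feedback tone towers over the spectral mean; music rarely does.
        if mag[k] > ratio * mag.mean():
            counts[k] = counts.get(k, 0) + 1
            if counts[k] == persist:
                flagged.append(k * sr / frame)  # bin index -> Hz
        else:
            counts.pop(k, None)  # peak moved on; reset its persistence count
    return flagged
```

In a real unit this decision feeds the filter allocator - your "sensitivity" knob is roughly `ratio`, your "latch" is roughly `persist`.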
They were really big until about 2005 or so, when it became cheap and easy to TEF sweep a room and ring it out - EQ out the frequencies that cause things to ring. I'm sitting here surrounded by ten speakers at 85dB, and having spent an afternoon mapping and collating and inserting between 4 and 15 filters per channel, I can't get feedback if I hold a condenser in front of the left main. Could an AI have done that? Fuck yeah. That would have been delightful. But not without me moving the mic sixty times, so what time am I actually saving?

That active feedback-seeking reduction thing has made it into machine tools - each servopak on my mill has more filters than that Sabine. And in general, the approach everyone takes is "set as many as you need to kill steady-state, use the roaming ones carefully," because who knows what modes you'll run into with this or that chunk of aluminum strapped down getting chewed up.

Everything I've got is already a waveform. We've been using Fourier transforms to operate on them for 40 years. My life is nothing but math. And despite the fact that GraceNote has literally released every song they know about as training data, telling the AI "make my mix sound better" still fucking failwhales. Like, on a basic, simple level. It understands what the sonogram of a song should look like, but that's like reconstructing a fetus from an ultrasound. What you get is uncanny valley nightmare fuel.

I don't need the mediocre middle of a million mixes, I need excellence. And excellence comes from humans because it is, by definition, not the mean. Anyone expecting that a machine purpose-built to give you a statistical average can give you only the good outliers is going to be disappointed, for the simple fact that the machine doesn't understand "good" or "bad" - it understands "highly rated" or "much engaged with." The machine thinks this is the best Jurassic Park cover ever made: And the only way you can deal with that is to nerf it out on a case-by-case basis.
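For the ring-out side: each of those 4-15 filters is just a narrow notch parked on a measured resonance. A sketch using the standard RBJ Audio EQ Cookbook notch formulas - the ring frequencies below are invented for illustration:

```python
import math

def notch_coeffs(f0, sr, q=30.0):
    """Biquad notch (RBJ cookbook): unity gain everywhere except a
    narrow cut centered on f0. High Q = surgical, low Q = wide.
    Returns (b, a) normalized so a[0] == 1."""
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [x / a[0] for x in b], [x / a[0] for x in a]

# One notch per ring frequency measured during the sweep (illustrative values).
ring_freqs = [247.0, 1180.0, 3150.0]  # Hz
filters = [notch_coeffs(f, 48000) for f in ring_freqs]
```

The math guarantees exactly zero gain at f0 and unity at DC and Nyquist, which is why a well-placed notch kills the ring without audibly touching the rest of the mix.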
You could argue that LLMs are good for facts but not opinions, but the problem is that their method for handling facts only works for opinions. Are they useful? Yes. Are they a tool that will make big changes to a few industries? I don't see how they can't. Am I honestly excited to see their actual utility? You damn betcha. But where the world is now is this: people who don't understand AI inflicting it on people who don't need AI, to the detriment of people who don't want AI. That's it. That's the game.
Ahh, of course, the feedback thing. I don't do anything live, so I can get away with a pretty simple gate and headphones. No chance of loops. I hadn't really thought about how I would suppress feedback loops without killing the channel or at least lowering the volume. But now I completely get it. I got really close to connecting the dots a long time ago when I suggested basically TEF in a convo with you a few years back. My mistake was thinking about mixing - I was thinking about minimizing phase cancellations as a function of frequency. But duh:

My co-worker would bolt a plasma spectrometer with accelerometers on it to a vibration table, with some special isolators between the instrument and mounting baseplate, and we'd shake them with a sine sweep survey starting from like 1 Hz up through, I dunno, 40 kHz or something like that, with a power spectral level input to govern the amplitude around each frequency. JUST like what you're doing with mics? We do it too. We'd already calculated the approximate normal modes of the instrument from 3D CAD models (we used Ansys), so we notched the input spectral energy around the normal modes so we didn't overdrive the thing during vibe testing. And then we'd shake it with the launch environment - a white-noise spectrum, still modestly notched around the normal mode frequencies (which might have needed slight readjustment from the sine sweep results).

By the way, at GSFC, they have like a 10 foot diameter gramophone to just blast shit with. I'd guess it was for Saturn V's, hahah, but I don't know! Didn't get the story. (edit: ohhhhh, I think it might've been for cleaning, especially considering that it was being kept in one of the anterooms bordering a clean room. They must be using the thing to knock any loose particles off of equipment or instruments with sound. We did the same thing with an ultrasonic bath after de-greasing parts with trichloroethylene, before the final isopropyl wipe down.
They'd soundblast it after that. Probably a pretty clean room.)

Which has its uses, heh, though perhaps mostly uncommercializable. Absolutely agree. The LLM is navigating topological features inside a parameter space. With boundaries, and curvature, yeah. It's what I'm doing for the magnetosphere, actually. Same kind of idea, except with, I dunno, maybe a billion axes instead of the four I use. But yeah, sometimes if you move just a little bit in the parameter space from where you started last time, or you start off in a slightly different direction, the topology might map to some drastically different places. Occasionally they will conjoin into beauty. AISI: artificial idiot savant intelligence.

Hadn't heard any AI tunes yet, and figured there was good reason for it. I don't go looking for them, and a really good one would have found its way to me by now if it existed.

We don't, agreed. I only want it for selfish reasons. And I only want it if I can feel assured it isn't going to cripple society. So I don't want it. Nvm. Feels like we're all getting a better handle on the level of complexity to expect, though. It'll change. Hopefully not too fast - this has apparently been jarring enough for the world already - but AGI in two years? I just don't think so, and I'm 100% sure that ASI isn't a mere three years out.

What you get is uncanny valley nightmare fuel
I also pointed out that if you ask an LLM "give me the stochastic mean of this vector through a set of points" you are using the LLM as it was intended to be used - it will give you the mediocrity every time and, because it's basically a hyperadvanced Magic 8 Ball every now and then it will be brilliant.
...people who don't need AI...
that sounds so fucking awesome

Well what you're doing is ringing out the frequency response, right? You're trying to find constructive modes that are going to fuck you over while strapped to a rocket. You do that with an equalizer if it's sound, or filters if it's an electromechanical system. I've linked this before, the eldritch magic starts at 3:35: For the record, the last time I used ANSYS it was a command-line program that ran on a DEC Alpha.

that sounds so fucking awesome

You are grossly underestimating the ease with which bad mixes can be produced. The computer music cats have been doing "generative music" for a long time. It's easy as shit and doesn't require an LLM. Most of them are some form of neural network somewhere; "random ambient generator" has been an off-the-shelf product category for 20 years. Here's a free plugin for Kontakt. Here's a walk-through for Ableton.

My co-worker would bolt a plasma spectrometer with accelerometers on it to a vibration table with some special isolators between the instrument and mounting baseplate,
and we'd shake them with a sine sweep survey starting from like 1 Hz up through, I dunno, 40 kHz or something like that, and a power spectrogram level was input to govern the amplitude around each frequency. JUST like what you're doing with mics? We do it too.
We'd already calculated the approximate normal modes of the instrument from 3D CAD models (we used Ansys)
By the way, at GSFC, they have like a 10 foot diameter gramophone to just blast shit with.
Which has its uses, heh, though perhaps mostly uncommercializable.
Hadn't heard any AI tunes yet, and figured there was good reason for it. I don't go looking for them, and a really good one would have found its way to me by now if it existed.
Absolutely. The normal modes. As it goes, first is the worst, second is the best, third is the one with the treasure chest. Sometimes it's "hairy chest" - depends on the elementary school.

When people use generative stuff in music well, it's noted. One of the most ridiculous arpeggio parts ever was made with Omnisphere's arpeggiator and then meticulously adapted for guitar. Probably took a little bit of practice (the rest of my life, in my case).

Well what you're doing is ringing out the frequency response, right?
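For what it's worth, the core of an arpeggiator like that - stepping through a held chord with a controlled dose of randomness - is only a few lines. A toy sketch; the chord voicing, knob names, and skip logic are all invented, nothing here is Omnisphere's actual algorithm:

```python
import random

def arpeggiate(chord, steps=16, octaves=2, skip_chance=0.2, seed=0):
    """Walk upward through a held chord spread across N octaves,
    occasionally skipping a step -- the 'happy accident' knob.
    Returns a list of MIDI note numbers."""
    rng = random.Random(seed)  # seeded, so a good riff is reproducible
    pool = [n + 12 * o for o in range(octaves) for n in sorted(chord)]
    out, i = [], 0
    for _ in range(steps):
        out.append(pool[i % len(pool)])
        i += 2 if rng.random() < skip_chance else 1  # skip = wider leap
    return out

riff = arpeggiate({52, 55, 59, 62}, steps=16)  # E minor 7-ish voicing
```

Print `riff`, pick the passes you like, throw the rest away - same workflow as arguing with the arpeggiator, just cheaper.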