mk  ·  274 days ago  ·  link  ·    ·  parent  ·  post: OpenAI's Sora

Where’s the evidence of that asymptote?

kleinbl00  ·  274 days ago  ·  link  ·  

It's visual and obvious, dude. The 1x dog is a nightmare dog, the 4x dog is a fuzzy dog, the 16x dog is a less-fuzzy dog. But the 16x cat still has occasional spurious limbs.

It's obvious that the 16x cat is a sparkly cinematic 4k-lookin' cat, but there's nothing in the model to demonstrate that a 64x cat is any less likely to pop an extra leg every now and then. Photorealistic renders of things that can't exist have been a staple since Deep Dream, and what's clear is that the cost-per-pixel is linear while the quality-of-massed-pixels hasn't changed appreciably. Further, accuracy isn't even a consideration - "close-up of a short furry monster kneeling" gets you a short furry monster squatting, and "can it tell the difference between kneeling and squatting" is NOT a throw-away problem. More than that, it's clearly not a focus of development.

Devac  ·  270 days ago  ·  link  ·  

    It's visual and obvious, dude.

I'd go with 'trivial' or 'left as an exercise for the reader'.

You're right, though. Generators seem to be less able to remove 'turbulence' from the output, but rather move it someplace else within it and hope for the best. Like, I tried to make some character art for my game, and it can pull off some handsome faces, for sure more detailed than I'd have patience to draw, but the clavicle-to-armpit areas look inexplicably like Munch's melted cheese period.

kleinbl00  ·  270 days ago  ·  link  ·  

    Like, I tried to make some character art for my game, and it can pull off some handsome faces, for sure more detailed than I'd have patience to draw,

And I think this is key. It's stupid to argue these problems won't be fixed. Give it a year and it'll pull off handsome faces without string cheese anatomy. But who's using that?

You're using it for atmosphere and ambience around something where you would have simply done without. You weren't about to pay a human to draw those characters. This is very much like my own use of AI - "Hey Midjourney give me a picture of 'Fear and Loathing in Enumclaw' to share with five friends." One of those friends tried to get Microsoft Copilot to give him a logo for his studio; they were all awful. Three or four of us pointed out that he could get up on Fiverr and do infinitely better. Is that the argument, ultimately? That AI will do a better job than Fiverr? ...cuz... it's more expensive than Fiverr. It should. And also everyone on Fiverr is going to be hella better at using AI to get you what you want than you are.

The tools are always going to have shortcomings, all tools do. Professionals learn how to work around those shortcomings to do a better job faster. To me? Much of this discussion is "ZOMG nail guns are going to put framing carpenters out of business."

Devac  ·  270 days ago  ·  link  ·  

I'm not arguing those problems won't go away, or that it's any more or less than a tool. You can give me that much I hope.

And you're right that I wouldn't pay a human for those, at least not unless they were recurring NPCs or something like that. I do commission background sets regularly because 1) the free/cheap/generated ones are usually on par with what I can make, 2) what I can make suffers a severe pizazz deficiency. Lotsa bang for a buck, too.

kleinbl00  ·  270 days ago  ·  link  ·  

Yeah, the best advice in nearly any endeavor is "hire the best expert you can afford and do what they tell you," and if you are paying artists for a campaign, that is fuckin' awesome. No shade intended.

The business model of all these AI companies, on the other hand, is "get people who would never pay experts to pay us because they don't believe in expertise."

am_Unition  ·  267 days ago  ·  link  ·  

If it's like a nail gun, then it's still problematic, because almost every new piece of tech disproportionately benefits the capitalists. Maybe the number of framing carpenters stays the same, but they're upping output, building houses quicker, and a proportionate rise in wage is doubtful, or at least atypical. The builders and real estate investors profit even more, hurrah!

Even if this all never becomes a Thing, I think it'd be cool to have a university or public-funded LLM unleashed on everything public domain and voluntarily (lawfully) donated libraries and content. Do you think it'd be worth it?

lol now I'm imagining a Trump admin. procedure for "expertise codebase corrections", governing what is allowed to be input when, like, the executive branch LLM is allowed to assimilate feedback from expert-level critique.

    PURPOSE OF CODEBASE CORRECTIONS

      -- TO ASSIST PROGRAM WITH TOP-LEVEL ASSESSMENTS OF HURRICANE SCIENCE AND PREDICTIONS

    APPLICATION

      -- EMERGENCY ALERT INSTRUCTION

      -- FORECASTING

      -- EDUCATION

      -- GLOBAL WARMING INTEGRATION

    PARTICIPANTS

      -- DR. WILLIAMSON, UNIV. FL

      -- DR. NAKITOSHA, TOKYO UNIV.

      -- BILL ACKMAN, BILL ACKMAN

      -- SAM ALTMAN, MULTI-TRILLIONAIRE

      -- DONALD TRUMP, LORD

      -- VIRTUAL SHARPIE, DONALD TRUMP

      -- NUCLEAR BOMB, U.S./DONALD TRUMP

k back to reality. If something good gets put on iPhone, that could be the push towards mass adoption that'd matter. People would get used to using it. Apple's, of course, already way in deep with it financially, too, but hasn't deployed much of anything yet.

Self-driving cars? Too hard of a problem to solve, especially without privatizing infrastructure. NFTs? Right-click "save as" for the digital, seek state-enforceable means of ownership for the physical. Crypto? I have a debit card. This? The only issue I can see is what you've already NAILED, mr. framing carpenter, the legal field. But I still think big parts of this stuff are going to make it into our lives. (Already has, to a degree. The TikTok algo is probably the most successful implementation so far, financially.) Obviously I don't mean only Sora or images, but stuff like accruing or building any type of content hyper-shaped to your tastes, learning a new language, making it code for you, or, apparently, for some people, falling in love with an algorithm and feeling devastated when you're locked out of your profile or your hard drive crashes.

Oof, hey, if you wanna watch it fail, you could try to have it teach you how to play an instrument. That would be content. "LLM, please write a story about a man who asked an LLM to teach him how to play an instrument, but was met with extreme failure." This was pretty good, even without the twist, but my wife called it about halfway into the thing: "They probably made ChatGPT write the ChatGPT episode". Yup. They did.

I really do think people will use this on a massive scale, and pretty quickly. Some jobs will be lost, and some jobs will be created. Not terribly sure how much of each.

kleinbl00  ·  267 days ago  ·  link  ·  

    If it's like a nail gun, then it's still problematic, because almost every new piece of tech disproportionately benefits the capitalists.

and there it is.

Fundamentally, everyone in a capitalist society is a capitalist, either voluntarily or involuntarily. I agree fully - tools can definitely be used to the advantage of one social class over another. We have no newspapers, for example (middle class) because of the annihilation of classified ads (lower class). Farming is concentrated (upper class) because of the mechanization of individual agriculture (lower class).

But going "this tool is the problem" is an utter and total waste of time if what you're trying to do is protect society.

Are LLMs plagiarism machines? Mos def. Are they useful without plagiarism? Prolly not. Do we have mechanisms in place to protect against plagiarism? Hell yeah all that has to happen is for the techbros to learn they're not above the law.

yet when I say "it's all plagiarism" what I get, EVERYWHERE, is "no no man it's fuckkn eldritch magic that will doom us all."

am_Unition  ·  270 days ago  ·  link  ·  

We'll tell our grandchildren "we used to make our own handsome faces".

I'm usually not on the side of techbros, but I do think LLMs and image/video stuff is some of the most disruptive technology to come along in about a decade, maybe more. But to be fair, I dunno why deepfakes haven't been more impactful, it's kind of a similar vein. Maybe the most exciting thing is the possibility that this will eventually destroy the internet by feeding its outputs back into inputs until the web fractalizes into nesting outrage bubbles interspersed with fake cute animal .gifs.

Since I'm self-righteous, I'd like to think one of the last things it'll come for is physics and math. Like being able to publish something novel. I think an LLM's best chance would be going the experimental route, sifting through public-domain data and finding something the existing literature had missed. It might have the hardest time doing some of the hand-wavey stuff theorists do to get analytic results, when you need a deeeeeep understanding of exactly what the maths represent, or the motivation for using a certain approach or approximation, etc.

Anyway, I hope you are well. :)

kleinbl00  ·  270 days ago  ·  link  ·  

    I'm usually not on the side of techbros, but I do think LLMs and image/video stuff is some of the most disruptive technology to come along in about a decade, maybe more.

True or false: image creation is an area in which you have practice and expertise.

    But to be fair, I dunno why deepfakes haven't been more impactful, it's kind of a similar vein.

See, you're going "everyone is an idiot but me." Stop that. It's because if you want the fake to work it has to be carefully crafted to not stretch credulity. "Huh, look at all the Taylor Swift nudes! I wonder if any of them are real!" -no one

    Maybe the most exciting thing is the possibility that this will eventually destroy the internet by feeding its outputs back into inputs until the web fractalizes into nesting outrage bubbles interspersed with fake cute animal .gifs.

Here's my gremlin opinion:

Microsoft funds OpenAI because they KNOW it's poisoning Google.

Example: We've been watching Hotel Hell with dinner. One of the games we play is "what happened after Gordon left." This involves a web search - and it's a perfect web search for AI. It's content nobody really cares about, driven by a large mass media exposure with a long tail (the episodes aired in 2012). Now - check this out.

That's an AI-generated website. It's also the top hit for something on Hotel Hell. If you dig into any of the blogs dedicated to "where are they now" reality TV updates you learn the place closed in 2020. If you look on Trip Advisor, you see that the last review was in 2020. But if you look on Facebook, Yelp, Kayak or anywhere else, there's a link farmer with a phone number and an email address who totally doesn't have a hotel but will absolutely take your credit card number! Bing's results aren't much better, but then, Microsoft doesn't make their money from search and never will so fuck search.

    Since I'm self-righteous, I'd like to think one of the last things it'll come for is physics and math.

LLMs have no deep understanding, so they'll never come for anything that requires deep understanding. Shit, LLMs have no understanding. How many legs does an ant have? How many pawns on a chess board? These are the constraints that hobble an LLM; they don't make them better, so they're never going to grok that shit. If you need something that knows how many fingers hands should have, you need something other than an LLM.

am_Unition  ·  267 days ago  ·  link  ·  

    True or false: image creation is an area in which you have practice and expertise.

Kind of. Learned Photoshop in high school, messed with Illustrator recently. I script command line image manipulation (ImageMagick) and video (ffmpeg). I'm only artsy enough to upset my Christian mother sometimes. I think you're asking about that, specifically, and no, I'm not the best painter, sculptor, drawer, logo-designer, or whatever. Sora's doing wayyyyyy better than me.

    See, you're going "everyone is an idiot but me."

Kinda, but I'm an idiot too. Just hopefully not about this. You've posted stuff yourself that shows how quick so many people are to be fooled by some AI images. People are busy. They're in a hurry.

    Microsoft funds OpenAI because they KNOW it's poisoning Google.

Oh this is absolutely true.

    We've been watching Hotel Hell with dinner. One of the games we play is "what happened after Gordon left."

My wife and I have literally done this for years. And Kitchen Nightmares, too.

The website scam is pretty solid, there's gonna be a lot of that. It's already illegal, I'm sure, and companies should get in big trouble if their LLM is an accessory to fraud. The litigation surrounding stuff that's in a more morally gray area will be thrilling, I'm sure. One way or another.

    LLMs have no understanding.

I understand. Ha, no, but it's kinda the ol' "magic is science we don't understand yet" thing. If it's passing the Turing test, it will feel intelligent. Indistinguishable, most of the time. It's easier and more productive to talk to than at least half of America online. And you can just photoshop out the extra digits and save yourself potentially hours upon hours of time without having to synthesize too much, my dude.

Again, yeah, legal stuff's gotta get sorted, but this tech is mos def my bet for most disruptive in this generation. Like a 15-year span. It'll be: cell phones -> internet -> social media -> LLMs. Wish I could tell you what I thought was next. Would if I could.

kleinbl00  ·  267 days ago  ·  link  ·  

FUN FACT: The Turing test was about "can you tell if I'm gay" not "can you tell if I'm a robot."

It's like that goddamn Potter Stewart quote - when you throw it in my face, it reveals that you've found a platitude to model your understanding on, not a theory.

Devac  ·  270 days ago  ·  link  ·  

    We'll tell our grandchildren "we used to make our own handsome faces".

Cyrodiil's Jesus! No, I tried generating something a touch less 4chan-does-Amnesia and more Balkan Romani without the perpetually disappointed look.

    Since I'm self-righteous, I'd like to think one of the last things it'll come for is physics and math.

Theory is much less about hand-waving connections between deeply understood parts and more about doing the math with as few preconceived ideas as possible. Don't imagine what an atom/potential/sun is; calculate and interpret what comes out, see if anyone tested something similar / calculated it in a similar regime. Propose an experiment, try to make a feedback loop with someone (or something) that'd bounce ideas back. It's everything else that ought to be automated, 'cause the amount of paperwork they try (underline: try) to pile on me is just fucking ludicrous.

The problem is that models aren't better at determining they're wrong than humans, and are unlikely to learn it since their very nature is numerical bias. And, frankly, LLM/models/AI/whatever should have less of a problem replacing philosophy, because doing proper math requires pencils, paper and a wastepaper basket for wrong ideas... whereas philosophers seem to only ever need the first two.

Otherwise, I kinda stopped paying attention to anything that isn't directly related to my interests tbh. Seems like everyone is losing their shit over anything and everything in the news/work/word holes, while I'm tackling the deeper mysteries of whether it's better to keep seeing someone with a 3-year-old and see where it leads, or cut it loose before things get difficult for the kid more so than for us.

    Anyway, I hope you are well. :)

Same to you. We gotta do some meetup. I wanted to organize one in January, but my health took a dip, maybe it's time to try again.

am_Unition  ·  267 days ago  ·  link  ·  

Agree, the complexity put into making sure conclusions are correct-ish is going to be hard to replicate.

Philosophy deserves every burn. Sorry. But only a little.

We use machine learning pretty commonly now in my field. It's been harder for some of the older folks to grasp exactly how it works. But yeah, an algo isn't going to drive it right or understand the shortcomings. Not sure why you'd want a middleman, either.

I'm not losing my shit, no worries. Well, kinda. I'm always at least kinda losing my shit, though. And hey, kids are... a lot... but I will say, men of much less resourcefulness than yourself have found fulfillment in adopting a kid. I struggle with patience, personally.

I'll try to make the meetup, but my schedule really clears up in mid April.

Devac  ·  267 days ago  ·  link  ·  

    Philosophy deserves every burn. Sorry. But only a little.

Eh, I'm being my usual exaggerated dismissive self, but it's sad that the two camps most visible to me are essentially "it's only so unbiasedly rational of us to consider how many AGI could dance on the needle's head" and "mathless/IFLS quantum vibes" types. It's not even that I don't see the merits of those two, let alone philosophy at large, but that I have absolutely no fucking interest in either, yet they keep talking at me like I'm a lobotomite for not caring.

And no, wasoxygen, I'm not calling you out specifically, it's just how you Yudkowsky-ites communicate. We're cool, I hope.

    Not sure why you'd want a middleman, either.

Well, ML/whatever excels at finding patterns, even if it can't/won't explain them. Having a tool that goes "exploring these parameter spaces is most likely worthless" or even "isn't it funny how second order solitons only form when this parameter is divisible by 17?" may be invaluable to the right person who can find context for those observations. That's the "(or something)" in my previous comment.

Tying this to "making sure conclusions are correct-ish is going to be hard to replicate." <- that's the bottleneck as far as I can see. First you have to separate seeds from chaff, and then make sure those seeds aren't blighty or cleverly disguised angry bears. I wouldn't mind science becoming (even more) akin to computer-assisted chess, though. Tools are tools, experts use tools better, so that checks out too.

    Losing shits and meetups

Wasn't singling you out here, though I hope you take care of yourself and your wife. And it's not like I don't understand, or lack the presence of mind to understand, why people are so agitated. I simply can't keep dealing with it. It's been two goddamned years, and I can't even force myself to go to Ukraine anymore. I haven't seen the worst, and it's too much. Focusing on what I can affect has to be enough for me right now.

As to meetups: no worries, I can make another one in April or May. They're about as informal as flip-flops anyway.

am_Unition  ·  265 days ago  ·  link  ·  

Since I'm like public journaling now instead of just allowing thoughts to pass through my head without any reinforcement and then showing up to hubski like "oh, I don't have anything", I'll give an example: "If I tried to LLM at work".

There's a global model of the magnetosphere and surrounding solar wind environment that I run through a public website. I query the model for a certain day or time that I want (step 1). Wait a few days, then I look through the results and do the science (step 2).

For step 1, there is no benefit in having a program input the date and time with a few choices that I make for which sub-components of the magnetosphere model I want to use, because it takes about five minutes. For step 2, the way that I look through the data requires an entire methodology in which I'm using outputs from the model to re-input back into the next time-step for visualization. I'm tracing magnetic field lines through time/space and the magnetosphere as it convects (I've automated it using a python webcrawler and maths to produce a movie). The idea that I could simply ask an LLM to do this is pretty funny. It's so specialized that I can guarantee it would fail immensely to know wtf I meant when I said "take the results from this model run and show me a movie of magnetospheric convection. I want bundles of magnetic field lines that pass through the reconnection site near satellite XYZ emphasized". I think the amount of additional information I would need to feed it for the thing to even come close is infinite, because it's probably never going to give me something good. More on that below. But let's say that it does. It's the game of "how do I know it's right?" again. I've gotta inspect all of the code that it wrote to do it, and I can guarantee that it's gonna be an implementation that's a way different structure than mine. I'm going to put in so much effort checking it that I'm not going to save an iota of time.
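For a sense of what that step 2 machinery actually looks like, the field-line tracing can be sketched roughly like this - a toy dipole standing in for the real m'sphere model output, and every name and number below made up for illustration:

```python
import numpy as np

def dipole_B(r, m=np.array([0.0, 0.0, -1.0])):
    """Point-dipole field at position r (arbitrary units) - a stand-in for
    interpolated magnetosphere model output."""
    rmag = np.linalg.norm(r)
    rhat = r / rmag
    return (3.0 * rhat * np.dot(m, rhat) - m) / rmag**3

def trace_field_line(start, field=dipole_B, step=0.01, n_steps=1000):
    """Follow a field line by integrating dr/ds = B/|B| with an RK4 step."""
    def unit_B(r):
        B = field(r)
        return B / np.linalg.norm(B)
    pts = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        r = pts[-1]
        k1 = unit_B(r)
        k2 = unit_B(r + 0.5 * step * k1)
        k3 = unit_B(r + 0.5 * step * k2)
        k4 = unit_B(r + step * k3)
        pts.append(r + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)

# One traced line; the real pipeline traces bundles of these per time-step,
# then renders the frames into the convection movie.
line = trace_field_line([1.0, 0.0, 0.5])
```

The real version swaps the dipole for time-varying model output and repeats per frame - which is exactly the part a prompt isn't going to conjure.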

OK, so I have my video, one way or the other. I can now look through it and do the actual science, linking it into an analysis of data from that satellite. There is simply no fucking way that any LLM or AGI on the foreseeable horizon could do this. Doing the science means comparing the new m'sphere model outputs to the existing data analysis, linking new interesting/publishable physics of the two, discussing how this is different or similar to previous studies, and thinking about how the results can be applied towards the next step. It requires a deep understanding of how this contributes to the field. This is at least approaching ASI territory.

Furthermore, for the science, the LLM or whatever it is has no interest in images. It cares only about model outputs. It would actually have to perform the conjugate of what I have to, and take the images from previous movies of magnetosphere convection and put them into a form for comparison with the magnetosphere model output data. The whatever-it-is will have to know how to transform the data into formats suitable for comparison, and then it'll have to have correctly ingested the publishing record to form a pseudo-understanding of everything. Can't imagine the lengths it would have to go to output something like "we can see that if the only difference is a Y-component reversal of the upstream magnetic field in the solar wind, the reconnection site moves southward towards the spacecraft, because the X-line is shifting to accommodate cusp reconnection relocating from the cusps on the dawn/north and dusk/south quadrants to the dawn/south and dusk/north quadrants, respectively". Would the Whatever know that it'd be good to run the magnetosphere model I used for the period of time used in the previous study, which used a completely separate m'sphere model, to factor in the differences between the two models that might explain the behavior instead? Does it know that it's important to comment on the distance from the satellite to the reconnection site? Is the data analysis conclusion that the satellite is at a reconnection site actually wrong? Are there shortcomings in the m'sphere model that help explain why the m'sphere model's reconnection site differs from where we actually found it?

It's obviously not advisable to expect this inside of two or several decades. Maybe it could build me a movie, but I doubt it. Unless I'm guaranteed that a running instance of my efforts to coach it is preserved and always available should I achieve a successful/correct movie once, and that any new pseudo-understanding I had to lead it to is properly assimilated into the root system, there's no reason to even begin trying. Correct me if I'm wrong, but that's not something publicly available yet, and I can see massive hurdles to it ever happening. lol, what am I gonna say? "That's right! You finally did it. Now, don't forget how to do this the next time I ask, I don't want to have to spend another seven months filling in the gaps in your understanding of this again"? Hahahha

"Filling in gaps of understanding" deserves a dissection, because it's more general, not just for physics or science, but for anything. The process looks like hell. Because, like we've said, the LLM doesn't know what's "correct", it's not going to ask you any substantive questions. It's going to output what it outputs, and you'll have to look at the outputs, and tell it why it's wrong. Iteratively. Having it fix one thing could break another. It could even infinitely diverge instead of ever converging on the solution you want it to. This all assumes that you know what you're looking for, what "right" means. And then, even if it does get things right, yeah, unless you work at the company that owns the LLM, it's all forgotten when you close the instance.

Job security. Job security for all!

kleinbl00  ·  265 days ago  ·  link  ·  

I had a discussion with an old buddy about LLMs yesterday. He's writing fiction and is using ChatGPT like a rented mule.

He's got a character who's modeled on Andrew Tate but he wants him to be annoying, not a villain, so he'll type "give me ten things a sexist asshole would say about women that aren't awful." He's got a character who's a vampire so he'll type "give me a list of insults a vampire would use against townsfolk." Or he'll be analyzing plot points and he'll say "give me a list of movie scenes that would radically change the movie if they were absent."

In each one he goes through and picks what he likes. In the last one he argues with it. I pointed out that he's basically using ChatGPT like an extended thesaurus and he agreed. I also pointed out that if you ask an LLM "give me the stochastic mean of this vector through a set of points" you are using the LLM as it was intended to be used - it will give you the mediocrity every time and, because it's basically a hyperadvanced Magic 8 Ball every now and then it will be brilliant. But - I pointed out - when you ask it for an opinion it will fall down every time because it has absolutely no handles on any of its inputs and outputs. You can't ask it to tell you what scenes are crucial because it has no understanding of any of the concepts underneath. What it has is a diet of forum posts that it will never give you straight.

Shall we play "how can chatGPT do my job?" 'cuz they've been trying to AI automate my job forever.

See this guy? They were about $1500 back in '94. And what they do is analyze the audio signal passing through them looking for feedback, and then they drop one of eight filters on it. You can adjust the sensitivity to feedback, you can adjust the latch, you can adjust the release, you can adjust the aggressiveness. They were really big until about 2005 or so, when it became cheap and easy to TEF sweep a room and ring it out to EQ out the frequencies that cause things to ring - I'm sitting here surrounded by ten speakers at 85dB and, having spent an afternoon mapping and collating and inserting between 4 and 15 filters per channel, I can't get feedback if I hold a condenser in front of the left main.
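The core of what a box like that does can be sketched in a few lines - find the frequency that's ringing, drop a notch on it. This is a toy, not the Sabine's actual algorithm; the detection threshold, the Q, and the cookbook-style biquad notch are just reasonable guesses:

```python
import numpy as np

def detect_feedback(signal, fs, threshold_db=20.0):
    """Flag the strongest spectral peak if it towers over the median level,
    the way a feedback suppressor flags a ringing frequency."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    peak = int(np.argmax(spectrum))
    margin_db = 20.0 * np.log10(spectrum[peak] / (np.median(spectrum) + 1e-12))
    return freqs[peak] if margin_db > threshold_db else None

def notch(signal, fs, f0, q=30.0):
    """Apply a biquad notch at f0 (RBJ-cookbook coefficients), direct form I."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1.0, -2 * np.cos(w0), 1.0])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    b, a = b / a[0], a / a[0]
    y = np.zeros_like(signal)
    x1 = x2 = y1 = y2 = 0.0
    for i, x in enumerate(signal):
        y[i] = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = x, x1, y[i], y1
    return y

# A 1 kHz "ring" buried in mild noise: detect it, then notch it out.
fs = 48000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 1000 * t) + 0.01 * np.random.default_rng(0).standard_normal(fs)
f_ring = detect_feedback(sig, fs)
cleaned = notch(sig, fs, f_ring)
```

The hard part the hardware solves is doing this continuously, in real time, with eight roaming filters and adjustable latch/release - not the math.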

Could an AI have done that? fuck yeah. That would have been delightful. But not without me moving the mic sixty times so what time am I actually saving?

That active feedback-seeking reduction thing has made it into machine tools - each servopak on my mill has more filters than that Sabine. And in general, the approach everyone takes is "set as many as you need to kill steady-state, use the roaming ones carefully" because who knows what modes you'll run into with this or that chunk of aluminum strapped down getting chewed up.

Everything I've got is already a waveform. We've been using Fourier transforms to operate on them for 40 years. My life is nothing but math. And despite the fact that GraceNote has literally released every song they know about as training data, telling the AI "make my mix sound better" still fucking failwhales. Like, on a basic, simple level. It understands what the sonogram of a song should look like, but that's like reconstructing a fetus from an ultrasound. What you get is uncanny valley nightmare fuel.
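The ultrasound line is literal, by the way: a sonogram keeps magnitudes and throws the phase away, so you can match the picture perfectly and still get the waveform wrong. A toy sketch of that:

```python
import numpy as np

def stft_mag(x, win=1024, hop=256):
    """Magnitude-only spectrogram: the 'sonogram' view, phase discarded."""
    w = np.hanning(win)
    frames = [np.fft.rfft(w * x[i:i + win]) for i in range(0, len(x) - win, hop)]
    return np.abs(np.array(frames))

def resynth_zero_phase(mag, win=1024, hop=256):
    """Naive overlap-add resynthesis from magnitudes alone (all phases zero)."""
    y = np.zeros(hop * (len(mag) - 1) + win)
    for k, frame in enumerate(mag):
        y[k * hop:k * hop + win] += np.fft.irfft(frame, n=win)
    return y

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)  # a clean 440 Hz tone
mag = stft_mag(x)                # its sonogram looks exactly right...
y = resynth_zero_phase(mag)      # ...but the rebuilt waveform doesn't match
```

The spectrogram peak sits right at 440 Hz, yet the resynthesized signal is a phase-scrambled cousin of the original - which is roughly why spectrally plausible AI mixes can still sound wrong.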

I don't need the mediocre middle of a million mixes, I need excellence. And excellence comes from humans because it is, by definition, not the mean. Anyone expecting that a machine purpose-built to give you a statistical average can give you only the good outliers is going to be disappointed for the simple fact that the machine doesn't understand "good" or "bad" it understands "highly rated" or "much engaged with." The machine thinks this is the best Jurassic Park cover ever made:

And the only way you can deal with that is to nerf it out on a case-by-case basis.

You could argue that LLMs are good for facts but not opinions, but the problem is their method for handling facts only works for opinions. Are they useful? Yes. Are they a tool that will make big changes to a few industries? I don't see how they can't. Am I honestly excited to see their actual utility? You damn betcha. But where the world is now is this:

People who don't understand AI inflicting it on people who don't need AI to the detriment of people who don't want AI.

That's it. That's the game.

am_Unition  ·  265 days ago  ·  link  ·  

Ahh, of course, the feedback thing. I don't do anything live, so I can just get away with a pretty simple gate and headphones. No chance of loops. Hadn't really thought about how I would suppress feedback loops without killing the channel or at least lowering the volume. But now I completely get it. I got really close to connecting the dots a long time ago when I suggested basically TEF in a convo with you a few years back. My mistake was thinking about mixing. I was thinking about minimizing phase cancellations as a function of frequencies. But duh:

My co-worker would bolt a plasma spectrometer with accelerometers on it to a vibration table with some special isolators between the instrument and mounting baseplate, and we'd shake them with a sine sweep survey starting from like 1 Hz up through, I dunno, 40 kHz or something like that, and a power spectrogram level was input to govern the amplitude around each frequency. JUST like what you're doing with mics? We do it too. We'd already calculated the approximate normal modes of the instrument from 3D CAD models (we used Ansys), and so we notched the input frequency spectral energy around the normal modes so we don't overdrive the thing during vibe testing. And then we shake it with the launch environment, a white-noise spectrum, still modestly notched around the normal mode frequencies (which might have needed slight readjustments from the sine sweep results).

By the way, at GSFC, they have like a 10 foot diameter gramophone to just blast shit with. I'd guess it was for Saturn V's, hahah, but I don't know! Didn't get the story. (edit: ohhhhh, I think it might've been for cleaning, especially considering that it was being kept in one of the anterooms bordering a clean room. They must be using the thing to knock any loose particles off of equipment or instruments with sound. We did the same thing with an ultrasonic bath after de-greasing parts with trichloride, before the final isopropyl wipe down. They'd soundblast it after that. Probably a pretty clean room.)
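That notching step might look something like this - all numbers hypothetical, since the real drive levels and mode frequencies come from the test spec and the FEA:

```python
import numpy as np

def notched_drive(freqs, base_psd, modes, notch_db=12.0, frac_bw=0.1):
    """Flat random-vibe drive PSD (g^2/Hz), notched around each predicted
    normal mode so the shaker doesn't overdrive the instrument at resonance.
    frac_bw is the fractional bandwidth of each notch around its mode."""
    psd = np.full_like(freqs, base_psd, dtype=float)
    for f0 in modes:
        psd[np.abs(freqs - f0) < frac_bw * f0] *= 10.0 ** (-notch_db / 10.0)
    return psd

# Hypothetical numbers: 0.04 g^2/Hz drive, FEA-predicted modes at 120 and 800 Hz.
freqs = np.linspace(1.0, 2000.0, 4000)
drive = notched_drive(freqs, 0.04, [120.0, 800.0])
```

Same move as ringing out a PA, just in reverse: instead of filtering the room's resonances out of the mix, you filter the structure's resonances out of what you feed it.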

    What you get is uncanny valley nightmare fuel

Which has its uses, heh, though perhaps mostly uncommercializable.

    I also pointed out that if you ask an LLM "give me the stochastic mean of this vector through a set of points" you are using the LLM as it was intended to be used - it will give you the mediocrity every time and, because it's basically a hyperadvanced Magic 8 Ball every now and then it will be brilliant.

Absolutely agree. The LLM is navigating topological features inside a parameter space. With boundaries, and curvature, yeah. It's what I'm doing for the magnetosphere, actually. Same kind of idea. Except with, I dunno, maybe a billion axes instead of the four I use. But yeah, sometimes if you move just a little bit in the parameter space from where you started last time, or you start off in a slightly different direction, the topology might map to some drastically different places. Occasionally they will conjoin into beauty. AISI: artificial idiot savant intelligence.

Hadn't heard any AI tunes yet, and figured there was good reason for it. I don't go looking for them, and a really good one would have found its way to me by now if it existed.

    ...people who don't need AI...

We don't, agreed. I only want it for selfish reasons. And I only want it if I can feel assured it isn't going to cripple society. So I don't want it. Nvm.

Feels like we're all getting a better handle on the level of complexity to expect though. It'll change. Hopefully not too fast, this has apparently been jarring enough for the world already, but AGI in two years? I just don't think so, and I'm 100% sure that ASI isn't only three years out.

kleinbl00  ·  264 days ago  ·  link  ·  

    My co-worker would bolt a plasma spectrometer with accelerometers on it to a vibration table with some special isolators between the instrument and mounting baseplate,

that sounds so fucking awesome

    and we'd shake them with a sine sweep survey starting from like 1 Hz up through, I dunno, 40 kHz or something like that, and a power spectral density profile was input to govern the amplitude around each frequency. JUST like what you're doing with mics? We do it too.

Well what you're doing is ringing out the frequency response, right? You're trying to find constructive modes that are going to fuck you over while strapped in a rocket. You do that with an equalizer if it's sound, or filters if it's an electromechanical system. I've linked this before; the eldritch magic starts at 3:35 in the video.
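As for doing it with filters: here's a toy sketch of notching out one problem frequency with a single biquad. The coefficients follow the standard RBJ audio-EQ-cookbook notch; the 120 Hz "resonance", the Q, and the sample rate are made-up numbers:

```python
import math

def biquad_notch(x, f0, q, fs):
    """Run a signal through an RBJ-cookbook notch (band-reject) biquad.

    x: samples; f0: notch center (Hz); q: quality factor; fs: sample rate (Hz).
    """
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cw = math.cos(w0)
    a0 = 1 + alpha
    b0, b1, b2 = 1 / a0, -2 * cw / a0, 1 / a0  # zeros pinned on the unit circle at f0
    a1, a2 = -2 * cw / a0, (1 - alpha) / a0
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:  # direct-form I difference equation
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y

fs = 1000.0
# A 120 Hz "problem resonance" riding on a 30 Hz tone we want to keep
x = [math.sin(2 * math.pi * 120 * n / fs) + 0.5 * math.sin(2 * math.pi * 30 * n / fs)
     for n in range(2000)]
y = biquad_notch(x, f0=120.0, q=5.0, fs=fs)
```

The zeros sit exactly on the unit circle at f0, so the tone at the notch center nulls out in steady state while the 30 Hz tone passes nearly untouched — the same move the shaker-table notching makes mechanically.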

    We'd already calculated the approximate normal modes of the instrument from 3D CAD models (we used Ansys)

For the record the last time I used ANSYS it was a command-line program that ran on a DEC Alpha.

    By the way, at GSFC, they have like a 10 foot diameter gramophone to just blast shit with.

that sounds so fucking awesome

    Which has its uses, heh, though perhaps mostly uncommercializable.

You are grossly underestimating the ease with which bad mixes can be produced.

    Hadn't heard any AI tunes yet, and figured there was good reason for it. I don't go looking for them, and a really good one would have found its way to me by now if it existed.

The computer music cats have been doing "generative music" for a long time. It's easy as shit and doesn't require an LLM. Most of them are some form of neural network somewhere; "random ambient generator" has been an off-the-shelf product category for 20 years. Here's a free plugin for Kontakt.

Here's a walk-through for Ableton.
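To the point that it doesn't take an LLM: the core of a "random ambient generator" can be as dumb as a first-order Markov chain over a scale. The pentatonic scale and transition table below are invented for illustration; the off-the-shelf tools are fancier versions of the same trick:

```python
import random

# Toy generative phrase-maker: a first-order Markov chain over a pentatonic scale.
# The scale and the transition table are made up for illustration.
SCALE = ["C", "D", "E", "G", "A"]
TRANSITIONS = {
    "C": ["D", "E", "G", "A"],
    "D": ["C", "E"],
    "E": ["D", "G"],
    "G": ["E", "A", "C"],
    "A": ["G", "C"],
}

def ambient_phrase(length=16, start="C", seed=None):
    """Walk the chain: each note is drawn from the allowed followers of the last."""
    rng = random.Random(seed)
    note, phrase = start, [start]
    for _ in range(length - 1):
        note = rng.choice(TRANSITIONS[note])
        phrase.append(note)
    return phrase

phrase = ambient_phrase(length=16, start="C", seed=42)
```

Seed it and the phrase is reproducible; reseed it and you get endless variations, which is most of what "generative ambient" promises.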

am_Unition  ·  264 days ago  ·  link  ·  

    Well what you're doing is ringing out the frequency response, right?

Absolutely. The normal modes. As it goes, first is the worst, second is the best, third is the one with the treasure chest. Sometimes it's "hairy chest", depends on the elementary school.

When people use generative stuff in music well, it's noted. One of the most ridiculous arpeggio parts ever was made with Omnisphere's arpeggiator and then meticulously adapted for guitar. Probably took a little bit of practice (the rest of my life, in my case).

Devac  ·  264 days ago  ·  link  ·  

    Correct me if I'm wrong

Dunno, probably not, but I think you could instantiate one that can when they can, and freeze its learned ability, so the whole 'hoping it doesn't forget' problem might go away.

But I have no idea. Don't write that much code or work with raw data these days, so bibliographic aid is just about all it can do for me in an hour of need. Otherwise, it's about as tangential to my goings-on as it can get.

When I tried that 'explain paper' site, it left enough of a distaste that I rolled my eyes and moved past. Between it absolutely fucking insisting that some unrelated mathematical concept[0] was absolutely crucial to explaining my question, and rephrasing a circular argument until I got bored and left, I probably won't bother again for quite a while.

Unfortunately, the above experience means I'm unlikely to trust LLMs with stuff I don't know a lot about. Also, I kinda regret writing anything in this thread and will probably just add more tags to my ignored list. Fun company notwithstanding - too much hassle, too few fucks left.

[0] - I wrote and deleted a 900-word footnote of jargon about orbits of the coadjoint representation groups and operators in de Sitter space, so let's pretend I said Tits index and wiggled my eyebrows in an amusing way.

am_Unition  ·  264 days ago  ·  link  ·  

    ... I'm unlikely to trust LLMs with stuff I don't know a lot about.

That is the only way to fly, in my opinion, and we haven't discussed this much (edit: well nah we kinda have), but people aren't going to use it like that, obviously.

Don't blame you for any filterings. I kinda like livening up this place. It's LLM season on hubski, baby. But one last quick story! I'm a couple miles from home standing in line to order a burger (probably in flip flops again) and a guy gets in the to-go line. Says "Order for so-and-so", and the cashier checks the order tickets. Nothin'. He says "I called such and such number". She refers to some post-its behind her, and sees that it's the other branch across town that he called and ordered from. He then says "watch", pulls up his phone, and goes "Siri, call Restaurant X on Street Y" (where we are), and it was replicable, it dialed the other branch again. He goes "so it's not my fault. I should get some food for free, I already paid". And I think he did. And he cut everyone in line. I wasn't in a hurry, it was nice to have front row seats for such a prescient demonstration.

It's gonna be a fun time.

Devac  ·  264 days ago  ·  link  ·  

    I wasn't in a hurry, it was nice to have front row seats for such a prescient demonstration.

When every foodhole in Warsaw connected with a delivery service overnight, outgoing orders had much, much higher priority. So, during the pandemic, you had a crowd of deliverers, a normal line that moved at a snail's pace, and a nearby crowd of people who placed their orders in an app to game the system. This led to a situation where people from the last group placed an order to <restaurant's address> and added comments like "I'm the one wearing a brown hat with a gigantic pompom" or "I'm already behind you."

Insert something about the follies of idiots with access to technology. I don't know, I barely slept since Friday.

am_Unition  ·  264 days ago  ·  link  ·  

Yeah. Gonna be a lot of LLM Florida man stories.

    barely slept since Friday

Same. But I do like checking back in here when I hit a roadblock at work. It's synergistic.

Good luck with your coming week. Mine's gonna be crunch time, but I think I'm almost ready. Peaceeeee

am_Unition  ·  267 days ago  ·  link  ·  

I could honestly benefit from an involved re-visiting of philosophy, but it doesn't really feel terribly necessary, all things considered, at the moment. This is my field's IFLS. Except not, because it's just flat out wrong, as opposed to a flavorful interpretation of quantum mechanics.

No worries, fam is good. I'll periodically re-enter an "oh shit, it's fascism!" check-in phase, but I try to keep it Stoic more of the time, these days. Like, I'm not chanting the serenity prayer, just wishing for the same thing more often than I used to.

    They're about as informal as flip-flops anyway.

OH, that reminds me: For your consideration, I'd like to submit the most American thing ever done, possibly. I wore my flip-flops to the McDonald's in downtown Bern, Switzerland, while unknowingly incubating covid that I'd gotten on the plane ride. A homeless woman outside goes "PLUGH, l'Américaine!", and all I could do was think to myself "I know, right?".

kleinbl00  ·  267 days ago  ·  link  ·  

We used "machine learning" at an undergrad level in 1998.