am_Unition  ·  265 days ago  ·  post: OpenAI's Sora

Since I'm basically public journaling now, instead of just letting thoughts pass through my head without any reinforcement and then showing up to hubski like "oh, I don't have anything", I'll give an example: what would happen "if I tried to LLM at work".

There's a global model of the magnetosphere and the surrounding solar wind environment that I run through a public website. I query the model for the day and time I want (step 1), wait a few days, then look through the results and do the science (step 2).

For step 1, there's no benefit in having a program input the date and time, plus the few choices I make about which sub-components of the magnetosphere model to use, because the whole thing takes about five minutes. Step 2 is where the real methodology lives: the way I look through the data means taking outputs from the model and feeding them back in as inputs for the next time-step's visualization. I'm tracing magnetic field lines through time/space as the magnetosphere convects (I've automated it with a python webcrawler and some maths to produce a movie).

The idea that I could simply ask an LLM to do this is pretty funny. It's so specialized that I can guarantee it would fail immensely to know wtf I meant when I said "take the results from this model run and show me a movie of magnetospheric convection. I want bundles of magnetic field lines that pass through the reconnection site near satellite XYZ emphasized". I think the amount of additional information I'd need to feed it to even come close is effectively infinite, because it's probably never going to give me something good. More on that below. But let's say that it does. It's the game of "how do I know it's right?" again. I've gotta inspect all of the code it wrote, and I can guarantee it's gonna be an implementation structured way differently than mine. I'm going to put in so much effort checking it that I won't save an iota of time.
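For a flavor of what just one small piece of that looks like, here's a stripped-down sketch of tracing a single field line through one snapshot, using an ideal dipole as a stand-in for the real model output. It's purely illustrative, not my actual pipeline, and every name in it is made up for the example:

```python
# Illustrative only: trace one magnetic field line through a single snapshot,
# using an ideal dipole as a stand-in for real magnetosphere model output.
import numpy as np

def dipole_B(r, m=np.array([0.0, 0.0, -1.0])):
    """Field of an ideal dipole with moment m at position r (arbitrary units)."""
    rmag = np.linalg.norm(r)
    rhat = r / rmag
    return (3.0 * rhat * np.dot(m, rhat) - m) / rmag**3

def trace_field_line(start, field=dipole_B, ds=0.01, max_steps=20000,
                     r_min=1.0, r_max=15.0):
    """Step along the local field direction until the line leaves the domain."""
    points = [np.asarray(start, dtype=float)]
    r = points[0].copy()
    for _ in range(max_steps):
        B = field(r)
        Bmag = np.linalg.norm(B)
        if Bmag == 0.0:
            break
        r = r + ds * B / Bmag           # simple Euler step along the unit field vector
        rad = np.linalg.norm(r)
        if rad < r_min or rad > r_max:  # hit the "planet" or left the box
            break
        points.append(r.copy())
    return np.array(points)

line = trace_field_line(start=[5.0, 0.0, 2.0])
print(f"traced {len(line)} points, ending at radius {np.linalg.norm(line[-1]):.2f}")
```

The real workflow does this for bundles of lines, re-seeded at every time-step of the model run, tied to the spacecraft location, then stitched into movie frames. That's the level of context I'd have to spell out for an LLM, one painful correction at a time.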

OK, so I have my video, one way or the other. I can now look through it and do the actual science, linking it into an analysis of data from that satellite. There is simply no fucking way that any LLM or AGI on the foreseeable horizon could do this. Doing the science means comparing the new m'sphere model outputs to the existing data analysis, linking any new interesting/publishable physics between the two, discussing how this differs from or resembles previous studies, and thinking about how the results can be applied towards the next step. It requires a deep understanding of how this contributes to the field. This is at least approaching ASI territory.

Furthermore, for the science, the LLM or whatever it is has no interest in images. It only cares about model outputs. It would actually have to perform the conjugate of what I have to do: take the images from previous movies of magnetospheric convection and put them into a form suitable for comparison with the magnetosphere model output data. The Whatever would have to know how to transform the data into formats suitable for comparison, and then it would have to have correctly ingested the publishing record to form a pseudo-understanding of everything. I can't imagine the lengths it would have to go to in order to output something like "we can see that if the only difference is a Y-component reversal of the upstream magnetic field in the solar wind, the reconnection site moves southward towards the spacecraft, because the X-line is shifting to accommodate cusp reconnection relocating from the dawn/north and dusk/south quadrants to the dawn/south and dusk/north quadrants, respectively".

Would the Whatever know that it'd be good to run the magnetosphere model I used for the period of time covered in the previous study, which used a completely separate m'sphere model, to factor in differences between the two models that might explain the behavior instead? Does it know that it's important to comment on the distance from the satellite to the reconnection site? Is the data analysis conclusion that the satellite is at a reconnection site actually wrong? Are there shortcomings in the m'sphere model that help explain why its reconnection site differs from where we actually found it?

It's obviously not advisable to expect any of this inside of two or even several decades. Maybe it could build me a movie, but I doubt it. And unless I'm guaranteed that a running instance of my coaching efforts is preserved and always available should I ever get one successful/correct movie out of it, or that any new pseudo-understanding I had to lead it to is properly assimilated into the root system, there's no reason to even begin trying. Correct me if I'm wrong, but that's not something publicly available yet, and I can see massive hurdles to it ever happening. lol, what am I gonna say? "That's right! You finally did it. Now, don't forget how to do this the next time I ask, I don't want to have to spend another seven months filling in the gaps in your understanding of this again"? Hahaha

"Filling in gaps of understanding" deserves a dissection, because it's more general, not just for physics or science, but for anything. The process looks like hell. Because, like we've said, the LLM doesn't know what's "correct", it's not going to ask you any substantive questions. It's going to output what it outputs, and you'll have to look at the outputs, and tell it why it's wrong. Iteratively. Having it fix one thing could break another. It could even infinitely diverge instead of ever converging on the solution you want it to. This all assumes that you know what you're looking for, what "right" means. And then, even if it does get things right, yeah, unless you work at the company that owns the LLM, it's all forgotten when you close the instance.

Job security. Job security for all!