Personally I do think there is an argument to be made that self-driving cars might be in the realm of "this can't work because physics", if we define "physics" in a particular way: this can't work because the physics of compute hardware and AI cannot reach the levels of safety we would demand from an actual, fully self-driving vehicle. Because, let's be honest, it depends on a) Moore's law continuing, and b) either an unattainable amount of training data or some unforeseen leap in ML that lets a model reason outside of its training data. I came across a paper which argues that GPT-2 can't reason beyond its training data. Much like with GPT, the dominant idea behind AVs is that they would either be able to preload everything they could ever possibly encounter, or reason outside of that dataset through some kind of magi- I mean AGI. We know through Waymo/Google that the former is pretty much impossible in the near future, and we know through Tesla and GPT that the latter is also impossible in the near future, especially if what that paper argues is a fundamental limit of all ML models and AGI turns out to be just a rabbit pulled out of a hat.
Sure, we could quibble about that: it sounds like you're arguing "GPT and similar models for intelligence will never be capable of the improvisation necessary to adequately replace human drivers." I think that's quite possibly in "iron law" territory - their approach to intelligence is to interpolate, to find the middle ground between multiple inputs within a known data space. Improvisation, meanwhile, depends on extrapolating into an unknown data space.

On the other hand, I don't think "self-driving cars are forever impossible", because GPT and other approaches to AGI/LLMs/etc. are not the only possible approaches. To the contrary, I suspect we're starting to see a realization that LLMs and GPT are useful in specific niches of problem-solving, and that everyone benefits when those tools are restricted to those niches. That will hopefully free up some research time and capital to attack the problems where GPT/LLMs fall down.

I'll say this: I haven't seen any approach that I would trust to take the wheel sans a whole bunch of nerfing, and I don't know how you would get there from here. But I don't think that's "iron law" territory, just "no idea how we'd do that" territory.
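To make that interpolation/extrapolation distinction concrete, here's a toy sketch. To be clear, this is my own illustration, not anything from an actual AV or GPT stack - the sine function, the degree-9 polynomial, and the ranges are arbitrary choices. Fit a flexible model on samples from a known interval, then ask it about points it has never seen:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Known data space": 200 noisy samples of a smooth function on [0, 2*pi]
x_train = rng.uniform(0.0, 2 * np.pi, 200)
y_train = np.sin(x_train) + rng.normal(0.0, 0.05, x_train.size)

# A flexible curve fit stands in for "finding the middle ground between
# multiple inputs within a known data space".
model = np.poly1d(np.polyfit(x_train, y_train, deg=9))

def rmse(x):
    """Root-mean-square error of the model against the true function."""
    return np.sqrt(np.mean((model(x) - np.sin(x)) ** 2))

x_interp = np.linspace(0.5, 5.5, 100)              # inside the training range
x_extrap = np.linspace(3 * np.pi, 4 * np.pi, 100)  # the "unknown data space"

print(f"in-distribution RMSE:     {rmse(x_interp):.3f}")  # small: interpolation works
print(f"out-of-distribution RMSE: {rmse(x_extrap):.1f}")  # huge: "improvisation" fails
```

Inside the training range the fit is nearly perfect; a stone's throw outside it, the polynomial rockets off toward infinity. A bigger model widens the known space, but it doesn't change the asymmetry - which is the same shape as the problem you're describing.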
"No idea how we'd do that" is a good way to condense it. But it just seems like every promising foray into improvisation done by computers ultimately turns out to be a fata morgana or a false positive. What hopes I had before this week have been put to bed now. Put differently, how long must we search (and how many billions do we/Google/rich people spend) before we ask ourselves the hard question whether this is doable at all. That may very well turn out to be a failure of imagination on my part, but personally I feel like the position of "this is, for the forseeable future, iron law" is preferential to "it might work some day (we just don't know how, and have no practically attainable idea of how)".
I have seen four, possibly five forays into artificial intelligence in my lifetime. None of them have panned out in a "three laws of robotics" / cogito, ergo sum sort of way, but by sheer process of elimination we keep getting closer to an answer. That answer might be "never." Researchers keep swinging at the ball, though, which leads me to believe they think there's an outside chance they'll hit it out of the park.

By way of contrast: in college I sat through a two-hour lecture from a couple of guys who had modified collective-pitch helicopters with broadcast transmitters, gyros and servos to put a stereoscopic VHS-grade turret on an aerial platform. Their build cost was around $20k, their bird was an easy 50 lbs of nitromethane-powered terror, and it took a tremendous amount of skill split between two operators to fly. I had Costco pizza for lunch? And now they've got FPV drones for $50 that will fit in your shirt pocket.

There are some problems that can be brute-forced through existing technology and some problems that can't. My personal sense of the state of self-driving cars is that our technology will totally drive a garbage truck without much drama, but getting you to work via the freeway is never going to be safe enough. Is it a leverage problem or a breakthrough problem? I don't know enough to have an opinion... but I know enough to think GM is being pretty shady about it.