Sure, we could quibble about that: it sounds like you're arguing "GPT and similar models will never be capable of the improvisation necessary to adequately replace human drivers." I think that's quite possibly in "iron law" territory. Their approach to intelligence is, roughly, to interpolate between inputs within a known data space, while improvisation depends on interpreting an unknown data space (toy sketch of that distinction at the end of this comment).

On the other hand, I don't think "self-driving cars are forever impossible," because GPT-style LLMs are not the only possible approach to AGI. On the contrary, I suspect we're starting to see a realization that LLMs and GPT are useful in specific niches of problem-solving, and if those tools are restricted to those niches, everyone benefits. That will hopefully free up some research time and capital to attack the problems where GPT/LLMs fall down.

I'll say this: I haven't seen any approach that I would trust to take the wheel sans a whole bunch of nerfing, and I don't know how you'd get there from here. But I don't think that's "iron law" territory, just "no idea how we'd do that" territory.
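To make the interpolation point concrete, here's a toy sketch. This is entirely my own illustration, not a claim about GPT internals: I'm using a polynomial fit as a stand-in for any model that interpolates a known data space. It tracks the target well inside its training range and falls apart outside it.

```python
# Toy illustration: a model that interpolates well inside its training
# range can still fail badly outside it.
import numpy as np

rng = np.random.default_rng(0)

# "Known data space": samples of sin(x) on [0, 2*pi]
x_train = rng.uniform(0, 2 * np.pi, 200)
y_train = np.sin(x_train)

# Fit a degree-9 polynomial, a stand-in for any interpolating model.
coeffs = np.polyfit(x_train, y_train, deg=9)
model = np.poly1d(coeffs)

x_in = np.linspace(0, 2 * np.pi, 100)           # inside the known range
x_out = np.linspace(2 * np.pi, 4 * np.pi, 100)  # the "unknown data space"

err_in = np.max(np.abs(model(x_in) - np.sin(x_in)))
err_out = np.max(np.abs(model(x_out) - np.sin(x_out)))

print(f"max error inside training range:  {err_in:.4f}")   # small
print(f"max error outside training range: {err_out:.1f}")  # huge
```

Real systems are obviously far more capable than a polynomial fit, but the failure mode, confident outputs in a region the training data never covered, is exactly what I'd worry about behind the wheel.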