I think most of the idea is crazy. First of all, the only outcome of us being too frightened to build the AI ourselves, or of hobbling it to the point it doesn’t work, is that other, much worse actors will not only get there first but will have an AI far less restrained than whatever AI you’re scared of. That is a much worse outcome. The military is absolutely building AI; so are Russia, China, Iran, and so on. BlackRock is probably working on one. Guess what? Absolutely none of those groups gives the smallest amount of attention to the idea that an AI might make a decision that harms people. So the AI race is currently far more likely to be won by people with no concern for an AI’s moral compass than by the AI doomers who pride themselves on being cautious. And most of the fears seem to come from movies and TV shows, not from anything these systems do or have done. We have miles of film of shitty 1980s and 1990s movies that decry AI as doomsday science, but “it happened in the Terminator movies” doesn’t even rise to the level of a real argument. It’s no more realistic than being afraid of space exploration because there might be Klingons out there. If humanity wants to stagnate at 2010 levels of technology, fine, but at the very least that choice should be based on observation rather than stupid movies. If these kinds of people had been listened to in 1600, we’d never have settled the New World. If we’d listened in 1900, we would not have electricity in our homes. I’m on board with maybe not letting “kill all humans” be a life goal. But I think technophobic people are going to do far more long-term harm than AI ever could. AI can already detect cancers better than humans.