The title of this article did its job in piquing my interest. Though I feel the author didn't go into as much detail as I would have liked, the central point is still interesting.
- I suspect that this basic imperative of bodily survival in an uncertain world is the basis of the flexibility and power of human intelligence. But few AI researchers have really embraced the implications of these insights. The motivating drive of most AI algorithms is to infer patterns from vast sets of training data – so it might require millions or even billions of individual cat photos to gain a high degree of accuracy in recognising cats. By contrast, thanks to our needs as an organism, human beings carry with them extraordinarily rich models of the body in its broader environment. We draw on experiences and expectations to predict likely outcomes from a relatively small number of observed samples. So when a human thinks about a cat, she can probably picture the way it moves, hear the sound of purring, feel the impending scratch from an unsheathed claw. She has a rich store of sensory information at her disposal to understand the idea of a ‘cat’, and other related concepts that might help her interact with such a creature.
If the "humans think with their whole body" argument is correct, then is there any way for an AI to attain the human model of intelligence other than building an autonomous humanoid AI robot? One that is essentially a human replica, apart from the fact that it would be mechanical and digital instead of biological.
Uh... The premise is that human intelligence evolved as a biological imperative for survival. That much I can get behind. The article also says that modern AIs work in two dimensions (pictures of cats, text), while humans perceive things far more richly. Fair enough, but I fail to see how that diminishes the capabilities an AI system could have. We already have scanners that sniff out bomb material, and we understand how molecules interact with the receptors in our nostrils: build an artificial olfactory sense off that! Other senses might be much easier, much harder, or completely unnecessary to build, but none of this is something that can't be achieved.

An AI is not to be built like a human being, because it will never be a human being. It will never have the same biological imperatives, because it's artificial. It doesn't need to live like we do, and it doesn't require the same biological impulses to drive it. It is, by its very definition, not human. We're not the most efficient biological structure, either. There are animals and simpler creatures that do some things better than we ever will. Just look at a dog's acute sense of smell, or at some sea creatures that are effectively immortal. We're damn effective, but the human body has nothing on the things we can build.

It's not that an AI can't have imperatives of its own, either. People have barely dabbled with that yet, for good reason, but nothing stops us from building an AI with a purpose, simple as it may be for now. That's not the point, though. The article's rather poorly-presented idea is in its last sentence:

- <..> we have little hope of achieving this goal unless we think carefully about how to give algorithms some kind of long-term, embodied relationship with their environment.

This is the point: we have to give the system context. An AI has to learn from real-world circumstances, not lab conditions. You can't build immunity while living in a bubble. But who dares unleash a freshly-built AI into the real world?
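For what it's worth, here is a rough sketch of what "giving the system an environment" looks like in today's tooling, using the gymnasium library and its CartPole-v1 task purely as an illustration (my choice of example, not something the article mentions). It mostly shows the gap: the "world" the agent experiences is a tiny canned simulation, nowhere near the long-term, embodied relationship with a real environment that the article is asking for.

```python
# Minimal agent-environment loop: the standard "embodiment" offered by RL toolkits.
# Assumes the gymnasium package is installed; the random policy is just a placeholder.
import gymnasium as gym

env = gym.make("CartPole-v1")          # a toy physics simulation, not the real world
obs, info = env.reset(seed=0)          # the agent's entire "sensory" input is a small vector

for step in range(1_000):
    action = env.action_space.sample() # placeholder policy: act at random
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:        # the agent's "life" ends and is simply reset
        obs, info = env.reset()

env.close()
```

Everything the agent ever "knows" is whatever the simulator chooses to expose, which is exactly the lab-conditions problem.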
I do think there is some potential for cross-purpose confusion here. A commentator on the article makes this point:

- So when we have got a chess playing program that can beat the best human being, we have made some advance in the first project. But we know that the way the program has been set up to analyse the chess game is completely unlike the way that people approach the problem. We have made use of the strengths of the machine, we haven't emulated a brain.

This is why I was careful to include the phrase 'human model of intelligence' in my OP. I understand that as a species we are by no means the ultimate biological entity. But, for example, there are people out there trying to design AIs that create music: specifically, music that is pleasurable to us as humans and that can pass as humanly made. Sure, you could hypothetically build an AI with all sorts of qualities and senses that far surpass ours, but the resulting music would be far from human-like. It could even be in a frequency range we can't hear.

That said, I agree that for the more practical applications of AI (i.e. a system that can do things better than us), making it human-like isn't really the end goal, or even useful. Rather, like you said, it's about giving the system a way to contextualise itself in its environment, however that may be done. I also agree that the article itself is lacklustre; I posted it mainly for the titular concept rather than its contents.
We seem to have two completely dissimilar styles of project that often come under the artificial intelligence heading: one is to get machines and software to do useful, or perhaps challenging, or perhaps interesting things, whereas the other is to understand how humans do what we do in the way of thinking.