AI is mathematics and programming. There are quite a lot of people who wish we'd given machine learning the more descriptive, if boring, name "computational statistics." But programming is unlike math in that we don't get to have foundations, and so we reach for metaphors. Back in the 50s, people thought of compilers as an AI application, because they didn't have a theory of compilers, and taking a program description in a high-level language and producing an executable program looked a lot like handing a programmer a specification to implement. Needless to say, compilers can be smart, but they aren't intelligent. Still, thinking of programs in terms of cognition is useful, because the metaphor can guide us to solutions for problems we only know how to state in terms of what a person does. It's part of the fun, and part of the weakness, of computing that most of our good ideas come from daydreaming. That's all well and good as long as we don't forget the distinction between what we're imagining and the actual technology. Artificial Intelligence is programming, not the Great Work.
Is there another capability that you would watch out for as a more significant sign?
That isn't a technological question. Yudkowsky is a great popularizer of decision theory, but when he, and every other transhumanist, starts predicting the future and giving you apocalyptic and transcendent pictures of where AI is going, they've taken off their technologist caps and are playing prophets and alchemists, and they don't even have the awesome illustrations. Let me riff on your last question:
Suppose BetaGo maintains a flock of complex Go-playing programs generated by genetic algorithms, and uses the best ones to beat human champions. Perhaps no one could explain how the winning algorithms work. Would that be meaningfully different from what we call intelligence?
Suppose GammaGo is as complicated as it needs to be, but as easily comprehensible as the textbook alpha-beta-pruning tic-tac-toe program. Is that meaningfully different from what we call intelligence? Of course. It's a program that plays Go, and that's it. So is it incomprehensibility that makes the Go-playing program you're imagining look intelligent? As Wittgenstein said, there are no surprises in logic: either the program you're picturing is comprehensible, because it is a program and thus an application of logic, or it's science fiction. It is easy to drift into science fiction when thinking about AI, especially if, like Yudkowsky, you do much more daydreaming about AI than writing programs.
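For concreteness, here is roughly what that "textbook" program looks like: a minimal minimax player with alpha-beta pruning for tic-tac-toe, sketched in Python. The board encoding and the names (`alphabeta`, `best_move`, `WIN_LINES`) are my own illustration, not from any particular textbook; the point is that the whole thing fits on one page and has no surprises in it.

```python
# The eight winning lines of a 3x3 board, indexed 0..8 row by row.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]


def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None


def alphabeta(board, player, alpha, beta):
    """Minimax value of `board` with `player` to move:
    +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    for i in range(9):
        if board[i] != ' ':
            continue
        board[i] = player                      # try the move...
        value = alphabeta(board, opponent, alpha, beta)
        board[i] = ' '                         # ...and undo it
        if player == 'X':
            alpha = max(alpha, value)
        else:
            beta = min(beta, value)
        if alpha >= beta:
            break  # prune: the opponent will never let play reach here
    return alpha if player == 'X' else beta


def best_move(board, player):
    """Index of the move with the best minimax value for `player`."""
    best, best_val = None, None
    for i in range(9):
        if board[i] != ' ':
            continue
        board[i] = player
        value = alphabeta(board, 'O' if player == 'X' else 'X', -2, 2)
        board[i] = ' '
        if best is None or (player == 'X' and value > best_val) \
                or (player == 'O' and value < best_val):
            best, best_val = i, value
    return best
```

Run on an empty board it confirms the familiar fact that perfect play is a draw, and given a position with two in a row it takes the win. The pruning is the only "clever" part, and even it is just bookkeeping about lines of play the opponent would never permit.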