If so, it kind of fails at it. Flight is well defined; intelligence isn't. Nobody doubts that an airplane flies, but lots of people—including myself—would never call AlphaGo an intelligent machine.
A decade or even a year before AlphaGo, most people would have said an AI capable of playing Go at superhuman levels was intelligent.
Intelligence is a mystery. As soon as we can build a rational system that demonstrates a particular kind of intelligence, the mystery and the perception of intelligence are lost.
I disagree on a philosophical level. Intelligence has always been—and always will be—a moving target. As soon as we discover something that we previously thought was unique to humans, we redefine what intelligence is to exclude that trait.
Using tools used to be a sign of intelligence, until we discovered that other animals (even insects) do that. Then the bar became making tools, until Jane Goodall showed us that other apes do that too. Until AlphaGo, we kept the bar at particular games, always upping the size of the decision tree. And now it looks like we are in a crisis, left with just this vaguely defined "general intelligence", a term with quite a racist history whose meaning nobody actually knows.
I am personally of the opinion that we need to ditch the word "intelligence" in science and technology (it can remain in philosophy). It has all the same problems as a grand unifying theory in cosmology, and a whole host more (see Stephen Jay Gould for those).
Calling it a mystery is optimistic; I would rather call it a misdirection.
That's because we use the question "Who can solve this task?" to estimate difficulty, and we assume a priori that computer-solvable tasks don't require much intelligence.
Per Moravec, it has tended to be the opposite. We assume that excellence at chess (a game machines can play by lookup table) is a sign of extreme intelligence, because people aren't particularly good calculators and most of them haven't memorised many openings; yet walking around, interacting with an environment, and forming goals as a result is something any dumb animal can do.
> A decade or even a year before AlphaGo, most people would have said an AI capable of playing Go at superhuman levels was intelligent.
Citation needed. We had machines playing other games at better-than-grandmaster level by that point. People pointed to AI's struggles with another perfect-information game with very simple rules, one it couldn't brute-force by parsing every permutation, as an indication of how limited AI was, not as evidence that true intelligence is pruning search trees.
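A rough back-of-the-envelope comparison shows why brute force was never on the table for Go. The branching factors and game lengths below are the commonly cited approximations (assumptions for illustration, not exact figures):

```python
from math import log10

# Commonly cited rough figures (assumptions, not exact values):
# chess: ~35 legal moves per position, games of ~80 plies
# Go:    ~250 legal moves per position, games of ~150 plies
games = {"chess": (35, 80), "go": (250, 150)}

for name, (branching, depth) in games.items():
    # game-tree size ~ branching ** depth; report it as a power of ten
    digits = depth * log10(branching)
    print(f"{name}: ~10^{digits:.0f} possible game sequences")
# → chess: ~10^124 possible game sequences
# → go: ~10^360 possible game sequences
```

On these estimates Go's game tree is hundreds of orders of magnitude larger than chess's, which is why exhaustive search was a non-starter and heavier pruning or learned evaluation was needed.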