“But is it intelligence?”
The rise and spread of AI brings along endless discussions of what constitutes intelligence. A debate which, even if we limit ourselves to computer science, goes back not just to Weizenbaum’s Eliza, but to Turing and von Neumann.
It is also a debate that technical people should not abandon to others. Scientists and engineers are often wary of philosophizing, preferring problem-solving and action. The risk, however, is to relinquish the discussion to people who do not necessarily understand the technology. Current discussions of AI put this phenomenon on display. A typical example is the definition of intelligence. As I will discuss in this post, in the hope of helping to clarify and focus current debates, the source of many disagreements is that people rely, often implicitly, on two radically different notions of “intelligence.”
“It only appears to understand”
A recent example illustrates one of the two views. In mid-December, the French National Assembly held hearings on artificial intelligence and invited a philosopher of science, Olivier Rey. (A video on the official site shows a grand total of four attendees, suggesting that French MPs do not find AI a topic worth their time.) Throughout his speech, Rey explains that “artificial intelligence is not intelligence” because the program “does not understand.” I find this kind of assertion fascinating. I am not saying it is wrong; just fascinating.
What does it really mean? I can, with some assurance, say that I “understand” the basics of linear algebra. Even so, if you ask me to find the eigenvalues of a medium-size matrix I might occasionally make a mistake, and if you ask me to prove one of the theorems in the field, I might occasionally get stuck. If you ask an LLM the same questions, it will get the answers right much of the time, but may also occasionally mess up (“hallucinate”). What enables me to say that a human like me (or for that matter, Mr. Rey, who I see from his LinkedIn page shares an alma mater with me, a school that you cannot enter if you do not know about eigenvalues) is more or less intelligent than Claude or Gemini? I actually suspect that the LLM will get answers right more often than I do, but that hunch is irrelevant: both the LLM and I will get many answers right and some wrong.
I can hear many possible retorts, but they are all of the same tenor: the AI tool only appears to understand; my case is different, I really understand. It does not matter that I make mistakes once in a while, they are only superficial mistakes, whereas the tool’s hallucinations show that it has no clue whatsoever. These are all beautiful arguments—and worthless. Worthless because they do not satisfy the basic criterion of scientific arguments: they are not falsifiable. Falsifiability would mean that we can construct a reliable experiment to test that a human, or tool, does not just apply a theory but somehow “understands” it—or not. It is hard to fathom what such an experiment would look like. Experiments devised so far can only measure outcomes. Think of the Turing Test, or Searle’s Chinese Room argument. Both competent humans and today’s state-of-the-art AI tools will pass them. Throw in enough complexity and tools often fare better than humans. Does it mean they are more intelligent? Do they understand less? Might they possibly (horribile dictu!) understand more?
We are back to the unanswered question: what does it mean that a human or a tool “understands” a question or concept?
American versus European views
I believe that much of the debate is due to clashing understandings of what intelligence is about.
The clash reminds me of the shock I experienced when, as a student at Stanford, I first came to the legendary AI lab, SAIL, then at its zenith with such luminaries as John McCarthy (the founder), Art Samuel, Zohar Manna, and Terry Winograd. “Intelligence” was on everyone’s lips and I vividly remember discovering that the widely accepted working definition was “the ability to adapt to new situations, and learn from experience.” That view was scandalous to me, coming from a European intellectual perspective. These Americans, I thought, are so utilitarian, prosaic, earth-bound, pedestrian, mercenary! Surely there has to be something deeper to intelligence than knowing how to react to circumstances: you have to understand the situation. I had studied Latin and knew that etymology was on my side: intelligo means “I understand.”
As I soon found out, the issue was not just with me, but reflected a difference between continental European and Anglo-Saxon views. The Larousse definition, for example, starts with “the set of mental functions whose goal is conceptual and rational knowledge.” Hence the basic schism between those who consider intelligence as the ability to understand (like me back then, and Mr. Rey today) and those for whom it is the ability to cope.
The European view rests on a long and fascinating tradition of explaining things and, as a result, sounding very smart. The French in particular have made a specialty of writing the definitive account of a country, explaining it to the world in general and in particular to the country’s own gobsmacked natives, on the basis of one glorious in-and-out trip. Tocqueville is the most famous example, but there is also Roland Barthes on Japan. Not French and not harmless, we have Marx and Freud who respectively “understood” all about human history and human psychology and explained it to us. It is really petty to point out that these theories had zero success in predicting future outcomes. Or that in the first case, their main result was to destroy countries and civilizations and lead to the death of millions of people. Who is to quibble about such minor outliers when these theories make us “understand” by “explaining” so intelligently!
Serious scientific theories do explain, too, and make us understand complex things. The difference is that they predict correctly, and are falsifiable. The way relativity made us understand the basics of time and space was not just to present convincing ideas, but to predict that, under the circumstances of a certain eclipse, at a certain place, light would bend not by 0.87 arcseconds, as Newton would have had it, but by twice as much. Had the measurement been different, Eddington would have disproved the theory.
The difference between the two concepts of intelligence—ability to understand, versus ability to act successfully—is also the difference between deductive approaches, which start from a theory and attempt to verify it through facts, and inductive ones, which start from facts and build up a theory. It is a deep difference, going back far in the history of thought. Among philosophers—simplifying things, as detailed views can be more nuanced—we find, on the conceptual/deductive side, Descartes and Kant, and on the empirical/inductive side such English and American thinkers as Hume, John Stuart Mill, and behaviorists typified by Skinner.
Contrasting the two views
The appeal of the first view (the one that says “I am intelligent because I understand”) comes from its elegance and its propitiousness to making powerful speeches. Its fundamental limitation is the difficulty of validating or falsifying it. On just about any topic, conspiracy theorists (including Marxists and Freudians) also make beautiful speeches. If you and I both have explanations for something, but they are incompatible, how do I convince you that mine is right and yours is wrong?
The appeal of the second view (the one that says “I am intelligent because I can make predictions that turn out right most of the time”) comes from its practicality. But how do we know that what it describes is really intelligence and not just careful record-keeping?
Old-AI, with its expert systems and logical deduction tools, was of the first kind, deductive. The consensus is that it failed. Modern-AI is almost entirely (at least in the current, obviously intermediate state of evolution) of the second kind, inductive. Modern-AI is machine-learning: it builds answers to new queries by extrapolating from a large body of validated answers to previous queries. In its flagship areas of application, the new answers should be right most of the time. Is it intelligence? Is a translation tool that produces human-level translations intelligent? Is a vibe-coding tool more intelligent than the programmer who uses it, or less? Is a medical-image analysis tool that produces fewer false negatives and false positives than a Stanford Hospital radiologist more or less intelligent than that doctor? For that matter, are non-AI programs such as a compiler intelligent (after all, no human would be able to compile a 100,000-line program in any reasonable time and with any serious likelihood of correctness)?
With recent advances in AI, it becomes ever harder for proponents of intelligence-as-understanding to continue asserting that those tools have no clue and “just” perform statistical next-token prediction. After all (borrowing a set of examples from Kian Katanforoosh in a Stanford lecture), today’s deep-learning systems can complete sentences such as: “I poured myself a cup of . . .” (how is that not “understanding” co-occurrence patterns?); “The capital of France is . . .” (how is that not understanding geographical connections?); “She unlocked her phone using her . . .” (how is that not understanding semantic connections?); “The cat chased the . . .” (multiple connections are plausible, so how is that not understanding probability?); “If it is raining, I should bring an . . .” (connecting a condition with its consequences, so how is that not understanding inference?).
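To make the “just statistical prediction” claim concrete, here is a deliberately crude sketch of next-token prediction: a bigram model over a hypothetical six-sentence corpus. (The corpus, the function names, and the whole setup are illustrative inventions; a real LLM learns vastly richer representations than these raw co-occurrence counts.)

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus: the "training data" for our predictor.
corpus = (
    "i poured myself a cup of coffee . "
    "i poured myself a cup of coffee . "
    "the capital of france is paris . "
    "the cat chased the mouse . "
    "the cat chased the ball . "
    "if it is raining i should bring an umbrella ."
).split()

# Count, for each word, which words follow it and how often.
successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

print(predict_next("of"))   # prints "coffee": the most frequent follower of "of"
print(predict_next("the"))  # prints "cat": "the cat" occurs twice, other pairs once
```

Even this caricature “completes sentences” by exploiting regularities in its data; the open question of the debate is whether scaling that principle up, as LLMs do, crosses some line into understanding—and, if so, how we would ever test for it.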
What do people mean, then, when they say “AI is not intelligence because it does not understand what it is talking about?” Since they do not define what “understanding” is, I think that all they mean is “AI does not understand in the same way as I do.” That is a very tenuous argument; not that different from saying “Airplanes do not fly (do not believe your own lying eyes!) because they do not fly in the same way as birds do.”
As you can see, I tend today to think that I was wrong back then and find much to like in the “American” interpretation, empirical and inductive. But my changes of mind (a mind that remains open to more changes in the presence of new arguments and new technology developments) are not the subject of this discussion. What it has illustrated is that, regardless of your personal preference for either of them, there are two fundamentally different concepts of intelligence, and that discussions of the “I” in “AI” are pointless unless they specify which one they use.

Bertrand Meyer is a professor and Provost at the Constructor Institute (Schaffhausen, Switzerland) and chief technology officer of Eiffel Software (Goleta, CA).
Two Concepts of Intelligence
© 2026 Copyright held by the owner/author(s).