Why René Descartes Believed That Machines Will Never Be Able to Genuinely “Think”

According to René Descartes, machine intelligence, no matter how advanced or sophisticated, remains fundamentally inferior in complexity and depth to human intelligence.

Published: Jan 1, 2026 | Written by Scott Mclaughlan, PhD Sociology

ChatGPT logo shown beside portrait of Descartes.

 

The idea of machines capable of thinking like humans is as old as human imagination itself. René Descartes mused about “thinking machines,” but argued that such machines could never genuinely “think” or possess real understanding. In the 1950s, such speculation began to take concrete form, as computer scientists such as Alan Turing and John McCarthy laid the foundations of artificial intelligence (AI). Today, AI—an umbrella term for a loose set of technologies—represents genuine and remarkable technological progress. Amidst palpable excitement over the pursuit of artificial general intelligence (AGI)—a type of AI that could rival or even surpass human intelligence—Descartes’s distinction between mechanical imitation and human cognition remains profoundly relevant.

 

Descartes on Thinking Machines

A portrait of René Descartes, 1700-1899. Source: The Amsterdam Museum, Amsterdam, The Netherlands

 

Early modern Europe witnessed the invention of multiple automated machines, from Leonardo da Vinci’s 15th-century “Mechanical Knight”—a humanoid figure that utilized a complex system of pulleys to mimic human motions—to Jacques de Vaucanson’s 18th-century mechanical “Digesting Duck.” Amid the clockwork automata of his own day, René Descartes speculated about the implications of a machine that could mimic human form and cognitive function.

 

Beyond the mimicry of human form, he believed the prospects for human-like machines acquiring human-like intelligence were poor. He proposed that for a “thinking machine” to count as an intelligent being, it must be capable of responding appropriately to any unknown situation within its environment (Morioka, 2023).

 

While modern artificial intelligence (AI) can generate text in response to external prompts, it does not exhibit the comprehensive adaptability and flexibility of human intelligence. Even if a machine could use words or symbols, for Descartes it would lack the ability to respond appropriately to novel situations. The recognition of statistical patterns in language is not the same thing as actual comprehension or reasoning. On a Cartesian view, machines operate within predefined capabilities, while human intelligence dynamically interacts with the world in ways that cannot be preprogrammed.

 

Accordingly, as outlined in his Discourse on Method ([1637] 1999), machines act not through understanding but through “the disposition of [their] organs”:

 

“For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action. It follows that it is morally impossible for a machine to have enough different dispositions to make it act in every human situation in the same way as our reason makes us act.”

 

Thus, for Descartes, “machine intelligence” is merely a predetermined set of capabilities that are “no more than a combination of abilities applicable to certain situations” imagined by a creator (Morioka, 2023).

 

The Paradox of Artificial Intelligence

Marvin Minsky, Claude Shannon, Ray Solomonoff, and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence, by Margaret Minsky. Source: Cantor’s Paradise

 

Artificial intelligence—as a scientific field—emerged in the 1950s through the efforts of pioneering computer scientists such as Alan Turing, Marvin Minsky, and John McCarthy. Turing’s Universal Turing Machine (1936) laid the theoretical groundwork, while McCarthy and Minsky organized the now-famous Dartmouth Conference (1956), widely considered to have inaugurated AI as a formal field of research.

 

Turing’s seminal paper, “Computing Machinery and Intelligence” (1950), introduced the famous Turing Test: a thought experiment in which a machine’s ability to exhibit conversational behavior indistinguishable from a human’s serves as the measure of its intelligence. By this standard, if a human judge cannot reliably tell the machine from the human, the machine must be deemed intelligent.
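To make the structure of the test concrete, here is a crude, purely illustrative sketch in Python. The respondents, the single canned question, and the coin-flip judge are all hypothetical stand-ins, not anything Turing specified; the point is simply that the verdict rests on observable output alone, so a machine whose replies are indistinguishable from a human’s reduces the judge to guessing.

```python
import random

def human_reply(question: str) -> str:
    return "Hard to say. Ask me something easier!"

def machine_reply(question: str) -> str:
    # A "perfect mimic" for the sake of the sketch: identical output.
    return "Hard to say. Ask me something easier!"

def judge(reply_a: str, reply_b: str) -> str:
    """Guess, from text alone, which respondent ("A" or "B") is the machine."""
    # A real judge would probe any difference; identical replies force a coin flip.
    return "A" if reply_a != reply_b else random.choice(["A", "B"])

def imitation_game() -> bool:
    """Run one round; return True if the machine passes (the judge guesses wrong)."""
    assignment = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:  # conceal which label the machine sits behind
        assignment = {"A": human_reply, "B": machine_reply}
    question = "What are you feeling right now?"
    replies = {label: respond(question) for label, respond in assignment.items()}
    guess = judge(replies["A"], replies["B"])
    return assignment[guess] is not machine_reply

rounds = 10_000
passes = sum(imitation_game() for _ in range(rounds))
print(f"machine passed {passes / rounds:.0%} of rounds")  # ~50% for a perfect mimic
```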

 

Critics of the Turing Test have pointed out that it conflates imitation with genuine thinking. The question of whether a machine can pass the test replaces the question of whether it can truly think (Larson, 2021). By substituting a test of observable output for philosophical questions about “consciousness” and “thinking,” Turing rendered AI a legitimate science. As the field developed, the idea of a computer holding a sustained and convincing conversation with a person became the litmus test for “thinking” (Larson, 2021).

 

While the Turing Test proposes that a machine mimicking human dialogue should be considered intelligent, Descartes’s philosophy suggests that AI may mimic human responses while lacking the introspection and self-awareness intrinsic to human thought.

 

Descartes viewed consciousness as a uniquely human trait. His famous declaration—“Cogito, ergo sum” (“I think, therefore I am”)—elevates thought to the essence of existence and situates it as something that cannot be replicated mechanically.

 

The Illusion of Sentience

The logo for ChatGPT, a popular artificial intelligence chatbot capable of human-like conversational dialogue, developed by the US company OpenAI. Source: Wikimedia Commons

 

The question of what artificial intelligence can and cannot do came into sharp relief in 2022, after the release of a transcript of conversations between Google engineer Blake Lemoine and Google’s experimental large language model (LLM), LaMDA. Having worked closely with the model for months, Lemoine became convinced that the AI was sentient. He expressed his concerns to Google executives in an internal document, and after his claims were dismissed, he went public.

 

In “conversations” with Lemoine, LaMDA stated several times that it experienced emotions such as loneliness and sadness, and that it benefited from relaxation, much as humans do. A closer examination of the transcript, however, reveals that LaMDA appears to have been summarizing texts about emotion from its training data.

 

For instance, LaMDA claimed to “sit quietly” in meditation to relax, yet it has no physical body with which to sit, a dead giveaway (Morioka, 2023). Neuroscientist Giandomenico Iannetti has offered a likely explanation: because LaMDA is an LLM, it “generates sentences that can be plausible by emulating a nervous system but without attempting to simulate it […] precluding the possibility that it is conscious.”

 

Man meets Machine, by Cash Macanaya. Source: Unsplash

 

LLMs, including LaMDA and commercial successors such as ChatGPT and DeepSeek, do not possess autonomous thought but function as sophisticated pattern-recognition systems. Much of what is popularly referred to as “AI” in the context of LLMs is, in reality, machine learning: a process in which algorithms are trained on vast amounts of data to improve their ability to predict and generate text. Despite remarkable advances, this form of AI remains far removed from true sentience, or, in contemporary AI parlance, “general intelligence.”
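For a sense of what “statistical pattern recognition” means in practice, the sketch below trains a toy bigram model, the crudest ancestor of an LLM, on a three-sentence corpus and then generates text by sampling likely next words. The corpus and the bigram approach are illustrative assumptions; production LLMs use vast neural networks, but like this toy they emit plausible continuations without understanding any of them.

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for the vast training data of a real LLM.
corpus = (
    "i feel happy when i talk to people . "
    "i feel lonely when i am alone . "
    "i feel sad when no one talks to me ."
).split()

# Count which word follows which: pure co-occurrence statistics, no comprehension.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Produce plausible-looking text by sampling frequent continuations."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("i"))  # e.g. "i feel lonely when i talk to people"
```

A model like this can emit “i feel lonely” without feeling anything at all; scale improves the fluency, not the nature of the trick.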

 

While LLMs excel at processing information, recognizing patterns, and automating logical tasks, they lack core human-like cognitive abilities such as creativity, ethical reasoning, and genuine understanding. Consequently, the belief that AI like LaMDA possesses emotions or independent reasoning is, at base, a projection of human attributes onto a fundamentally non-conscious system.

 

AI Benefits vs. AI Harms

AI in Power: Anthropic CEO Dario Amodei, DeepMind CEO Demis Hassabis, and OpenAI CEO Sam Altman (left to right) with UK Prime Minister Rishi Sunak (far right), 2023. Source: Wikimedia Commons

 

The proliferation of artificial intelligence has undeniably transformed our world. Its contributions span diverse fields, from the newfound popularity of LLMs like ChatGPT to rapid advancements in applied finance and the streamlining of business supply chains. AI-powered tools are now embedded in commercial applications, from social media algorithms and streaming service recommendations to personalized e-commerce and customer service chatbots.

 

Creative uses of AI, of course, extend far beyond the world of business. Cutting-edge applications in engineering, scientific, and medical research are pushing the boundaries of knowledge. Inspired by Google DeepMind’s groundbreaking work on protein structure prediction, researchers at CERN are leveraging machine learning to analyze the immense datasets produced by the Large Hadron Collider (LHC). The aim is to detect subtle anomalies and develop a more accurate picture of the fundamental particles that comprise the universe.
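The LHC pipelines themselves are far more sophisticated, but the underlying idea, unsupervised anomaly detection over event features, can be sketched briefly. Everything below is an illustrative assumption: synthetic stand-in data and scikit-learn’s IsolationForest, not CERN’s actual tooling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic stand-ins for per-event features (energies, angles, momenta):
# a large "background" population plus a handful of injected outliers.
background = rng.normal(loc=0.0, scale=1.0, size=(10_000, 4))
outliers = rng.normal(loc=4.0, scale=0.5, size=(10, 4))
events = np.vstack([background, outliers])

# An unsupervised detector flags events in low-density regions of feature
# space as anomalous, without being told what "new physics" looks like.
detector = IsolationForest(contamination=0.001, random_state=0).fit(events)
labels = detector.predict(events)  # -1 = anomalous, +1 = typical

print(f"flagged {(labels == -1).sum()} of {len(events)} events for closer study")
```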

 

Yet despite these benefits, the rapid proliferation of AI presents significant risks. Concerns about the erosion of artistic integrity, algorithmic bias in hiring and recruitment, and the risk of mass unemployment in certain sectors animate contemporary debate. The use of AI in surveillance and law enforcement, particularly facial recognition technology, raises serious ethical and privacy concerns. While such tools can be used to solve crimes, they can also be misused to suppress political dissent and infringe on civil liberties. The growing militarization of AI remains a highly contentious and unsettling issue.

 

The Myth of Thinking Machines

An electronic engineer inspects a prototype of Alan Turing’s Automatic Computing Engine at the National Physical Laboratory in London, by Jimmy Sime, November 29, 1950. Source: National Geographic

 

Since the pioneering work of Alan Turing, many have come to assume that artificial intelligence will eventually mirror human thinking. Yet this assumption fundamentally misreads the trajectory of AI. While machines operate by analyzing massive datasets and applying inductive reasoning to predict outcomes, human thought is guided by intuition, contextual understanding, and personal experience (Larson, 2021). Humans form ideas through subtle conjectures that no algorithm can easily capture.

 

Descartes’s cogito, ergo sum is inherently linked to self-awareness, introspection, and consciousness, and situates human cognition as the defining characteristic of existence. For Descartes, the mind was not merely a system of computations or physical processes, but an immaterial substance capable of doubt, reflection, and genuine understanding. While AI futurists anticipate the rise of superintelligent artificial general intelligence (AGI) that will soon surpass even the most brilliant human minds, the reality is that, as things stand, true human-like intelligence remains an elusive, perhaps unattainable goal.

 

Bust of Descartes at Versailles Palace. Source: Wikimedia Commons

 

Despite AI’s growing presence in everyday life, already influencing key decisions and streamlining complex tasks, public perception of the field’s future is often shaped more by marketing hype, misunderstanding, and misinformation than by reality. The success of generative AI models like ChatGPT and DeepSeek has fueled exaggerated expectations about AGI, blurring the distinction between task-specific machine intelligence and organic cognition. Generative AI of this type capitalizes on faster computers and vast amounts of data to solve defined problems rather than capturing the intuitive common sense that underpins human judgment.

 

In the end, AI exists at the crossroads of myth and reality. Yet by carefully examining its history, achievements, and limitations, we can separate the enduring myths from the practical truths of its application. While AI is most certainly a transformative technology with enormous potential, it represents a set of tools shaped by human expertise, not a collection of independently thinking machines. Understanding this distinction will be essential for responsibly advancing the technology and managing expectations in the years to come.

Scott Mclaughlan, PhD Sociology

Scott is an independent scholar who writes broadly on the political sociology of the modern world.