The Myth of Thinking Machines
Two Opinions on the Dangers of AI
The concept of thinking machines remains entirely a psychological construct. Artificial intelligence applications cannot fundamentally shape human existence; they cannot model, transform, or design our experiences in truly novel ways. This means that individual and interpersonal experiences inhabit a world that is categorically different from the realm of artificial minds. Large language models and other types of generative AI cannot reach the depth and complexity of our daily activities because they are merely wishful idealizations that are not capable of capturing the vibrant, fluid, and continuous nature of human communication. Artificial intelligence remains a useful, task-oriented tool under our responsibility, but it should not be mistaken for an experience-altering revelation. The gap between human consciousness and artificial systems is not merely technological but ontological — rooted in the fundamental nature of embodied experience and authentic engagement with the world.
Mind Over Mechanism
Machines do not have a mind; they are unable to think, feel, or experience. In a recently published article in Aeon, Alva Noë argues that computers “don’t actually do anything.” What he means by this is that computers are not autonomous; they don’t engage with the world as self-sufficient beings. Artificial intelligence models are not morally responsible, nor do they actively engage with the world and the objects and events in it. They work within a predetermined framework that is designed to deliver a specific output. In other words, artificial intelligence models are developed for specific purposes; their existence remains prearranged.
“The story of technology,” writes Noë, “has always been that of the ways we are entrained by the tools and systems that we ourselves have made.” We are the authors of this story; it didn’t write itself. Large language models are therefore tools that we use to navigate through the world. These applications are made by us and for us; they are the products of intelligent beings, and they can be useful for a vast number of problems. Computational power is an efficient way of automating specific tasks. Machines can be used by intelligent beings to solve problems, but they are not themselves intelligent. As Noë writes, “If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.”
Our experiences are too complex to be successfully replicated by artificial systems. Muñoz, Bernacer, Noë, and Thompson argue in Why AI will never be able to acquire human-level intelligence that the biological foundation of human intelligence cannot be replicated by large language models, which, despite their practical applications, will never achieve true AGI due to their fundamental lack of physical embodiment. Our experiences cannot be separated from our body; perception and cognition are embodied processes that are meaningless outside of our corporeal existence. Functioning brains in vats remain an irrational fear because the brain is inseparable from the body. Outside of a body, the brain loses its function. Therefore, a perfectly functioning brain outside a body remains an unattainable idealization. Disembodied machines that mimic human cognitive behavior are products of our imagination; they embody an unfounded fear or hope that cannot become reality. The idea of thinking machines is not grounded in rational thought; it is the product of psychological projection.
Goddu, Noë, and Thompson argue in their reply to Yildirim and Paul that LLMs don’t know anything because they do not perform any tasks of their own. Large language models are tools that we use; they perform our tasks. In the words of the authors: “Models are tools, not agents, and they are our tools, constructed to serve our interests and values.” Artificial intelligence is therefore always dependent on our instructions. Without our guidance, it loses its purpose because it doesn’t have an intrinsic need that sets its actions into motion.
Noë argues in Rage against the machine that “the human animal is a creature of disturbance.” In the absence of disturbance, he writes, there is nothing: “no language, no games, no goals, no tasks, no world, no care, and so, yes, no consciousness.” Machines can’t be bothered, as they do not experience the world around them. They are task-specific devices that fulfill particular functions and solve predefined problems. A machine doesn’t speak, it only answers. As Noë writes: “Speech is breathy, hot movement that always unfolds with others, in context, and against the background of needs, feelings, desires, projects, goals and constraints.”
Human language is fluid, experienced, and spontaneous; it’s not defined by specific constraints and instructions like an artificial language model. The complexity of interpersonal communication cannot be replicated by a machine because it cannot be cut up and inserted into an alien mind. This also means that our fear of thinking machines remains ungrounded because an artificial replica of our mind is not manufacturable. In the same way, artificial intelligence models will not fundamentally affect our personal interactions, as the grasp of the artificial cannot reach that far. Our capacity for self-reflection and interpersonal relationships is distinctly human; it cannot be replicated or transformed by artificial systems. Who we are remains largely a mystery to ourselves, and the artificial cannot assist us in the endeavor of self-discovery; that path remains an individual responsibility.
Noë comes to the conclusion that computers “don’t have concerns of their own”, and because of this, they are unable to invent a new language. What he means by this is that we “are not merely word-generators” but “makers of meaning”. We actively engage with the world, with ourselves, and with others. It is precisely this activity that makes communication meaningful, but it seems that we have distanced ourselves from genuine communication. Noë is aware of this detachment when he writes that we are nowadays so much at home “under the regime of the keyboard that we don’t even notice the ways text conceals the bodily reality of language.”
Conclusion
The myth of thinking machines remains an irrational fear or a wishful idealization. While artificial language models are powerful tools developed by human expertise, they lack consciousness, autonomous thought, and genuine engagement with the world. They operate solely within programmed parameters and are incapable of generating authentic meaning or experience. Our capacity for self-reflection and meaningful relationships remains uniquely human. Artificial systems, despite their utility, cannot truly enter or replicate the realm of human life.
Reply by Andreas Matthias
I think that Luka’s article lacks a strong argument. It repeats multiple times that machines will never think like humans, but there is little evidence provided for this claim in the article. The embodiment argument is, on its surface, interesting, but it is easily shown to miss the mark, since LLMs can today reason about bodily functions, emotions, fears, etc. without having a body. The experience of a body, expressed in human language and then used to train an AI model, is sufficient to provide the LLM with a working understanding of what bodies do, how they behave, and how they are experienced. Look, for example, at this reply from Microsoft Copilot:
(User:) Tell me what painting my bedroom’s walls bright red would feel like and how it might affect my sleep.
(Microsoft Copilot): Painting your bedroom walls bright red can definitely create a bold and energetic vibe in the space. Red is a strong color that often evokes feelings of passion, excitement, and warmth. It can make a room feel cozy and vibrant, but it might also feel intense and stimulating.
In terms of sleep, the color red can have some significant effects. Here are a few considerations:
Stimulation: Red is known to increase energy levels and stimulate the mind, which might make it harder for you to relax and fall asleep. It’s a color often associated with activity and alertness.
Heart Rate: Studies have shown that red can increase heart rate and blood pressure. This might contribute to a more restless sleep if your bedroom is enveloped in this intense color.
Mood: While red can be invigorating and boost your mood during the day, it might be too stimulating for a restful night’s sleep. Soft, cooler colors like blue or green are generally recommended for bedrooms to promote relaxation and calmness.
Lighting: The intensity of the red color can also be affected by the lighting in your room. Natural light might make the red appear even brighter, while dim lighting could create a more subdued atmosphere.
If you’re really drawn to red, you might consider using it as an accent color rather than painting all four walls. This way, you can enjoy the vibrant touch without it overpowering the room. Perhaps some red pillows, a throw blanket, or artwork could provide that pop of color while maintaining a more restful environment.
I don’t see that Copilot’s “fundamental lack of physical embodiment” has done much to diminish its apparent “understanding” of the psychological and physiological effects of colour, the mechanics of sleep and the factors that can promote or disturb it, and many other insights that can only be acquired through having a human body. Of course, “understanding” here is in quotation marks, because all LLMs are only stringing words together without actually having an internal representation of concepts that would amount to genuine understanding. But from a behaviourist perspective, this is good enough, and it is certainly also good enough in practice. When was the last time you questioned whether the person sitting opposite you in a coffee shop has “genuine” experiences of embodiment, or whether they are just a philosophical zombie? And did you think of opening their skull to look inside in order to settle the question? Producing a plausible and human-equivalent simulation of understanding, a “pretend-understanding” that stands up to examination, is as good, in all practical respects, as the real thing.
As Noë writes, “If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.”
This also is not convincing, in my opinion. Pencils have existed forever, but have not sparked an AI revolution. LLMs did. So there is obviously something that LLMs do that pencils do not. First, they have agency, even if it is not “human autonomy” in any real sense. Pencils and shoes only react to human action in a deterministic, predictable way. Their actions do not exhibit anything like human intelligence. Even calculators only reply with a number to a short question about numbers. LLMs, in contrast, exhibit a wide variety of (speech) behaviours: they can explain things on various levels of complexity, understand and answer questions, write poems, and even handle delicate human emotions convincingly. Copilot again:
(User:) Please write a short poem in Haiku style on the beauty of a flower seen after a spring rain.
(Copilot:)
Gentle petals blush,
Spring rain kisses, fresh and soft,
Bloom in pure delight. 🌸
There’s something truly magical about seeing a flower after a spring rain, don’t you think? The way the droplets glisten on the petals, almost as if the flower is basking in its own newfound beauty. Nature’s little moments like these can be so inspiring.
I have never seen a shoe do something like that, or a pencil. Neither would a chimpanzee, a dolphin, an octopus, or a small human child be able to produce such a response. This reply points to something fundamentally different that is happening here: the AI has access to the world of human experience in a way that no other artefact or being has, with the exception of humans themselves. Putting it in the same category as cigarette lighters sounds more like wishful thinking, or a prayer, than the result of rational thought.
Attributing this misconception to a perceived authority does not improve its quality as an argument; it merely commits the well-known fallacy of appeal to authority.
Our experiences are too complex to be successfully replicated by artificial systems.
This might be so, but it does not mean that AI systems do not threaten our livelihoods and our societies. When loom operators and clothes weavers were made redundant in the Industrial Revolution, they could also have argued that no machine would ever be able to replicate human-made cloth. That may be true, but they were made redundant anyway, because we found out that nobody really needed human-made cloth. The machine-made stuff, although inferior, was good enough.
The same may be true of AI. Who cares whether our experiences are “successfully replicated”? As long as AI can create an artwork that is good enough, it will put artists out of work. If it can translate human speech well enough, it will make translators redundant. If it can write emails well enough, it will replace office workers. And so on. AI is already being used in courts (often secretly and illegally), more officially in predictive policing scenarios, in lecture design and academic paper writing, by children’s book publishers, and in medicine, where it regularly outperforms human doctors in diagnoses (see a long list of studies online). Therefore, your next statement, in my opinion, is also fallacious:
This also means that our fear of thinking machines remains ungrounded because an artificial replica of our mind is not manufacturable.
The two things, whether we can create a replica of human minds and whether our fear of AI is ungrounded, are not related. You do not need to create a replica of our mind in order to have a reason to fear something. We fear atomic bombs although they are not replicas of minds. We fear spiders although they don’t replicate minds. This sounds funny as a reply, but it points to a logical problem with the initial statement. What Luka is asserting there is an implication: if an artificial mind replica is not manufacturable, then we do not need to fear thinking machines. But this is not correct. There are many other reasons to fear thinking machines that have nothing to do with replicating minds: for example, thinking machines can take away our jobs, they can create impenetrable bureaucracies and threaten human dignity, and they can replace human creativity and swamp marketplaces with their output, drowning out human works. This is already happening in music, where AI-composed and AI-performed pieces present serious competition to human composers. Here is an example from Germany:
This song rose in the charts and got 2 million views on YouTube, in this one video alone. There are countless other videos playing the same song, often with millions more views each. At the same time, countless young musicians struggle to get an audience of hundreds for their compositions.
A technology can pose a threat on its own terms, not only if it is a successful copy of a human being. We fear AI because it will take our jobs by doing what we do now, only better, and because we will be subjected to bureaucratic decisions made by AI systems on the basis of big data, without proper consideration of our human needs. This is already happening. None of this requires the threatening systems to be replicas of human minds.
Artificial intelligence is therefore always dependent on our instructions. Without our guidance, it loses its purpose because it doesn’t have an intrinsic need that sets its actions into motion.
The same is true of the Russian soldiers who are now attacking Ukrainian civilians, and it was equally true of Hitler’s armies when they laid waste to Europe in the Second World War. They did not have an “intrinsic need” to do any of what they did. The actions of every soldier, of every policeman on duty, of most employees in a big company or in public service are not “set into motion” by their “intrinsic needs” to act. They are dependent on the instructions of their superiors. They do what they do, good or bad, because they follow these instructions and this guidance. Still, you can’t well argue that Hitler’s armies did not present a danger just because they were dependent on their leaders’ instructions. Quite the opposite: the mindless following of instructions can be, as many studies on fascism have shown, the breeding ground for the greatest dangers that come from obedient individuals. If robots become dangerous to humanity, it will likely not be because they are autonomous, but precisely because they are not: because they will be endlessly obedient executors of some dictator’s will.
Final comment by Luka Zurkic
While our perspectives on artificial intelligence may differ, it is certain that AI’s growing ability to process language and simulate thought will reshape the way we communicate with each other. As these technologies evolve at a rapid pace, we must constantly update our understanding of what AI systems can and cannot do. Most importantly, we have a collective responsibility to ensure that AI remains a tool for human empowerment, rather than a weapon of control and oppression. The stakes are too high to leave these decisions to chance or to those who might misuse such a powerful technology.
◊ ◊ ◊