Luca Possati on Transhumanism
Philosopher interviews
Luca M. Possati is a researcher at the University of Porto, Portugal. Educated as a philosopher, he has been a lecturer at the Institut Catholique de Paris and an associate researcher of the Fonds Ricoeur and the EHESS (École des hautes études en sciences sociales). He is an associate editor of Humanities & Social Sciences Communications.
His research focuses on the philosophy of technology, in particular on the relationship between neuropsychoanalysis, affective neuroscience, and artificial intelligence.
Thank you for this interview. The conference aims to offer a critical assessment of current research on transhumanism. The central questions are what transhumanism is today and how it has evolved. Asking these questions again is neither useless nor merely rhetorical. Given the current development of digital technologies and artificial intelligence, the debate on transhumanism is more alive today than ever.
There is no single definition of transhumanism; it is not a well-defined philosophical doctrine.
I would say that transhumanism is more a general worldview that implies a certain evaluation of human nature and of the relationship between human nature and technology. We can find transhumanist ideas in literature, art, and cinema, but also in politics and science. For example, Teilhard de Chardin does not use the expression “transhuman,” but he can be called a transhumanist thinker because of his view of the cosmos and of human evolution within it.
As I said, transhumanist ideas can also be found in literature: think, for example, of Houellebecq’s novel The Possibility of an Island. I would say that one of the best definitions of transhumanism was given by Max More:
Transhumanism is very close to posthumanism. However, I would say that posthumanism is more of a philosophical doctrine: a way to overcome the culture-nature dualism and to set up what Latour calls a symmetrical ontology. In the posthuman vision, there is no idea of human improvement understood as an acceleration of human evolution through technology and science. There is no vision of the future.
I wouldn’t say that transhumanism wants to overcome the concept of cyborg.
Haraway’s “Cyborg Manifesto” can also be read in a transhumanist sense. I think that what is truly essential in transhumanism is a certain vision of the future of humanity. This is evident in the 2009 “Transhumanist Declaration,” which begins as follows:
Now, I do not think that transhumanism is a political vision in the classical sense, i.e., one that implies the adoption of particular right-wing or left-wing views. I mean that transhumanism is political in the sense that it draws a future path for humanity. However, the political nature of transhumanism is still an open question, and there are many different positions on it.
I think that, at least in the classical thinkers of transhumanism (for example Huxley, Teilhard de Chardin, Fyodorov), the acceleration and empowerment of humans through technology is considered a natural and inevitable evolution of humanity. And I think this is also More’s opinion.
In transhumanist views, the improvement of the human body and mind is never merely a temporary solution; it is always also part of a vision of the future and of the purpose of humanity, in what I would call a “philosophy of history.” Today, with the extraordinary technological development we have achieved, humanity is experiencing an exceptional moment and has the concrete possibility of solving some of its historical problems, such as death or ageing.
It depends on which authors we consider. As I said, there are many interpretations of transhumanism. For example, Ray Kurzweil sees the solution to the problems of ageing and death in nanorobots that can repair our body and increase its functionality. Human identity is split off from biology. This form of engineering of the living body concerns the entire human species, not an elite. I do not see in Kurzweil, at least in his most important texts, an elitist tendency.
As for values, here too it depends on the authors we consider; the interpretations can be very different. For example, in the 2009 declaration we read:
I think that transhumanism is not necessarily a moral vision of the world, even if its vision of the future involves moral effects and values. However, another thesis could also be advanced, namely that technological development and the empowerment of humanity could eliminate many moral problems.
I believe that psychoanalysis can help a lot to understand AI and therefore to guide its future developments. I would like to underscore two points.
First, AI and psychoanalysis are theoretically very close. They both arise from the same philosophical gesture: the decentralization of the human subject. The subject is no longer the master of its own mind; a de-subjectivation of the psyche takes place.
Second, I argue that technology is influenced and conditioned by the unconscious dynamics of the human psyche.
The key questions are the following: Why does a human being want to be a machine or like a machine? What are the unconscious desires and fears that lead the human being to AI? (I am mainly referring to machine learning and deep neural network systems.)
The purpose of my book The Algorithmic Unconscious (Routledge 2021) is to show the unconscious identification mechanisms that govern AI. Several studies have proposed using AI to improve psychoanalysis and, in general, psychology.
The predominant approach studies the transformations of personal identity through social networks, gaming, augmented reality, interactions with robotics, and simulation software. Sherry Turkle’s works are the most popular example of such an approach. While Turkle moves from AI to psychology and psychoanalysis, I take the opposite approach: from psychoanalysis to AI.
I want to underline two essential features of my approach: a) the centrality of the psychoanalytic concept of projective identification and b) the reinterpretation of psychoanalysis in terms of actor-network theory, i.e., the sociology of science elaborated by Bruno Latour. While Turkle mainly analyzes projection’s effects from AI to humans, I analyze the effects of projection from humans to AI. I connect psychoanalysis and AI through the mediation of Latour’s actor-network theory, which is a sociological model to analyze scientific facts and technology.
I believe that this way of analyzing AI and human-AI interactions helps us clarify many aspects of AI, and above all two problems: a) the opacity of AI, i.e., the fact that the decisions taken by AI systems are often unclear even to those who designed them; and b) the problem of AI control, i.e., how to create super-intelligent machines that do not cause harm to humans.
Thank you for this great opportunity to share my research.
◊ ◊ ◊
Luca M. Possati is a researcher at the University of Porto, Portugal. Educated as a philosopher, he has been a lecturer at the Institut Catholique de Paris and an associate researcher of the Fonds Ricoeur and the EHESS (École des hautes études en sciences sociales). He is an associate editor of Humanities & Social Sciences Communications.
His research focuses on the philosophy of technology, in particular on the relationship between neuropsychoanalysis, affective neuroscience, and artificial intelligence. His approach combines philosophy, psychology, and digital ethnography. The study of the formation and development of algorithmic biases through psychoanalytic methods is at the core of his project.