Can AI write philosophy?
How Jasper AI will shake up education
What is Jasper AI?
Jasper is an online AI program that uses OpenAI’s now-famous GPT-3 system to produce new text from any kind of prompt or example text.
This sounds a bit technical and dry until one realises what it means: you can give Jasper any kind of prompt, and it will produce something to fit that prompt. Let’s try an example. I’m going to write this prompt paragraph:
My mother was born in 1953 in London, England. She studied history and microbiology at King’s College and went on to work for a big pharma company that made vaccines that they shipped to South America and Africa.
I can now ask Jasper to continue from here. Here’s what Jasper adds to my previous text:
After a few years, she met my father and they got married. They had me in 1977 and my sister two years later. We lived in a small town outside of London until I was ten, when we moved to the United States because my father got a job offer he couldn’t refuse.
My mother has always been a fierce advocate for her children. When we were younger, she would always stand up to our father when she felt we were being treated unfairly. She was the one who pushed us to be successful in school and to pursue our passions. My mother has always been my biggest supporter, and I am grateful to have her in my life.
… and so on. I could keep pressing Ctrl-J, the command that tells Jasper to continue writing, and it would keep adding to the story ad infinitum.
Now this is surprising enough. I am an AI ethicist myself, but I had not assumed that the technology would already be quite this good. It is amazing what Jasper can produce from almost any prompt.
Beyond that, Jasper can also execute commands or requests given to it in natural language. For example, you could say:
Write down a list of 10 things to take onto a day trip to the beach.
Pressing Ctrl-Enter at the end of this line executes the command. Here’s Jasper’s output:
- Sunscreen
- Chapstick
- Hat
- Sunglasses
- Swimsuit
- Towel
- Sandals
- Snacks
- Water Bottle
- Camera
I’m not sure that I know what a “chapstick” is, but the rest seems pretty sensible.
Now back to philosophy. Having spent most of my life writing philosophy papers and blog posts, I was naturally curious how Jasper would manage to produce an educational article about Kant’s moral philosophy. I put myself in the position of a student who doesn’t know much about the topic but is trying to write something for a short undergraduate assignment in an introductory ethics course. Here is what happened.
The article below has been written entirely by Jasper, with just a minimal amount of guidance from me regarding its overall structure, but no changes to the actual text. If you’d like to see live how I made this article, and how you (or your students) can use Jasper in your (or their) work, then please watch the video, which goes through the whole process of generating the very article you see below. Generating the complete article took me less than 20 minutes and required almost no knowledge of Kant’s ethics.
After the AI-written article, I append a few thoughts on what all this means for us as philosophy teachers.
To try Jasper out for yourself, you can use this link, which will give you (and me) each 10,000 words for free:
5-day Jasper trial referral link
How will Jasper and similar tools affect your class? Tell me what you think in the comments.
A short overview of Kant’s ethics (written entirely by Jasper.AI)
What is Kant’s Categorical Imperative?
Kant’s ethical theory is based on the Categorical Imperative, a moral principle that dictates what actions are right and wrong. The Categorical Imperative can be broken down into two forms: the Universalizability Principle and the Humanity Formula. The Universalizability Principle states that an action is only morally permissible if it could be universalized, or if it could be applied to everyone without leading to contradiction. The Humanity Formula states that an action is only morally permissible if it treats human beings as ends in themselves, and not as means to an end. This means that we should never use others as a means to our own ends, but rather respect their autonomy and dignity.
The Universalizability Principle
Kant’s Universalizability Principle is based on the belief that only morally permissible actions can be universalized. This means that an action is only morally good if it could be applied to everyone without leading to contradiction. For example, imagine that you are driving in a car and see a child standing in the middle of the road. You have two options: you can either swerve around the child, or you can hit the child. If you universalize the action of hitting the child, then it would mean that everyone would be allowed to hit children with their car whenever they wanted. This would obviously lead to contradiction, as it would be morally wrong to allow everyone to hit children with their car. Therefore, the only morally permissible action is to swerve around the child.
The Humanity Formula
The Humanity Formula is based on the belief that human beings should be treated as ends in themselves, and not as means to an end. This means that we should never use others as a means to our own ends, but rather respect their autonomy and dignity. For example, imagine that you are a doctor and you have a patient who is in need of a kidney transplant. You also have a healthy patient who is willing to donate one of their kidneys. The Humanity Formula would dictate that you should not take the healthy patient’s kidney without their consent, as this would be treating them as a means to an end (saving the other patient’s life). Rather, you should respect the healthy patient’s autonomy and dignity by getting their consent before taking their kidney.
Criticisms of Kant’s Categorical Imperative
There are a number of criticisms that can be made of Kant’s Categorical Imperative. One of the most common criticisms is that it is difficult to apply in certain situations. For example, the Universalizability Principle can be difficult to apply in cases where there are multiple possible actions, all of which could be universalized. It can also be difficult to apply in cases where universalization would lead to contradiction. In addition, the Humanity Formula can be difficult to apply in cases where treating someone as an end in themselves would conflict with other moral principles, such as the principle of self-defense.
Another common criticism of Kant’s Categorical Imperative is that it is too idealistic. This is because it assumes that everyone would be willing to universalize their actions, and that everyone would treat others as ends in themselves. However, in reality, there are many people who do not act in accordance with these principles. For example, there are people who would not hesitate to hit a child with their car if it meant that they could save their own life. In addition, there are people who routinely use others as means to their own ends, without any regard for their autonomy or dignity.
Finally, some critics argue that Kant’s Categorical Imperative is too divorced from human emotions and practical concerns. This is because it focuses on abstract principles, rather than on the concrete reality of human life. For example, the Universalizability Principle does not take into account the fact that some actions may have different consequences for different people. In addition, the Humanity Formula does not take into account the fact that some people may be more deserving of our respect and consideration than others.
Despite these criticisms, Kant’s Categorical Imperative remains one of the most influential ethical theories in the history of philosophy. It provides a powerful framework for thinking about morality, and its principles continue to be relevant in modern times.
Final thoughts on Kant’s Ethics
In Kant’s ethics, the Categorical Imperative provides a framework for thinking about morality in a principled way. It is based on the belief that human beings should be treated as ends in themselves, and that we should always act in accordance with moral principles. While there are some criticisms that can be made of Kant’s ethics, they remain influential in modern times.
What I learned writing with Jasper
Writing this paper with the assistance of Jasper.AI was a new experience for me: the sense of “sculpting” a text, as if I were forming something out of clay. Instead of writing the individual words, the person who controls Jasper is more in control of the overall message and the outline of the text, the flow of the arguments, and the selection of what content should be in the piece and what should be left out, what should be emphasised and what should be played down. One never actually needs to engage with the text itself, which the program generates fluently and, at least in my trials, without any significant errors.
The whole experience is more like coordinating the work of a number of brilliant but slightly unreliable ghostwriters. They will produce solid content even on topics the coordinator doesn’t know much about, so one does not have to spoon-feed Jasper the factual details. When I asked it to write on “Kant’s Ethics,” it came up entirely on its own with the Categorical Imperative, its various formulations, its own examples, and a number of very nicely phrased criticisms. This means that any student can write a good paper on Kant without actually knowing much about the topic.
On the other hand, Jasper does need this coordinating guidance. Left alone, it tends to repeat itself and to produce somewhat superficial content that sounds a bit like marketing filler (and perhaps much of that marketing content on the Internet has indeed been produced by something like this program). The human operator needs to provide a clear structure for the content that the program can then flesh out with text, arguments and information.
Also, Jasper (like any AI tool based on statistical analysis of text) can occasionally come up with entirely made-up facts or arguments, especially when it cannot find relevant information in its training corpus. It is then the job of the human operator to check the output and weed out whatever inventions the program puts into the text. That said, in both cases I tried (see the video above), the content produced was solid and did not contain any false information or other errors.
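To see why a purely statistical text generator can sound fluent while inventing things, here is a toy sketch: a minimal bigram Markov chain in Python. This is emphatically not Jasper’s actual architecture (GPT-3 is a large neural network, not a Markov chain), and the mini-corpus about Kant is my own invented example; the point is only to illustrate generation by statistical continuation.

```python
import random

def build_bigram_model(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    model = {}
    for current, following in zip(words, words[1:]):
        model.setdefault(current, []).append(following)
    return model

def generate(model, start, length, seed=0):
    """Walk the bigram table, picking a random continuation at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: no observed continuation
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# A hypothetical mini-corpus, loosely paraphrasing Kantian slogans.
corpus = ("kant holds that an action is right if its maxim could be universalized "
          "an action is wrong if its maxim treats persons merely as means")
model = build_bigram_model(corpus)
print(generate(model, "an", 8))
```

Because the generator only knows which words tend to follow which, it can stitch together sentences that read plausibly but assert things the corpus never said, which is the same failure mode, writ very small, that produces a language model’s confabulated facts.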
When writing with Jasper, the creation of the text becomes a collaborative work: the human part is guiding the AI, and the program is following the guidance to write the actual text. Instead of typing, the human’s role becomes organising, planning, checking and evaluating.
This is not necessarily a bad thing. Since ancient times, artists have had apprentices who did the repetitive work while the master artist did the overall planning. Although this has long been a common mode of working in the arts, it never caught on at a larger scale in writing or philosophy. Some writers do work this way: James Patterson, for example, the world’s best-selling author of thrillers, is known for working with a number of collaborators who write out the text of the books following his outline. I am not sure that it would be wrong in any way to do something similar with the help of AI, even in philosophy.
Cheating possibilities aside, if I am writing a philosophy paper on, say, some point in robot ethics, more than half of what I’ll be writing will be introductory passages, an overview of previous work on the topic, examples and mind-numbing formal things like the list of references (in a slightly different format for each journal). Is it not a legitimate use of an AI tool to let it help me with these things, so that I can concentrate on writing what really counts, which is the core of my own argument, my unique insight that I want to communicate?
Of course, such tools can be misused (like any tool) in the wrong hands. Students will (if they haven’t already) begin submitting AI-generated papers as homework for their school or university courses. What can we do as teachers to prevent such a use of AI?
Forget trying to spot papers that were written by AI systems. As the video above shows, Jasper can generate a near-perfect paper in crisp, engaging English, using its own examples and explanations, within minutes, and no one will be able to reliably distinguish it from a human-written paper.
Perhaps, with technologies like Jasper becoming widely available, the time of the homework writing assignment is drawing to a close. Perhaps we need to see students write their essays under supervision, so that we can verify that they actually wrote them. Or perhaps we might just raise the bar. If anyone can write a simple paper about Kant in 15 minutes and without actually knowing anything about Kant, then perhaps such a paper should get an F grade, even if nothing is wrong with it. Perhaps we need to require even the simplest student papers to be creative and innovative in a way that goes beyond what an AI program can produce. But will we always be able to tell?
◊ ◊ ◊
Thanks for reading! What do you think? Tell me in the comments!