The proliferation of texts written by language models such as ChatGPT is an important issue, as it raises questions that go beyond those directly linked to language. A fundamental discussion has arisen about the texts these models produce from data entered by their users: at the end of the process, it is no longer clear who the author really is.
This issue is even more pertinent when it comes to authorial texts, such as literary and philosophical works. It is part of a larger phenomenon: the difficulty, unwillingness and even inability to think and write. Hence many people’s delegation of these activities to machines, which only deepens these deficiencies.
However, this does not mean that language models are useless. It is fair to argue that they are useful in less uncertain and more codifiable situations, such as technocratic and techno-scientific matters, in which the work is often mechanical, sequential, standardized and repetitive. Such tasks are often said to have “no soul”, and can therefore be done by machines.
Nevertheless, when it comes to expressing complex ideas, reasonings, feelings and emotions, as is the case with fictional and philosophical writing, the situation is different. Technocratic and techno-scientific writing is instrumental; philosophical and fictional writing is existential. The two are complementary, not mutually exclusive.
Understanding and maintaining this complementarity is fundamental to the well-being and survival of the human species. Humans cannot kill techno-science, but the opposite can happen: it is enough that some powerful ruler be sufficiently stupid and driven by self-deception.
As our current geopolitical situation shows, both hypotheses are all too plausible. We live in a time in which it is increasingly difficult to differentiate between truth and falsehood, and language models can worsen the situation.
Human beings who are alienated from the human condition, especially those who delegate the task of thinking and writing to machines, may become unreliable and lose credibility. Frank Herbert wrote in his novel Dune (1965): “Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
A highly recommended article, “The mirage of machine reasoning”, from The Structural Lens (June 2025), acknowledges that these language models are important technological advances in pattern recognition and text generation. However, the gap between simulating reasoning and reasoning itself remains unbridged.
Even if this is achieved in the future, those who think and write without resorting to such models retain a valuable human skill in terms of originality and credibility. The texts produced by language models (LLMs, or Large Language Models, and their ilk) have easily identifiable quirks. Computer scientist Timnit Gebru and her co-authors ironically call them “stochastic parrots.”
We are talking about important technological achievements, which can nevertheless differ from human reasoning, make mistakes and provide inaccurate information. Still, it is understandable that many people are fascinated by them. One of the reasons is that thinking, reading and writing have never been very popular in our societies, and in many of them they have always been despised.
In any case, being proud of texts produced by ChatGPT, based on data that you inserted into the system, can be a bit embarrassing. Using human ghostwriters would be more realistic – but either way, it’s better to come up with your own ideas.
We can count on AI for many other things, but not when it comes to producing original, good-quality ideas. Some publishers are already beginning to adopt a seal warning that their books were produced without the help of these models – the “AI free” seal, which guarantees that the texts were written by humans.
AI systems are useless when it comes to human problems such as dilemmas, hesitations, feelings and emotions. Machines cannot like or dislike someone; they cannot hesitate in uncertain situations or make philosophical, political and aesthetic choices, because everything about them is impersonal, schematic and statistical, while many real-life circumstances are far from being like that.
No peculiarly human situation can be resolved solely by searching for and identifying patterns, because mechanical complexity is computational, whereas human complexity is existential. Even when a computer reaches its maximum effectiveness in reducing the complexity of a system, a baseline level of complexity and uncertainty will still remain in that system.
In the case of living systems, this complexity is inherent to life itself. To destroy it is to destroy the system, which is often done through binary logic, the kind that classifies everything in dichotomous terms. This is the logic of “you’re either with me or against me”, which has led to the rise of many ideologies – and also to their collapse.
This is also the case with the dichotomy between war and peace: everyone claims to want peace, but no one can get rid of war. War has long been defined as the ultima ratio regum (the last argument of kings), a justification for violence when diplomacy fails. Nowadays, when diplomacy is often reduced to an exchange of threats and bravado, war does not begin after it but with it.
It is a question of being aware and not deceiving oneself. It has long been known that human history is the chronicle of a continuous war, in which today’s winners will be tomorrow’s losers and vice versa. This is a problem that AI, no matter how sophisticated, will never be able to solve. Here the myth of guaranteed progress does not work, because its logic refers to human wishes, not to human reality.
The impossibility of completely eliminating the complexity of systems (human or otherwise) without destroying them is easily observable, as long as those who deal with them are not victims of self-deception. It is amazing to realize how many people do not understand this and probably never will.
This impossibility is summarized in this sentence from the above-mentioned article from the platform The Structural Lens: “Current approaches to reasoning enhancement may have reached a fundamental limit that cannot be overcome through incremental improvement in training procedures, architectural modifications, or computational scaling.”
But let’s not deceive ourselves. In one possible scenario, this discussion will continue, driven by human curiosity and by technological, political and economic interests. Machine intelligence will continue to increase and human intelligence will continue to decrease, until thought becomes impossible – if we haven’t already self-destructed by then. Then the rest of the natural world, freed from our aggressiveness, will be able to regenerate and breathe a sigh of relief.
Prof Mariotti, I like it when you explain that generative AI is efficient at producing techno-scientific content but incompetent at expressing complex ideas, reasonings, feelings and emotions. I fully agree. With my students at TIDD/PUCSP, I usually favor reflections grounded in their own contexts and meanings, which is to say that we focus on ideas sustained by feelings, emotions and personal experience. All of this is aimed at protecting their capacity to think and create from their own subjectivities and thereby enrich their research. The goal is that they do not succumb to the temptation of using only generative AI to develop their texts. I hope I am contributing to this!!!