When knowledge is lost, what should we know?

Artwork by Olivia Bessant

Niccolò Fantini considers the impact of increasingly powerful AI technology.

The Economist recently held its annual essay competition for young people, asking what fundamental changes should be undertaken in the fight against climate change. After a winner was chosen and announced, the magazine conducted an experiment: it fed the same question and a brief description of the writing task into an AI language model called GPT-2, which then produced its own essay on the topic. The result, which can be found on The Economist’s website, is quite astonishing.

In around 400 words, this ‘robot contestant’ discussed, elaborated on, and put forward practical solutions to one of the most pressing global issues. Reading the final product is a mind-blowing experience; its quality hardly resembles what one would expect from a machine. The level of analysis and the depth of criticism are especially striking given that we usually credit these abilities as key discriminators between human and artificial intelligence. When we finish reading the AI essay, we are left with a disturbing question: what will humans be left with once technology seriously challenges the necessity of our defining abilities?

If it consoles you, the judges of the actual Youth Essay competition reviewed the AI essay against the same criteria applied to the human candidates and awarded it a much lower score. Most judges rejected the essay for its lack of originality, poor construction of arguments, excessive use of rhetorical questions, and imbalanced structure. Unfortunately, this only means that the matter is more serious than it appears. If we have reached a point where we hope to find proof of the inferiority of AI’s performance, it implies that we have (consciously or not) entered into a competition with technology in which our upper hand is no longer guaranteed. Human beings are facing one of the greatest collective challenges our species has ever encountered: finding our place in reality.

We have always assumed our superiority, claiming that this position derives from our predominance in knowing. The capacity to rationalise the world and the universe according to what we deem to be ‘true’ depends on our ability to obtain, dissect, and make sense of knowledge. Galileo Galilei, most famously, opposed the Christian vision of the Earth as the centre of the universe. For many, this was a crucial and revolutionary discovery, yet the Church tried to suppress it precisely because it would have radically changed our self-understanding. In short, our species would no longer be at the centre of everything, but just an additional, random outcome of creation. If this revelation marked a heavy blow to human consciousness, the consequences were still relatively contained. Indeed, it was thanks to Galileo and his human capacities that this dramatic revision of understanding took place. It was a triumph of the human mind, and it only confirmed the role of our species as the great orderer of all things. What the twenty-first century’s AI revolution means, however, is that our supposed superiority as almighty, knowledge-possessing, and rational champions might soon be lost.

In his 21 Lessons for the 21st Century, Yuval Noah Harari writes, “if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated artificial intelligence of computers might only serve to empower the natural stupidity of humans.” It is a bold claim, but, as unpalatable as it might seem, it points in the right direction. Human consciousness is, ultimately, what we all rely on when it comes to making sense of the world around us. Our organisation of and engagement with society is based on what we think we are and on what we perceive ourselves to be capable of.

Already in the 5th century BCE, the Greek philosopher Protagoras was in some ways anticipating Harari’s thoughts when he claimed that “of all things the measure is Man, of the things that are, that they are, and of the things that are not, that they are not.” We build social structures, moral codes, economies, religions and much more on the (nearly dogmatic) assumption that we are legitimised in doing so. We believe we possess, by virtue of our superior perceptive capacities, the right to order the world in our own image. While this was accepted as truth for much of the past, the idea of our infallibility has come under increasing scrutiny in recent decades.

Returning to the topic of climate change: we feel responsible for the planet we inhabit and how we treat it largely because we place ourselves in a position high enough to bear responsibility for its survival. Vegans and vegetarians commonly argue that it is precisely because we sit at the top of the hierarchy and have the innate ability to make an ethical choice that we should take a proactive stance. These and other environmentally minded ideas are rooted in a particular way of making sense of our species. Arguments like these would lose the ground under their feet if humans were to forfeit their current standing and become mere passive bystanders to events. Have dogs ever been held responsible for the deeds of their masters? Their masters certainly were.

So, in light of this incipient change, we should ask who is going to be the master, and why. We should prioritise the rediscovery of what being human actually means. Who are we? What are we here for? If algorithms are already arguing and ‘thinking’ the way we do (even if, at least currently, with inferior results), then we should pause for a moment and seriously consider what we are useful for. After Galileo, the (Western) world was forced to undertake a similar project, and settled on a knowledge-based conception of humanity in which science and rationality triumphed. Now we can no longer rely on these. If humanity is going to be lost, we should start looking now for ways to guide it out of the maze.

This article was originally published in Issue 724 of Pi Magazine.
