Head of AI at SNGLR Group, artificial intelligence consultant for the European Commission – Joint Research Centre
Professor of philosophy, former vice-president of the Bioethics Committee of Ulss 9 in Treviso and councillor for culture of the Municipality of Treviso
The field of technology does not always willingly offer its ‘conceptual’ cooperation to cultural and artistic initiatives. The two worlds rarely meet, and then only long enough to exchange services; even more rarely do they appreciate each other. One of the main reasons for this is the mistrust shown by those working in technology towards the often over-imaginative interpretations produced by the minds of artists, who tend to take certain technical concepts to extremes while ignoring or trivialising others. It is not uncommon for those involved in technology to ridicule the narrative devices used by the authors of films or books to overcome a technological impasse: computers that solve everything at the push of a button, pointless light shows, far-fetched man-machine interfaces, laptops that connect to alien spaceships. Narrative loopholes of this level reveal all too well a basic lack of communication between art (or at least part of it) and technology.
Another key reason is the quiet security of those working in technology, well aware that sooner or later all sectors will have to come begging at their door. The tech world doesn’t need to convince anyone: it need only release new products, ground-breaking inventions, devices that make life easier, and sooner or later everyone will be lining up to buy them and subscribe to their services. From the latest smartphones to robots that clean the house, from online shopping to messaging apps that replace many obsolete forms of communication. The feeling of those who work in technology innovation is that the rest of the world will always come to drink from their fountain. A long stretch of fat years that shows no sign of ending, and which reassures the average technologist that he or she will always be on the winning team, since the adoption of the latest innovations by all sectors and social groups is ever-growing and unstoppable.
The only exception to this automatism is, somewhat ironically, artificial intelligence.
AI must be accepted, or it will be rejected
Contrary to other technology segments perceived as ‘harmless’ (with the obvious and curious exception of 5G), public adoption of artificial intelligence technologies is far from a foregone conclusion. For AI to be used, it must first be explicitly accepted by its future users, whether end users or mere passive observers, whom we nowadays call stakeholders.
We cannot expect to impose disruptive AI technologies from above. We would end up with incredibly determined movements of resistance or even outright rejection. We have seen this with facial recognition (see the European Commission’s position on the matter) and we will see it in a few years’ time when self-driving cars are increasingly involved in fatal accidents on our roads. In those cases, it will be exceedingly difficult to get the narrative across, statistics in hand, that self-driving cars cause far fewer accidents than human drivers. Numbers and statistics matter little when fear takes common sense hostage: what frightens humans is the lack of control, and when you are afraid, numbers go out the window. Irrationally, people will prefer 1,300,000 road deaths (as many as occur in the world every year) caused by human driving to ten or twenty times fewer caused by machines.
A self-driving car that makes a mistake and crashes an innocent family into a tree will be all over the news for days. Ten drunken drivers who wipe out as many families will receive nothing more than a few paragraphs in the local newspapers, with readers shrugging their shoulders (in our country, Italy, according to official figures, there is one road death every three hours). Human errors, even those with terrible consequences, are forgiven because in the end we are all human, and by absolving others we try to create a society more benevolent towards our own faults, present, past and future. The mistakes of machines will never enjoy the same self-interested indulgence.
At any rate, whatever the reasons that lead to a dismissal of this technology, humans still have their finger awfully close to the ‘off’ button and are not afraid to use it. So, if we want AI to enter our lives (personal, work and social) with the aim of improving them, it will have to be explicitly accepted by civil society. To achieve this, we will have to make sure that people trust AI, and to trust it, they have to know it.
Pleased to meet you, I’m AI
The reality, however, is that people today are not in the least bit familiar with artificial intelligence. Even those who work with it are not fully aware of all its ramifications, boundaries and potentials. One of the main reasons for this lies in the difference in speed at which scientific research and society’s acquisition of new concepts are moving.
On the one hand, research is proceeding apace: the level of sophistication of AI models is increasing year by year, and ever-larger investments are reducing the time between scientific discovery and market application. Something that was only theorised a few years ago will tomorrow turn up in an app on our mobile phone, or perhaps on our Smart TV.
On the other hand, society acquires and ‘digests’ novelties at a much slower pace and from disparate sources. Films, books, TV series and comics are the vehicles we use to give shape to concepts that are slow to be metabolised. Technical journals, conferences and popular science works unfortunately reach only a fraction of the population and are light-years away from enchanting the amygdala in the same way as a good movie scene might do.
In the meantime, however, the field of AI is becoming bigger and more pervasive, which as an obvious consequence attracts the interest of those involved in other fields: legal, regulatory, political, social, economic, ethical, religious, psychological, and so on. So many people from so many different fields are called upon to try to govern and decide the paths that artificial intelligence technologies will take in our society. People who do not speak the same language as those who have developed and know how to make these technologies work.
Artificial intelligence, a matter of culture
AI does not arrive suddenly from God knows where, it is not brought by the stork, and it is not born under a cabbage leaf. Like everything human, it is a matter of culture, in other words the result of man’s own activity: the construction of his own world.
Arnold Gehlen makes it clear that while the animal inhabits the Umwelt, man inhabits the Welt. Animals inhabit “their” environment and only that environment: we do not find giraffes at the North Pole or penguins in the desert; they live within their environment and, more or less, adapt to it. Man inhabits the Welt, the whole world; he does not adapt himself to the world but adapts the world to himself, modifying it at will by bringing his faculties into play. This is culture: the complex process of man’s construction/interpretation of the world. He can do this, and does, because he has the ‘equipment’ for it: plenty of neurons, cerebral plasticity, a well-equipped sensory system.
In this sense, artificial intelligence is a matter of culture, like everything else. Culture, the complex process of man’s construction/interpretation of the world, must be understood with the use of appropriate tools, all of which are themselves expressions of cultural processes. Explaining the transition from Gothic to Renaissance architecture requires drawing on a variety of knowledge: construction techniques, new materials, philosophy, art, the ruling system, and so on. In short, reading tools: if we do not have them, we will not be able to understand the world, and the most we will feel is wonder, amazement or even disquiet at the novelty that has emerged.
So, the question we have to ask ourselves is: what are the cultural tools we need to understand what AI is?
To give an example, there are those who genuinely believe that Asimov’s three laws of robotics can be applied to contemporary artificial intelligence (news flash for those who think so: they can’t), and it is sometimes hard to convince them otherwise. Then, of course, when we talk about AI in the military, everyone will think of the Terminator films; for law enforcement they will quote Robocop, and rebellious androids will recall Westworld or Blade Runner.
Before we go any further, we will have to decide whether these are the right (cultural) tools to understand what AI is or whether we need different ones. The increasingly pervasive presence of AI in the life of our world confirms what we have known for a long time but sometimes forget, namely that understanding a cultural fact requires (imposes) a multidisciplinary approach. Is it difficult? Is it arduous? Certainly, but it is the only way to escape superficiality and ‘hearsay’.
People from other disciplines will start listening to machine learning engineers explain how convolutions work in a neural network. Those from different backgrounds will sit down at the AI table bringing with them, in a conscious way, a set of notions shaped by the cultural products they have consumed: films, books (non-fiction and fiction), TV series. These are not things you put on your resume, but you certainly have them in your head when you start talking about algorithms and their impact on people’s lives. There is nothing wrong with that: we just need to know that we have them in our heads, so that we do not confuse them with the real tools for understanding.
Culture is the lingua franca
It is evident that a message skilfully conveyed through books and films can be more powerful than a thousand scientific publications. Culture is the ‘lingua franca’ par excellence because it speaks to everyone and reaches everyone. For this reason, AI practitioners cannot ignore this important vehicle of social debate. Through culture, concepts are transmitted and fears are exorcised; people meet and, if necessary, confront each other. With a book, a film, a piece of music or a comic strip, a creative person has the opportunity to communicate how he or she sees the future and AI in our lives, gathering the support of other people who feel the same way. For their part, AI practitioners have a duty to take note of these inputs, because they will find in them useful elements for understanding how their next creations might be received. And perhaps they will have to make an effort to respond, again through cultural vehicles, to make everyone understand the intentions of the research, the goals that the scientific world is trying to achieve, and the way researchers imagine the future of our society.
Certainly, the suggestions of iconic films such as ‘2001: A Space Odyssey’, series such as ‘Star Trek’, books by Asimov, Gibson, Dick, Heinlein frequently appear among the main reasons why scientists, engineers or software developers fell in love with their profession long before they started. But these are just suggestions that may nonetheless be useful in convincing us that we should start paying more attention to the intersection between AI and culture. How is artificial intelligence represented? What are the most emphasised aspects? Which ones are ignored? And how do new AI products and services impact on our culture?
It would be wise to move from suggestion to awareness of what is happening: one area of AI that is gaining ground is generative artificial intelligence. Paintings, images, music and texts created by AI models are becoming less and less distinguishable from human creations. Does this mean that art created by artificial intelligence will become part of our cultural heritage? We tend to address only the legal aspects of the issue, but we will have to start seriously tackling those of art and creativity in general.
The fact is that AI is not a foreign body that “from outside” creeps into our lives: it is, as we said, a fact of culture, of construction/interpretation of the world. From here on, we urgently need to equip ourselves with the right tools to understand it.