Wednesday, 30 August 2023

COMMON SENSE AND CHATGPT

Common sense does not live in ChatGPT.

The inability of AI systems to access the meaning of what they process makes them 'inhuman'.  The alienation of those who have to train them exacerbates the problem.

 The machine lacks a body, so its learning is based only on words.  To remedy this, it is not enough to increase the amount of text, especially if it is generated by other AI programmes.

 

- by VINCENZO AMBRIOLA

  "Dad, what is a sycamore tree?" Most of us would not know how to answer this question. Perhaps some know it is a tree, from the Gospel passage of Zacchaeus. And then from reading The Shadow of the Sycamore by John Grisham. The cooler ones might remember Down by the sycamore tree, a famous jazz tune by Stan Getz. But how many would be able to recognise a sycamore tree in a botanical garden? In our minds, words are linked both to real-world experiences through the five senses and to other words, in what constitutes an enormous semantic network managed by billions of neurons and synapses that connect them. The word sycamore, if present in this network, could be linked to the concepts of tree, book or jazz. That is why we would be able to answer a curious child.

Generative artificial intelligence, the technology behind ChatGPT and many similar systems, works by linking words together. Words are analysed in the sentences in which they appear, calculating the probability that each is associated with other words. When a user converses with the system, the response is constructed by the neural network within it, word by word, through a process that also involves some randomness: asked at two different moments, the same question can produce different answers. The substantial difference between these systems and the human mind is the absence of information coming from reality and the total dependence on the words used during learning. It is, in essence, a-sensory knowledge combined with a predominantly statistical manipulation of words, treated as abstract symbols. Interaction with reality is what characterises us humans and transforms a continuous flow of information into verbally encoded knowledge.
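To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of word-by-word generation with sampling randomness. The tiny probability table and the temperature parameter are invented for the example; real systems estimate such probabilities with deep neural networks over enormous vocabularies, not lookup tables.

import random

# Toy next-word probabilities, invented for illustration only.
NEXT_WORD = {
    "the":      {"sycamore": 0.2, "tree": 0.5, "word": 0.3},
    "sycamore": {"tree": 0.9, "row": 0.1},
    "tree":     {"grows": 0.6, "is": 0.4},
}

def sample_next(word, temperature=1.0):
    """Pick the next word by probability; higher temperature flattens
    the distribution, adding the randomness the article describes."""
    dist = NEXT_WORD.get(word)
    if dist is None:
        return None
    words = list(dist)
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(words, weights=weights, k=1)[0]

def generate(prompt, length=4, temperature=1.0):
    words = [prompt]
    for _ in range(length):
        nxt = sample_next(words[-1], temperature)
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

# The same prompt, asked twice, can yield different continuations.
print(generate("the"))
print(generate("the"))

Run twice, the same prompt might produce "the tree is" one time and "the sycamore tree grows" the next, which is exactly the behaviour described above.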

Much of this knowledge constitutes 'common sense', a heritage passed on from generation to generation that enables us to survive in our environment and to interact with other human beings, sharing the deep meaning of words. Words that, for us, are not just symbols but semantic markers of reality. Common sense has always been, and still is, a subject of study and research. The development of computer science is closely linked to the many possible ways of capturing and representing it in so-called ontologies, and of using it in deductive, inductive and abductive automated reasoning systems. Since the time of Aristotle, logic has been the greatest conceptual challenge for mankind.

  "We can know more than we can say," so wrote Michael Polanyi in his 1966 essay The Unexpressed Knowledge, in which he presented his research on implicit or tacit knowledge. At a time strongly influenced by a rationalist approach, the birth of electronic calculators and the ideas of Alan Turing, Polanyi questioned the possibility that a human being could have complete and explicit control over everything he knew. This was a strong and orthodox position, which rejected the project of formally codifying knowledge and then using it in a computational context.

Generative artificial intelligence systems possess no equivalent of common sense, let alone implicit knowledge of what they have learnt. They do not need common sense to survive and, above all, they cannot acquire it through a sensory apparatus they do not have. They learn only through the words used in training and interact with humans only through those words. They cannot move us with a look or calm us with a caress; to do that they would need a body, but then they would be robots, not conversational bots. In a recent article, Christopher Richardson and Larry Heck survey the state of the art of research projects that aim to build common sense into generative artificial intelligence systems. Their conclusion is negative and leaves no doubt: "current systems exhibit limited common sense reasoning capabilities and negative effects on natural interactions".

Training an AI system on a large amount of text is not enough. The neural network within it can be confused by relations between words, producing responses devoid of any (common) sense, also called hallucinations. Further training is required. In many parts of the world (Kenya, Nepal, Malaysia, the Philippines, India), hundreds of thousands of people spend their days in front of a screen, interacting with the system, correcting its responses and flagging hallucinations: a tedious and repetitive job, often underpaid and on the edge of economic survival. In 2007, Fei-Fei Li, then a professor at Princeton and an expert in AI, argued that improving the quality of image recognition would require manually labelling millions of images, not a few tens of thousands. She was right, and her strategy ushered in a new spring for artificial intelligence.

Human nature wants people in control of machines, not subject to their domination. A group of researchers at Rice University recently found that the humans charged with training AI systems, to relieve a repetitive and tedious procedure, nevertheless use these very same systems, in effect violating the instructions they receive. In practice, the annotations supplied by the trainers are generated 'synthetically' by the systems being trained, in what we might metaphorically call 'computer incest', which could slowly degrade the quality of the trained neural network. The same problem arises when a new AI system is built from texts taken from the Internet, which are increasingly generated by other AI systems.
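The degradation mechanism can be made tangible with a toy simulation. The Python sketch below rests on strong simplifying assumptions (a "model" reduced to a single Gaussian, refit each generation from 50 of its own samples); it is not the Rice study's experiment, only an illustration of why training on your own output tends to drain diversity.

import random
import statistics

random.seed(0)
mean, stdev = 0.0, 1.0  # the "real" human data the first model sees

for generation in range(10):
    # Each new model learns only from samples of the previous one.
    samples = [random.gauss(mean, stdev) for _ in range(50)]
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    print(f"generation {generation}: spread = {stdev:.3f}")

# Refitting on finite samples of yourself makes the spread wander and,
# over enough generations, tend to collapse: variety quietly drains away.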

Generative artificial intelligence is still in its infancy and, as such, is destined to grow and evolve. Its training on human-produced texts makes it inevitably human-like. Its enormous computing power makes it seem superhuman, as when it performs tasks inconceivable to us in terms of the sheer volume of calculation. The absence of common sense, however, reveals its inherent and inevitable inhumanity. At present, the most likely hypothesis is that AI will become increasingly superhuman (growth in knowledge and performance) but also less and less inhuman (evolution of algorithms, handling of meaning). In the background remains the hypothesis that, at some point, some form of artificial consciousness could emerge that would even make it autonomous.

 

 www.avvenire.it

 
