The inability of AI systems to access the meaning of what they process makes them 'inhuman'. The alienation of those who have to train them exacerbates the problem.
The machine lacks a body, so its learning is based only on words. To remedy this, it is not enough to increase the amount of text, especially if it is generated by other AI programmes.
- by VINCENZO AMBRIOLA
Generative artificial intelligence, the technology behind ChatGPT and many other similar systems, works by linking words together. Words are analysed in the sentences in which they appear, calculating the probability that they occur alongside other words. When the user converses with the system, the response is constructed by its internal neural network word by word, through a process that also involves some randomness. Asked at two different moments, the same question can therefore produce different answers. The substantial difference between these systems and the human mind is the absence of information coming from reality and their total dependence on the words used during training. It is, in essence, a-sensory knowledge combined with a predominantly statistical manipulation of words, treated as abstract symbols. Interaction with reality is thus what characterises us humans, transforming a continuous flow of sensory information into verbally encoded knowledge.
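To give a rough picture of this word-by-word process, here is a minimal sketch in Python. The vocabulary and the probabilities are invented purely for illustration; a real system derives them from a neural network trained on enormous amounts of text.

```python
import random

# Toy next-word probabilities, invented for illustration only.
# A real system computes them with a neural network over a huge vocabulary.
NEXT_WORD = {
    "the": {"cat": 0.5, "dog": 0.3, "machine": 0.2},
    "cat": {"sleeps": 0.6, "runs": 0.4},
    "dog": {"sleeps": 0.3, "runs": 0.7},
    "machine": {"computes": 0.8, "sleeps": 0.2},
}

def generate(start: str, length: int = 3) -> str:
    """Build a phrase word by word, sampling each next word at random
    in proportion to its estimated probability."""
    words = [start]
    while len(words) < length and words[-1] in NEXT_WORD:
        options = NEXT_WORD[words[-1]]
        choice = random.choices(list(options), weights=list(options.values()))[0]
        words.append(choice)
    return " ".join(words)

# Because of the randomness, the same "question" asked twice
# can yield two different "answers":
print(generate("the"))
print(generate("the"))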
Much of this knowledge constitutes 'common sense', a heritage passed on from generation to generation that enables us to survive in our environment and to interact with other human beings, sharing the deep meaning of words. Words that, for us, are not mere symbols but semantic markers of reality. Common sense has always been, and still is, a subject of study and research. The development of computer science is closely linked to the many possible ways of capturing it, representing it in so-called ontologies, and using it in deductive, inductive and abductive automatic reasoning systems. Since the time of Aristotle, logic has been the greatest conceptual challenge for mankind.
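To give a concrete, if tiny, idea of deductive reasoning over an ontology, here is a sketch in Python. The facts and the single transitivity rule are invented for illustration; real ontology languages and reasoners, and inductive or abductive reasoning, are far richer.

```python
# A tiny "ontology" of common-sense facts: each entry says that the
# first thing is a kind of the second. Invented for illustration only.
IS_A = {
    "canary": "bird",
    "bird": "animal",
    "animal": "living thing",
}

def deduce(entity: str) -> list[str]:
    """Deduction by transitivity: if a canary is a bird and a bird is
    an animal, then a canary is also an animal."""
    kinds = []
    current = entity
    while current in IS_A:
        current = IS_A[current]
        kinds.append(current)
    return kinds

print(deduce("canary"))  # ['bird', 'animal', 'living thing']
```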
Human nature wants the person to be in control of machines, not subject to their domination. A group of researchers at Rice University recently discovered that the human workers in charge of training AI systems, to ease a repetitive and tedious procedure, nevertheless use those very same systems, in a way violating the instructions they receive. In practice, the annotations supplied by the trainers are generated 'synthetically' by the systems being trained, in what we might metaphorically call 'computer incest', which could slowly degrade the quality of the trained neural network. The same problem arises when a new AI system is created using texts from the Internet, which are increasingly generated by other AI systems.
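To see why such a loop can be harmful, here is a toy numerical experiment in Python, a deliberately simplified sketch rather than a simulation of any real system: a 'model' reduced to a mean and a spread is repeatedly refitted on small samples generated by its previous version.

```python
import random
import statistics

# Generation 0: the "model" is fitted on human-produced data.
mean, stdev = 0.0, 1.0

for generation in range(1, 201):
    # Generate a small batch of synthetic data from the current model...
    synthetic = [random.gauss(mean, stdev) for _ in range(10)]
    # ...then "retrain" the next model on that synthetic data alone.
    mean = statistics.fmean(synthetic)
    stdev = statistics.stdev(synthetic)
    if generation % 50 == 0:
        print(f"generation {generation}: spread = {stdev:.4f}")

# Typical outcome: the spread drifts toward zero, i.e. the variety
# present in the original data is gradually lost and the model's
# output becomes ever more uniform and impoverished.
```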
Generative artificial intelligence is still in its infancy and, as such, is destined to grow and evolve. Its training on human-produced texts makes it inevitably human-like. Its enormous computing power makes it seem superhuman, as when it performs tasks whose sheer amount of calculation would be inconceivable for us. The absence of common sense, however, reveals its inherent and inevitable inhumanity. At present, the most likely hypothesis is that AI will become increasingly superhuman (growing knowledge and performance) but also less and less inhuman (evolving algorithms, handling of meaning). In the background remains the hypothesis that, at some point, some form of artificial consciousness may emerge that would even make it autonomous.
www.avvenire.it