The 2020 Philosophy Festival took the machine as its main theme, and among the questions the philosophers addressed, one dear to Ellysse and BOTrainer stood out: the relationship between ethics and artificial intelligence.
Machines, in the broad sense, are AI-based technologies that are not limited to mechanically carrying out the tasks programmed for them: with training such as that performed by our #BOTrainer, they carry out tasks in synergy with the human team, supporting it and sometimes replacing it.
These virtual assistants become a concrete help in companies because, in some cases, they manage to operate completely autonomously.
But what price does this autonomy carry? The answer is closely tied to the question of what relationship exists between ethics and artificial intelligence.
Everything revolves around decision-making capacity: AI-based machines base their decisions on data which, stored and trained, for example by a good BOTrainer, form their database, their knowledge. But everything is fallible, even machines, which run into errors precisely because of the nature of that data. Without going into very complex questions, the data that form the knowledge of virtual assistants are copies of reality and, as such, imperfect.
For this reason, the decisions made on the basis of these copies are themselves imperfect and fallible.
Fallible decisions make the decision-making capacity of machines dangerous: the American case of a driverless-car test that ended in the worst possible way is well known. Quoting Paolo Benanti, a theologian friar and a leading scholar of the governance of innovation in the digital age between America and Europe:
“Since AIs base their decisions on data, and since data are not a perfect copy of reality, the sapiens machine will obviously not be infallible. This is precisely what makes a shared ethical approach absolutely necessary, to avoid actions and decisions that could harm people or create imbalances at the individual and social level.”
The shared ethical approach refers, for example, to the activity of the BOTrainer, who supervises the machine or virtual assistant and works to provide it with tools for improvement and control. This should also put an end to the unfounded fear that machines will replace humans by stealing their jobs. This is partly true, but as Ferraris, theoretical philosopher at the University of Turin, points out, the machine can have infinite inputs, yet these are always determined by humans. The only job a machine can steal from a person, then, is the one that brings that person nothing but boredom and annoyance. The job opportunities the machine offers, on the other hand, are far more stimulating and productive and, according to #BOTrainer, are gaining considerable ground in the technological landscape of recent years.