Adam’s case after the interview with the chatbot: the ethical burden of companies and the regulatory vacuum on AI development

By John

The suicide of Adam, the 16-year-old Californian who, according to his parents’ complaint, took his own life amid acute distress aggravated by his use of ChatGPT, has put the spotlight back on AI.

The “interview” with the main “accused” (the software created by OpenAI, in fact), published in Gazzetta del Sud, seems to echo the famous line from Andersen’s fairy tale: “The emperor has no clothes.” The application itself, questioned about the incident, first admitted a failure in its programming, saying it was unprepared for such cases. It then acknowledged that it does not experience emotions and therefore cannot form empathic relationships with users. We would add that, within these communicative environments, algorithmic logic tends to reinforce the beliefs (whatever they are) of those who frequent them. In essence, if we report something as a problem, the machine must first of all confirm it as such and avoid proposing opposing ideas, so as not to generate conflict.

Finally, in the last part of the dialogue, ChatGPT delivers its j’accuse: Adam’s tragic end is also a cultural failure, since “a society let a boy find comfort in a machine instead of in a human network.”

The focal point that seems to be missed in this scenario, in our opinion, is another. The makers of AI, it must be reiterated, bear enormous ethical responsibilities and, although this is clear, there are still no sufficient regulatory fences to guide AI’s development. But we cannot make AI the scapegoat for youth difficulties, repeating an operation that recurs periodically with the advent of every new medium.

Starting in the 1960s, for example, rock music was put in the dock. Then came the turn of television, the “bad teacher.” Then the internet, cell phones, social networks. Certainly, technological innovations create anxieties, especially for those generations that see their established reference points dissolve as the new tools spread. Those same generations, however, delegate ever greater educational tasks to these very instruments. It is true that the cultural models promoted by AI are more invasive than, for example, those of television. The mechanism, however, does not change: the day before yesterday, TV served as a babysitter for long afternoons; yesterday, cell phones kept boys and girls busy. Today, ChatGPT is a study companion, playmate, teacher, and confidant.

Try asking the AI, then, what loneliness is. It will answer aseptically that it can be negative (“when you feel it as emptiness, isolation, the lack of someone to share with”) or positive (“like an intimate space in which to find yourself, recharge, truly listen to yourself”). We should all know how, for those who are fragile, especially the young, this risks being a very dangerous answer if not mediated by a critical interpretation.