“When a mess happens, when they are in trouble, what should I do?” This is the question, simple, direct, desperate, that many teenagers address to the adult world today, often without receiving an answer. The web, which for digital natives is far more than a space they merely pass through, can turn into a silent trap: words, emojis and memes become weapons, and the border between joke and violence becomes thin, shifting, dangerous.
This is where NetGuardian comes into play: an application based on artificial intelligence, designed to monitor school chats in real time, measure exposure to the risk of cyberbullying and support educators, teachers and students in spotting warning signs before it is too late. A true digital sentinel at the service of the educating community, capable of reading the language and dynamics of conversations between peers and providing an objective indication of each class’s level of risk.
The project was born from a partnership between the University of Padua, the Carolina Foundation (active for years on the online safety front) and the TIM Foundation, which financed its development through a call for ideas dedicated to social innovation.
Scan the QR code to try the demo version of NetGuardian
An algorithm that listens, observes and protects
At NetGuardian’s heart is a machine learning algorithm applied to natural language processing (NLP). The AI analyzes text conversations between students, looking for “sentinel repertoires”: specific linguistic and relational patterns that indicate growing exposure to the risk of harmful behavior, such as insults, isolation, threats and manipulation.
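The article does not detail how NetGuardian’s model is built, but the idea of scanning messages for recurring “sentinel” patterns can be illustrated with a deliberately simplified sketch. Everything below is an assumption for illustration: the pattern categories and keyword lists are invented, and the real system relies on a trained NLP model rather than keyword matching.

```python
import re
from collections import Counter

# Hypothetical, illustrative pattern categories; NetGuardian itself uses a
# trained NLP model, not a keyword list like this one.
SENTINEL_PATTERNS = {
    "insult":       re.compile(r"\b(idiot|loser|stupid)\b", re.IGNORECASE),
    "isolation":    re.compile(r"\b(nobody wants you|stay out|you're not invited)\b", re.IGNORECASE),
    "threat":       re.compile(r"\b(you'll regret|watch your back|or else)\b", re.IGNORECASE),
    "manipulation": re.compile(r"\b(if you were a real friend|everyone agrees with me)\b", re.IGNORECASE),
}

def count_sentinel_hits(messages: list[str]) -> Counter:
    """Count, per category, how many messages match a sentinel pattern."""
    hits = Counter()
    for msg in messages:
        for label, pattern in SENTINEL_PATTERNS.items():
            if pattern.search(msg):
                hits[label] += 1
    return hits

# Example: a short simulated exchange
chat = ["you're such a loser", "come on, it was a joke", "nobody wants you here"]
print(count_sentinel_hits(chat))   # Counter({'insult': 1, 'isolation': 1})
```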
The result is a numerical index from 0 to 10, divided into four risk bands: low, medium, medium-high and high. «NetGuardian does not work with fixed categories of bully and victim, because they do not exist», explains Professor Gian Piero Turchi, scientific director of the project. «There are only interactions that are more or less oriented toward the health of the school community».
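The 0-to-10 index and its four bands lend themselves to a compact mapping, sketched below. The band names come from the project description, while the numeric cut-offs are assumptions chosen only for illustration, since the article does not publish NetGuardian’s calibration.

```python
def risk_band(score: float) -> str:
    """Map a 0-10 risk index to one of NetGuardian's four bands.

    The band names come from the project description; the numeric
    thresholds are illustrative assumptions, not the official calibration.
    """
    if not 0 <= score <= 10:
        raise ValueError("score must lie between 0 and 10")
    if score < 3:
        return "low"
    if score < 6:
        return "medium"
    if score < 8:
        return "medium-high"
    return "high"

print(risk_band(2.5))   # low
print(risk_band(8.7))   # high
```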
The app consists of two environments. The first is a messaging platform used by students, with nicknames, avatars and a familiar interface. The second is a dashboard dedicated to teachers, who receive in real time the degree of risk exposure of their class, in aggregate and anonymous form. In case of medium-high risk, the application provides tips for didactic intervention; if the risk is high, the Carolina Foundation task force can be activated directly.
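How the teacher dashboard might turn per-student scores into an aggregate, anonymous class-level indicator, and trigger the two escalation paths described above, can be sketched as follows. The averaging rule, thresholds and function names are hypothetical; the article only states that medium-high risk yields didactic tips and that high risk can activate the Carolina Foundation task force.

```python
from statistics import mean

# Illustrative thresholds; the article does not publish the real cut-offs.
MEDIUM_HIGH, HIGH = 6.0, 8.0

def class_dashboard(student_scores: list[float]) -> dict:
    """Aggregate per-student risk scores into an anonymous class-level view.

    Only the aggregate index and the suggested action reach the teacher;
    no names or message contents are exposed.
    """
    class_index = round(mean(student_scores), 1)
    if class_index >= HIGH:
        action = "alert: Carolina Foundation task force can be activated"
    elif class_index >= MEDIUM_HIGH:
        action = "suggest didactic intervention tips to the teacher"
    else:
        action = "no intervention suggested; keep monitoring"
    return {"class_risk_index": class_index, "suggested_action": action}

print(class_dashboard([3.0, 6.5, 7.5, 9.0]))
# {'class_risk_index': 6.5, 'suggested_action': 'suggest didactic intervention tips to the teacher'}
```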
«The added value of the app», underlines Ivano Zoppi, Secretary General of the Carolina Foundation, «is the ability to intervene upstream, before the negative action occurs. While many tools operate downstream, when the damage has already been done, NetGuardian recognizes the relational signals that can turn into violence».

The results of the experimentation
Between December and May, NetGuardian was tested in 18 classes of lower and upper secondary schools in Lombardy and Piedmont, involving 263 students aged between 13 and 19. The students chatted on a simulated platform, also interacting with a “shadow” profile created to test their reactions to situations of varying risk. Each of them was then asked to rate the level of danger they perceived.
70% did not recognize that they were in a risky situation. Only 30% reported perceiving a risk, and almost exclusively when the content was openly aggressive or harmful. «Direct experience is not enough», explains Turchi. «The most serious finding is that even those who have suffered or practiced cyberbullying in the past do not develop a greater capacity to recognize it. What is missing is not knowledge of the phenomenon, but relational awareness».
Not even slang-heavy language affected the level of risk. «Slang, verbal traps, strong expressions», clarifies Zoppi, «are not dangerous in themselves. They become dangerous when they are grafted onto asymmetrical, repeated, devaluing relationships. That is where the algorithm intervenes, providing an objective measure».
The numbers are merciless: only 3% of the students involved would have contacted an expert; 9% would not know who to turn to; 31% would seek help from their family. And a full 30.9% of the kids involved in high-risk chats felt no need to tell anyone, because “it was only bravado among friends”.
A community that observes, not one that spies
«NetGuardian does not read names, surnames or specific content», specifies Teresa Camellini, project manager, «but intercepts dynamics, linguistic and relational patterns. It is a system that measures the atmosphere of the interactions, offering adults a parameter to reflect on, not a tool of control».
Feedback from schools was very positive: 72% of teachers recognized NetGuardian as “an innovative, effective and non-invasive tool”. Teachers do not read the chats, but receive a relational temperature of the class. This allows timely pedagogical interventions, before the damage occurs.
After the experience, most students also said they were willing to ask for help. 70% said that, if put in a position to recognize the risk, they would turn to a reference figure: family (31%), friends (17.5%), experts (11%). «The closer the reference figure, the more the kids trust them», reflects Zoppi. «But the point is to help them understand when they need to ask for help».
The challenge of the future
Today NetGuardian is available to all Italian schools that wish to adopt it: it is enough to request it from the Carolina Foundation. But the project has already shown that it can go beyond monitoring.
From it was born Net. (Rescue Team), a free service with three multidisciplinary teams ready to intervene in cases of online distress. And, in the heart of Milan, a therapeutic center that welcomes adolescents who are victims or perpetrators of cyberviolence. «We call them the refugees of the digital ocean», says Zoppi. «Kids looking for refuge, listening, care. And they tell us that we cannot limit ourselves to prevention: we must be there when the pain explodes».
NetGuardian’s real novelty lies not in the technology but in the change of paradigm: there is no longer any need to wait for the victim’s complaint, because the algorithm detects discomfort on its own, within the network of relationships. «We are convinced», concludes Zoppi, «that today there can be no education without support. Schools must be able to count on tools that help them to see, to understand, to intervene».
An adult answer to a question from the kids
In the world of school, innovation is often frightening. But here the aim is to innovate in order to protect, not to monitor. «Technology is not the problem», underlines Giorgia Floriani, general manager of the TIM Foundation. «It becomes one when it is used without awareness. But it can also be part of the solution. NetGuardian was born for this: to give technology back an educational function».
To those who fear a “machine that judges”, the promoters respond with a broader vision: that of an artificial intelligence at the service of a “theoretical intelligence”, that is, of critical, pedagogical, human thinking. «There is nothing science-fictional about NetGuardian», concludes Zoppi. «On the contrary, it is a concrete, pragmatic tool, which gives teachers back the chance to stand beside the kids, not above them».
Bullying and cyberbullying are not abstract concepts, but real wounds in the school and social fabric. With NetGuardian, educators, parents and students have an extra ally. Not to “spy on” young people, but to walk alongside them, recognize the signals, build healthier relationships.
And, finally, to give a real answer to the question we can no longer ignore: “When they are in trouble… what should I do?”.