Artificial intelligence is emerging as a powerful tool for combating child abuse online, a problem that has grown markedly since the pre-pandemic period. The digital world offers many opportunities for learning, entertainment and communication, but it also presents serious risks to the safety of young people. Among these, the most worrying are the dissemination of images resulting from the sexual abuse of minors (child sexual abuse material, CSAM) and the predatory behaviors that precede its production.
According to UNICEF, one in five girls and one in 13 boys are sexually abused or exploited, and online interactions are now at the heart of almost every incident.
Online abuse is, in fact, growing at an alarming rate: 2022 alone recorded a peak of 32 million reports of suspected cases. These include both the dissemination of material depicting minors (such as photos and videos) and episodes of so-called grooming: manipulation practices aimed at exploiting and abusing minors. The latter recorded an 82% increase in cases over the past year, with sextortion the most common form.
The phenomenon affects the European Union in particular, which accounted for 68% of the 2022 reports. In the face of these appalling figures, significant initiatives to combat the phenomenon have already been taken: the European Commission, notably, presented a proposal for a regulation in 2022 to introduce obligations to prevent and combat all forms of child abuse on the Internet.
Specifically, the objectives are to ensure the detection, reporting and removal of child sexual abuse online; to improve legal certainty, transparency and accountability while ensuring the protection of fundamental rights; and to reduce the proliferation and effects of child sexual abuse through the harmonisation of rules and better coordination of efforts. The latter would be ensured through the establishment of a new specialised agency: the EU Centre on Child Sexual Abuse (EUCSA).
Further proof of the increased sensitivity to the subject comes from the recent approval of the Digital Services Act and the AI Act by the European Parliament, as well as the Online Safety Bill currently under discussion in the UK. These legislative initiatives stem from a common concern for the increasingly uncertain safety of Internet users, and could soon be followed by the adoption of the proposed European Regulation on preventing and combating the sexual abuse and sexual exploitation of children.
Emblematic of this uncertainty is the recent decision of the Supreme Court of the United States not to resolve the long-standing debate on platforms' immunity with respect to content published by their users, a question on which Congress will probably intervene in the coming years.
However, the sheer volume of material to be monitored across the network does not allow for effective (or efficient) manual control by either platforms or law enforcement. "Self-generated" abuse material alone, that is, explicit photos and videos made by minors themselves, grew by 374% in 2021 compared to pre-pandemic levels, making non-automated monitoring and control even more difficult.
The contribution of artificial intelligence
AI, by contrast, offers a wide range of possibilities for identifying and combating child sexual abuse on the Internet. Through the analysis of digital content, it can detect suspicious images and videos, automatically identifying signals and characteristics that indicate the presence of sexual abuse. This rapid and efficient analysis capability would enable illegal content to be identified and removed in a timely manner, reducing children's exposure to harmful images and the dissemination of material concerning them.
Another field of application for AI is the prevention of online sexual abuse. Through real-time analysis of user behavior on the network, it can identify patterns and signals that may indicate a child's interaction with an abuser. By monitoring online chats, messages, and activities, AI can detect suspicious patterns of behavior and even automatically alert parents, guardians, or safety workers so they can take timely action to protect the child.
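The behavioral-flagging idea described above can be sketched in a few lines. The sketch below is purely illustrative and deliberately naive: the risk indicators, scoring and threshold are invented for this example, whereas real detection systems rely on trained models and far richer conversational context.

```python
# Illustrative sketch only: a naive, rule-based flagger for suspicious chat
# patterns. The indicator list and threshold are hypothetical; production
# systems use trained models, not keyword lists.
RISK_INDICATORS = [
    "don't tell your parents",
    "this is our secret",
    "send me a photo",
    "how old are you",
]

def risk_score(messages):
    """Count how many risk indicators appear across a chat's messages."""
    text = " ".join(m.lower() for m in messages)
    return sum(1 for phrase in RISK_INDICATORS if phrase in text)

def should_alert(messages, threshold=2):
    """Flag the conversation for human review above a (hypothetical) threshold."""
    return risk_score(messages) >= threshold
```

A flagged conversation would then be routed to a parent, guardian, or moderation team rather than acted on automatically, since keyword matches alone produce many false positives.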
Collaboration between technology companies and law enforcement is also critical to ensuring the effectiveness of AI in online child protection. Companies can, in fact, develop and implement AI tools that automatically analyze content uploaded to their platforms, immediately reporting illegal or suspicious content to moderation managers and competent authorities.
The United Nations and the AI4SC project
An example of the application of AI to effectively counter child abuse online comes from the United Nations: in 2020, the United Nations Interregional Crime and Justice Research Institute's (UNICRI) Centre for AI and Robotics and the Ministry of Interior of the United Arab Emirates launched the AI for Safer Children (AI4SC) project.
As part of this initiative, the AI for Safer Children Global Hub was created: a centralized platform for police forces around the world designed to combat child abuse, with the ambition of deriving a model for further issues related to risky material, from fake news to extremist propaganda.
The Global Hub thus provides police with a catalog of artificial intelligence tools that can be used in investigations, enabling selection according to specific investigative needs; it offers specialized training on child abuse and its implications for the mental health of victims; and it enables networking among agencies in different countries, creating a stronger international law enforcement community through the sharing of experiences with artificial intelligence (the project counts 270 investigators from 72 countries).
The need for an integrated approach
However, the use of AI for online child protection also raises important ethical issues: it is essential to ensure that AI is used responsibly and that the individual rights of users are respected. Automated content analysis may involve the collection and processing of large amounts of personal data, which requires adequate data protection and security.
The AI4SC project itself rests on a specific legal and ethical strategy, built on fundamental principles including a responsible approach to the purchase, development, and deployment of artificial intelligence technology so as not to curtail the fundamental freedoms of individuals. Likewise, the work of the investigators and tech companies who cooperate with the project is guided by the principles of the 1989 Convention on the Rights of the Child.
Moreover, AI cannot be the only tool in online child protection. A combination of efforts among parents, educators, online safety practitioners and legislators is needed to create a safe environment for children on the Internet. Raising awareness of online threats, digital education, and promoting safe online behaviors are equally important to ensure the protection of children.
Information materials such as those disseminated by companies like Thorn – including in the context of the many phenomena of non-consensual pornography – can allow a safer approach by minors (and their parents) to their inevitably online lives. These can be usefully complemented by CSAM identification and prevention tools that are easily accessible to private platforms and police forces and have already had a significant impact.
The Safer software, for example, boasts the identification of one million cases and makes available to its users a database of 32 million hash codes identifying abuse material already detected on the network, including by other programs.
The use of artificial intelligence for online child protection and countering the dissemination of images resulting from sexual abuse is thus already proving to be an important development in the field of digital safety.
However, it is crucial to reiterate the need for ethical use, one that ensures the protection of the individuals subject to the controls and the security of the personal data processed. Through effective (digital) collaboration between web platforms and law enforcement, on the one hand, and proper education on online life, on the other, it will then be possible to achieve a safe online environment for minors, enabling them to enjoy the benefits of technology in a protected manner.
 M. Grzegorczyk, The AI for Safer Children initiative – a collaboration between the UN Interregional Crime and Justice Research Institute’s Centre for AI and Robotics and the Ministry of Interior of the UAE – is helping law enforcement agencies tap into the potential of AI, UNICRI; UNICEF, Children from all walks of life endure violence, and millions more are at risk.
 The text of the proposal approved by the European Parliament: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
European Commission, EU Strategy for a more effective fight against child sexual abuse.
Internet Watch Foundation, ‘Self-generated’ child sexual abuse prevention campaign.
 It was precisely concerns about the risks posed by the use of artificial intelligence that recently led to the passage of the much-cited AI Act.
AI for Safer Children, The core principles of the AI for Safer Children initiative.
 The framework for this approach is the United Nations Guiding Principles on Business and Human Rights.
 An alphanumeric code that uniquely identifies a particular file and from which the original file cannot be reconstructed. For more, see https://www.ionos.it/digitalguide/server/sicurezza/funzione-di-hash/