doi: 10.56294/ai2024111

 

REVIEW

 

Artificial intelligence and digital sex crimes: an emerging threat to children

 

Inteligencia artificial y delitos sexuales digitales: una amenaza emergente para la infancia

 

Antonella Petrini1, María Belén Gauna1

 

1Universidad Siglo 21. Córdoba, Argentina.

 

Cite as: Petrini A, Gauna MB. Artificial intelligence and digital sex crimes: an emerging threat to children. EthAIca. 2024; 3:111. https://doi.org/10.56294/ai2024111

 

Submitted: 30-06-2023          Revised: 19-11-2023          Accepted: 14-04-2024          Published: 15-04-2024

 

Editor: PhD. Rubén González Vallejo

 

ABSTRACT

 

The article analysed the impact of artificial intelligence (AI) on the creation of child sexual abuse content, an emerging problem within cybercrime. In the introduction, AI was presented as a technology capable of imitating human cognitive functions, highlighting its potential both for progress and for illicit uses, especially in digital environments. The development section explained how generative AI has been used by criminals to produce CSAM (Child Sexual Abuse Material) without any physical contact with real victims, and showed how this technology makes it possible to create hyper-realistic images and fake sexual videos of minors, generating a new form of victimisation. Various international reports, such as those by the Internet Watch Foundation and the National Center for Missing & Exploited Children, revealed an alarming increase in this content on the web, as well as the difficulty of distinguishing it from real material. In addition, legal weaknesses were addressed in countries such as Argentina, where specific regulations to punish these technological crimes have not yet been established. In conclusion, the need was raised for an interdisciplinary response combining legislation, education, technology and social awareness. It was argued that without adequate regulatory frameworks and preventive strategies, AI could become a dangerous tool for facilitating crime. Finally, it was emphasised that protecting children and adolescents from these new digital threats should be an urgent priority on the public agenda.

 

Keywords: Artificial Intelligence; Child Sexual Abuse; Digital Criminology; Deepfake; Legislation.

 

RESUMEN

 

El artículo analizó el impacto de la inteligencia artificial (IA) en la creación de contenido de abuso sexual infantil, una problemática emergente dentro de la cibercriminalidad. En la introducción, se presentó a la IA como una tecnología capaz de imitar funciones cognitivas humanas, destacando su potencial tanto para el progreso como para usos ilícitos, especialmente en entornos digitales. Durante el desarrollo, se explicó cómo la IA generativa fue utilizada por delincuentes para producir MASI (Material de Abuso Sexual Infantil) sin necesidad de contacto físico con víctimas reales. Se expuso cómo esta tecnología permitió crear imágenes hiperrealistas y videos sexuales falsos de menores, generando una nueva forma de victimización. Diversos informes internacionales, como los de la Internet Watch Foundation y el National Center for Missing and Exploited Children, revelaron un alarmante incremento de estos contenidos en la web, así como la dificultad para diferenciarlos de materiales reales. Además, se abordaron las debilidades legales en países como Argentina, donde aún no se habían establecido normativas específicas para sancionar estos delitos tecnológicos. En la conclusión, se planteó la necesidad de una respuesta interdisciplinaria que combinara legislación, educación, tecnología y concientización social. Se argumentó que, sin marcos normativos adecuados ni estrategias preventivas, la IA podría convertirse en una herramienta peligrosa para facilitar delitos. Finalmente, se enfatizó que proteger a niños y adolescentes frente a estas nuevas amenazas digitales debía ser una prioridad urgente en la agenda pública.

 

Palabras clave: Inteligencia Artificial; Abuso Sexual Infantil; Criminología Digital; Deepfake; Legislación.

 

 

INTRODUCTION

Artificial intelligence has transformed everything from industrial automation to the personalization of digital content. As a starting point, AI can be defined as "the design of processes that, when executed on a physical architecture, produce results that maximize a certain measure of performance. These processes are based on sequences of inputs that are sensed and stored by the aforementioned architecture (by means of an algorithm)".(1)

“Colloquially, the term artificial intelligence is applied when a machine mimics the cognitive functions that humans associate with other human minds, such as perceiving, reasoning, learning and problem-solving”. This ability to mimic cognitive processes and automate complex tasks has not only revolutionized sectors such as industry and communication but has also enabled significant advances in other fields. At the same time, it has opened new opportunities for the misuse of technology in criminal activities, especially in the digital realm. These advances also raise ethical issues; as Gabriela Ramos, Assistant Director-General for Social and Human Sciences at UNESCO, points out, “these rapid changes also raise profound ethical dilemmas, arising from the potential for AI-based systems to reproduce biases, contribute to climate degradation and threaten human rights, among others”.(2)

Furthermore, artificial intelligence has created a unique opportunity for criminals to produce an infinite amount of child sexual abuse content. All AI-generated variants of such content are illegal, regardless of whether they depict a real child.(3)

As the Internet Watch Foundation notes, “the majority of AI child sexual abuse images identified by IWF analysts are now realistic enough to be treated as real images under UK law”. The IWF indicates that even specialized, experienced analysts find it difficult to distinguish AI-generated CSAM from real material because it appears so realistic.(4) Hence the importance of analyzing this new and difficult-to-detect problem, which requires preventive measures and legislation from governments as well as awareness-raising among citizens.

This phenomenon has shown alarming growth, prompting various organizations to produce detailed reports. A growing trend has been identified in which artificial intelligence is exploited to produce child sexual abuse material. The IWF's initial report, in October 2023, revealed the presence of over 20,000 AI-generated images on a single dark web forum in one month, more than 3,000 of which depicted criminal child sexual abuse activities. Since then, the problem has escalated in severity. According to the updated July 2024 report, not only has the number of images increased, including more serious imagery, but video generation has also been recorded. Moreover, this content is no longer posted only on the dark web but also on the surface web.(4)

In 2023, the CyberTipline of the National Center for Missing & Exploited Children received 4,700 reports linked to child sexual abuse material or sexually exploitative content created with Generative Artificial Intelligence (GAI) technology. This type of material, known as GAI CSAM, shows digitally created children in sexually explicit situations and can be produced on demand by users of certain generative AI platforms. The technology also allows for the creation of sexually explicit deepfake-style images and videos, starting from an innocent photo of a real child and generating a computerized, sexually explicit version of that child.(5) This significant increase in reports, together with the evolution of the techniques used to create CSAM, underscores the urgency of addressing the problem comprehensively.

In the local context, there is an increasing number of reports and a growing visibility of these illegal actions. The Cybersecurity Centre of the Province of Buenos Aires published a note on its website warning that authentic images can be modified to create pornography, called ‘deep nudes,’ constituting a severe form of cyberstalking and violation of people's sexual integrity. Artificial intelligence can transform our lives, but it also presents challenges that must be addressed proactively. Having preventive measures and tools in place to reduce risks is essential, ensuring the responsible use of these technologies.(6)

CSAM (Child Sexual Abuse Material) generated by AI can be fully synthetic or based on real children.(7) According to Cordeiro (2024): “This content poses a dual threat: by itself it enables the generation of new and harmful CSAM and, simultaneously, AI tools allow criminals to access and disseminate other CSAM at a faster rate than ever before.” Illicit content may include manipulated images of real minors or entirely artificial creations. AI's ability to generate content that did not previously exist, known as its “generative” capability, increases the risk to potential victims by enabling the creation of these images.

Protecting children and adolescents from the impact of child sexual abuse material produced with generative artificial intelligence involves education and support from adults, who have the opportunity to guide young people in using this tool safely and in understanding the risks of misusing it to create such content.(5) This educational approach highlights the need for prevention strategies that promote the responsible use of Generative Artificial Intelligence (GAI) and raise awareness of the risks and consequences associated with the creation of Child Sexual Abuse Material, in order to prevent it and protect minors.

These illicit activities can be directly related to criminology, primarily through routine activities theory. This approach offers both macro- and micro-level explanations for the configuration of crime rates. At the micro level, it argues that crime typically occurs when a motivated offender encounters a suitable target in the same place and at the same time, in the absence of a capable guardian to prevent the crime.(8)

Complementarily, at the macro level, social organization and the routines of daily life increase the likelihood that offenders and targets will converge in specific places at particular times.(8) In this context, offenders may find in artificial intelligence a tool that facilitates criminal opportunities: quick access to information and the lack of need for direct contact with victims simplify the commission of these crimes. Finally, the absence of a ‘capable guardian’ manifests itself in the lack of regulation, enforcement, and knowledge necessary to prevent these acts effectively.

Technology has advanced exponentially, and artificial intelligence has become integrated into many aspects of daily life, transforming and automating tasks in ways that once seemed impossible. However, what happens when this powerful tool is used for criminal purposes and without proper oversight? The ubiquity of AI in society has not been thoroughly analyzed in terms of its potential risks, especially in sensitive contexts such as the facilitation of crime. The technology that drives progress can also become a dangerous weapon if its ethical and criminal implications are not assessed and managed.

 

DEVELOPMENT

The production and distribution of Child Sexual Abuse Material (CSAM) using Artificial Intelligence (AI) tools poses a new paradigm in digital criminology. This issue must be understood from a multidisciplinary perspective that includes legal, social, technological, and criminological analysis. The theoretical framework underpinning this research is structured around four fundamental pillars: the definition and application of AI, the conceptualization of CSAM, the legal vacuum in its regulation, and the criminological theory of routine activities.

 

Artificial Intelligence (AI)

AI is defined as the set of systems or machines that mimic human intelligence to perform tasks and can iteratively improve on the information they gather. According to Boden,(9) it aims to simulate human cognitive functions, such as reasoning, learning, and problem-solving. These systems rely on perceived and stored inputs to generate intelligent responses and actions. AI has become a fundamental tool in various fields, but its misapplication poses serious ethical and criminal risks.(10)

 

Child Sexual Abuse Material (CSAM)

CSAM comprises any visual depiction of minors engaged in sexual activities or depicted for sexual purposes. According to the Convention on Cybercrime,(11) this also includes artificially generated hyper-realistic depictions. Various organizations, such as the Ministerio Público Tutelar de Buenos Aires(12) and Humanium,(7) agree that the term ‘child pornography’ is inappropriate because it trivializes the violent nature of the crime. The concept of CSAM underscores that these materials are evidence of abuse, not voluntary sexual content.

 

Legal Loopholes and Draft Legislation

The current regulatory framework is insufficient to address the creation of CSAM through the use of AI. In Argentina, there is currently no specific law regulating the use of AI in criminal contexts, although projects are underway to reform Article 128 of the Penal Code.(13,14,15) These bills seek to punish the generation and distribution of child sexual content through deepfakes and other AI technologies. Internationally, some US states have legislated on non-consensual deepfake pornography, but the results are still limited.(16)

From environmental criminology, routine activities theory(8) posits that crime occurs when a motivated offender, a suitable target, and the absence of a capable guardian come together.(17,18,19,20) In the digital context, AI has increased the number of available targets (digitally represented children), reduced the need for physical contact with victims, and weakened the role of guardians due to the lack of legislation and control on virtual platforms. Thus, AI functions as a catalyst that facilitates crime and complicates its detection.(21,22,23,24)

AI-facilitated CSAM represents an emerging phenomenon in cybercrime that challenges traditional categories of criminal law and criminological theory.(25,26,27,28) This theoretical framework allows us to understand how technology, in the absence of regulation and surveillance, can be instrumentalized for the commission of highly harmful and difficult-to-detect crimes.(29,30,31,32,33) Thus, it is essential to address this problem from an interdisciplinary approach that combines technology, law, criminology, and education.

 

CONCLUSIONS

Artificial intelligence has significantly enhanced human capabilities in various areas, including the automation of processes and the creation of digital content. However, its application in criminal contexts poses unprecedented and urgent challenges. In particular, the generation of Child Sexual Abuse Material (CSAM) through AI tools represents a tangible threat to the integrity of children and adolescents while also highlighting the normative, ethical, and technological limitations of our societies.

Through the analysis developed, it is found that AI-generated CSAM not only reproduces the harms of traditional representations of child sexual abuse but also amplifies their scope and complexity. This content can be produced without physical contact with a victim, which dilutes the perceived risk on the part of the aggressor but does not diminish the social harm or the need for protection. Indeed, the “generative” capacity of AI - that is, its ability to create novel images from existing data - allows for the mass creation of illicit content, even from manipulated real photos of minors. This introduces a new level of victimization, in which a child's image can be exploited without the victim's knowledge.

An inadequate legal framework further exacerbates the situation. As mentioned above, Argentine legislation does not yet specifically address the generation of CSAM through AI. Although there are parliamentary proposals to reform Article 128 of the Penal Code, their implementation remains incipient. At the international level, some legislative initiatives have begun to consider these issues, but there is no global consensus or uniform protocol to guarantee effective responses. This gap makes the absence of a ‘capable guardian,’ a central concept in routine activities theory, one of the main risk factors for the proliferation of these crimes.

From a criminological perspective, the use of AI as a criminal tool is driven by a convergence of enabling factors: technological ease, anonymity, perceived impunity, and a growing supply of platforms with increasingly powerful functionalities. Routine activity theory helps explain how the availability of targets (digitally represented minors), the motivation of the offender, and the weakness of control mechanisms work together to facilitate the commission of crime. Thus, AI is not only presented as a technical tool but also as a criminological enabler.

In this scenario, an interdisciplinary response is indispensable. The design of public policies must integrate the legal dimension, with the enactment of updated laws; the technological dimension, with the development of algorithms capable of detecting and blocking CSAM content; the educational dimension, with awareness-raising programs aimed at the public and at the responsible use of AI; and the social dimension, with strategies to support victims and their environments. The protection of the rights of children and adolescents must become a top priority in the regulation of the digital environment.
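As a purely illustrative sketch of the detection dimension mentioned above: platforms commonly compare uploaded images against databases of hashes of known illegal material using perceptual hashing, which, unlike cryptographic hashing, tolerates small alterations. The simplified average-hash example below is hypothetical (tiny 4x4 grids stand in for images; real systems use robust industrial hashes such as PhotoDNA over full images and far larger databases), but it conveys the principle:

```python
# Illustrative sketch only, not a production detector: a simplified
# "average hash" comparison of the kind used to match uploads against
# a database of hashes of known illegal material.

def average_hash(gray):
    """Bit-per-pixel hash: 1 where a pixel is above the image mean."""
    pixels = [p for row in gray for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_hash(candidate, known_hashes, threshold=2):
    """Flag a hash within `threshold` bits of any known hash."""
    return any(hamming_distance(candidate, k) <= threshold
               for k in known_hashes)

# Hypothetical 4x4 grayscale "images": `altered` is a lightly edited
# copy of `original`; `unrelated` is a different pattern entirely.
original  = [[200, 200, 10, 10],
             [200, 200, 10, 10],
             [10, 10, 200, 200],
             [10, 10, 200, 200]]
altered   = [[198, 201, 12, 10],
             [200, 199, 10, 11],
             [10, 12, 200, 198],
             [11, 10, 201, 200]]
unrelated = [[10, 200, 10, 200],
             [200, 10, 200, 10],
             [10, 200, 10, 200],
             [200, 10, 200, 10]]

known = {average_hash(original)}
print(matches_known_hash(average_hash(altered), known))    # edited copy still matches
print(matches_known_hash(average_hash(unrelated), known))  # unrelated image does not
```

The design point is that a lightly edited copy still lands within a small Hamming distance of the original's hash, so re-uploads of known material can be flagged automatically even after resizing or minor retouching.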

In short, the advance of artificial intelligence requires an ethic of shared responsibility. Only through the articulation between the State, civil society, academia, and the private sector will it be possible to mitigate the adverse effects of this technology and promote its use for beneficial purposes, guaranteeing the dignity, integrity, and security of the most vulnerable sectors of society.

 

BIBLIOGRAPHICAL REFERENCES

1. Real Academia Española. Diccionario de la lengua española. s.f. Available from: https://dle.rae.es/inteligencia#2DxmhCT

 

2. Ministerio de Justicia de la Nación. Ley simple: Delitos contra la integridad sexual. s.f. Available from: https://www.argentina.gob.ar/justicia/derechofacil/leysimple/delitos-contra-la-integridad-sexual#titulo-7

 

3. Cordeiro VC. Combatir el aumento de material de abuso sexual infantil generado por la IA. Humanium; 2024. Available from: https://www.humanium.org/es/combatir-el-aumento-de-material-de-abuso-sexual-infantil-generado-por-la-ia/

 

4. Comisión Europea. AI Watch: Defining Artificial Intelligence. 2020. p. 12.

 

5. National Center for Missing & Exploited Children. Child Sexual Abuse Material (CSAM). s.f. Available from: https://missingkids.org

 

6. Biblioteca del Congreso de la Nación. Dossier Legislativo Nacional: Inteligencia Artificial. Buenos Aires: Biblioteca del Congreso de la Nación; 2024. Available from: https://bcn.gob.ar/uploads/adjuntos/Dossier-279-legis-nacional-inteligencia-artificial-feb-2024.pdf

 

7. Cordeiro VC. Abuso infantil. Humanium; 2021. Available from: https://www.humanium.org/es/enfoques-tematicos/violencia/abuso-infantil/

 

8. Vozmediano SL, San Juan Guillén C. Criminología ambiental, Ecología del delito y de la Seguridad. España: UOC; 2010.

 

9. Boden M. Inteligencia artificial. Madrid: Turner Publicaciones S.L.; 2016. p. 3.

 

10. UNESCO. Recomendación sobre la ética de la inteligencia artificial. 2021. Available from: https://unesdoc.unesco.org/ark:/48223/pf0000381137_spa

 

11. Consejo de Europa. Convenio sobre la ciberdelincuencia. 2001. Available from: https://www.oas.org/juridico/english/cyb_pry_convenio.pdf

 

12. Ministerio Público Tutelar de la Provincia de Buenos Aires. “Pornografía infantil”, no: se llama “abuso sexual contra niñas y niños y adolescentes”. 2021. Available from: https://mptutelar.gob.ar/pornograf-infantil-no-se-llama-abuso-sexual-contra-ni-y-ni-os-y-adolescentes

 

13. Romero A. Proyecto de Ley sobre el uso de IA en delitos sexuales infantiles. 2023.

 

14. Lospennato S. Proyecto de Ley Modifica art. 128 CPN. HCDN; 2023. Available from: https://www4.hcdn.gob.ar/dependencias/dsecretaria/Periodo2023/PDF2023/TP2023/4436-D-2023.pdf

 

15. Milman G. Proyecto de Ley Modificación del Artículo 128 del Código Penal Argentino. HCDN; 2023. Available from: https://www4.hcdn.gob.ar/dependencias/dsecretaria/Periodo2023/PDF2023/TP2023/4411-D-2023.pdf

 

16. Le Masurier J. Adolescentes en Estados Unidos afectadas por los 'deepfakes' pornográficos - En Foco. France 24; 2024. Available from: https://www.france24.com/es/programas/en-foco/20240509-adolescentes-en-estados-unidos-afectadas-por-los-deepfakes-pornográficos

 

17. Levine AS. Fotos de niños generados por IA revelan depredadores en TikTok e Instagram. Forbes; 2023. Available from: https://www.forbes.com.mx/fotos-de-ninos-generados-por-ia-revelan-depredadores-en-tiktok-e-instagram/

 

18. Marín R. California está impulsando un paquete de leyes para regular la inteligencia artificial y combatir los deepfakes. Infobae; 2024. Available from: https://www.infobae.com/estados-unidos/2024/09/04/california-esta-impulsando-un-paquete-de-leyes-para-regular-la-inteligencia-artificial-y-combatir-los-deepfakes/

 

19. CBS News. Florida man accused of using AI to create child porn. 2024. Available from: https://www.cbsnews.com/miami/news/florida-man-accused-of-using-ai-to-create-child-porn/?intcid=CNM-00-10abd1h

 

20. Clarín. En EE.UU. piden más protección para adolescentes víctimas de deepfakes con desnudos. 2024. Available from: https://www.clarin.com/tecnologia/eeuu-piden-proteccion-adolescentes-victimas-deepfakes-desnudos_0_UmN2bNxU8r.html

 

21. Cloud Booklet. Imágenes realistas falsas: el generador de imágenes de IA. 2023. Available from: https://www.cloudbooklet.com/lifestyle/realistic-fake-nudes-with-ai-nude-generator

 

22. Moyano L. El uso de la IA para la creación de material de abuso sexual infantil. Ciberseguridad Latam; 2024. Available from: https://www.ciberseguridadlatam.com/2024/08/28/el-uso-de-de-la-ia-para-la-creacion-de-material-de-abuso-sexual-infantil/

 

23. Naciones Unidas. Protocolo Facultativo de la Convención sobre los Derechos del Niño, relativo a la venta de niños, la prostitución infantil y la utilización de niños en la pornografía (CRC-OPSC). 2002. Available from: https://www.ohchr.org/es/instruments-mechanisms/instruments/optional-protocol-convention-rights-child-sale-children-child

 

24. Forbes. Pedófilo grabó a niños en Disney World para crear imágenes de abuso con IA. 2024. Available from: https://www.forbes.com.mx/pedofilo-grabo-a-ninos-en-disney-world-para-crear-imagenes-de-abuso-con-ia/

 

25. Infobae. Escándalo en una escuela de Córdoba: un alumno utilizó IA para crear imágenes pornográficas de sus compañeras. 2024. Available from: https://www.infobae.com/sociedad/2024/07/03/escandalo-en-una-escuela-de-cordoba-un-alumno-utilizo-ia-para-crear-imagenes-pornograficas-de-sus-companeras/

 

26. Inhope Organization. What is CSAM? 2021. Available from: https://www.inhope.org/EN/articles/child-sexual-abuse-material

 

27. Sullivan E. El auge del material de abuso sexual infantil generado por inteligencia artificial. The New York Times; 2024. Available from: https://www.nytimes.com/es/2024/02/01/espanol/inteligencia-artificial-abuso-infantil.html

 

28. Tenbarge K. Deepfake porn and the Mr. Deep Fake economy. NBC News; 2023. Available from: https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071

 

29. The White Hatter. Alert for parents, caregivers, educators, and law enforcement concerning apps utilizing AI pose risks to youth and adults. 2023. Available from: https://thewhitehatter.ca/blog/alert-for-parents-caregivers-educators-and-law-enforcement-concerning-apps-utilizing-ai-pose-risks-to-youth-and-adults/

 

30. The White Hatter. Deepnudes: undressing AI-generated intimate image abuse material. 2023. Available from: https://thewhitehatter.ca/deepnudes-undressing-ai-generated-intimate-image-abuse-material/

 

31. La Voz. Generó imágenes pornográficas de sus compañeras con IA: ¿Qué hizo la escuela? 2024. Available from: https://www.lavoz.com.ar/ciudadanos/genero-imagenes-pornograficas-de-sus-companeras-con-ia-que-hizo-la-escuela/

 

32. Serrahima de Bedoya A. Avances y desafíos de la inteligencia artificial. Madrid: Universidad Pontificia Comillas; 2022. Available from: https://repositorio.comillas.edu/xmlui/bitstream/handle/11531/57178/Avances%20y%20desafios%20de%20la%20inteligencia%20artificial%20-%20Serrahima%20de%20Bedoya%2c%20Alvaro.pdf?sequence=2&isAllowed=y

 

33. UNICEF. El abuso sexual infantil en Argentina. 2017. Available from: https://www.unicef.org/argentina/sites/unicef.org.argentina/files/2018-04/proteccion-AbusoSexual_contra_NNyA-2016.pdf

 

FUNDING

The authors received no funding for the conduct of this research.

 

CONFLICT OF INTEREST

The authors declare that there is no conflict of interest.

 

AUTHORSHIP CONTRIBUTION

Conceptualisation: Antonella Petrini, María Belén Gauna.

Research: Antonella Petrini, María Belén Gauna.

Writing - original draft: Antonella Petrini, María Belén Gauna.

Writing - proofreading and editing: Antonella Petrini, María Belén Gauna.