doi: 10.56294/ai2024148
Review
Artificial Intelligence and its Impact on the Production of Illicit Materials
La Inteligencia Artificial y su Impacto en la Producción de Material Ilícito
Antonella Petrini1, María Belén Gauna1
1Universidad Siglo 21, Córdoba, Argentina.
Cite as: Petrini A, Belén Gauna M. Artificial Intelligence and its Impact on the Production of Illicit Materials. EthAIca. 2024; 3:148. https://doi.org/10.56294/ai2024148
Submitted: 03-09-2023 Revised: 18-01-2024 Accepted: 08-06-2024 Published: 09-06-2024
Editor: PhD. Rubén González Vallejo
ABSTRACT
Artificial intelligence has profoundly transformed different areas of human development, but it has also been used for criminal purposes, especially in the generation of child sexual abuse material (CSAM). This alarming phenomenon has revealed AI’s potential to create hyperrealistic content that is indistinguishable from real images, making it difficult to detect even for specialised analysts. The use of generative AI has enabled the production of sexual images and videos of minors, both synthetic and modified from real photos, posing a serious and growing threat. Organisations such as the Internet Watch Foundation and the National Center for Missing & Exploited Children have documented a significant increase in the number of reports, highlighting the need for more effective legal frameworks and public policies. At the local level, institutions such as the Buenos Aires Cybersecurity Center have warned about practices such as ‘deepnudes’, underscoring the urgent need to act in national contexts as well. From a criminological perspective, AI is understood as a tool that facilitates criminal opportunities by reducing barriers for offenders and allowing crimes to be committed without direct contact with victims. In the absence of regulation and digital education, a comprehensive response is needed, including legislation, prevention, international cooperation and citizen training. Protecting children and adolescents from these risks requires a joint effort between governments, organisations and civil society to ensure the ethical and safe use of technology.
Keywords: Artificial Intelligence; Child Sexual Abuse; Deepfake; Criminology; Prevention.
RESUMEN
La inteligencia artificial transformó profundamente distintos ámbitos del desarrollo humano, pero también fue empleada con fines delictivos, especialmente en la generación de material de abuso sexual infantil (MASI). Este fenómeno alarmante reveló el potencial de la IA para crear contenido hiperrealista, indistinguible de imágenes reales, lo que dificultó su detección incluso por parte de analistas especializados. El uso de IA generativa permitió producir imágenes y videos sexuales de menores, tanto sintéticos como modificados a partir de fotos reales, constituyendo una amenaza grave y creciente. Organizaciones como la Internet Watch Foundation y el National Center for Missing & Exploited Children documentaron un aumento significativo en la cantidad de denuncias, destacando la necesidad de marcos legales y políticas públicas más eficaces. A nivel local, instituciones como el Centro de Ciberseguridad de Buenos Aires alertaron sobre prácticas como los “deepnudes”, evidenciando la urgencia de actuar también en contextos nacionales. Desde la criminología, la IA fue entendida como una herramienta que facilitó oportunidades delictivas al reducir barreras para los infractores y permitir la comisión de delitos sin contacto directo con las víctimas. Ante la ausencia de regulación y educación digital, se planteó la necesidad de una respuesta integral que incluyera legislación, prevención, cooperación internacional y formación ciudadana. Proteger a niños, niñas y adolescentes frente a estos riesgos implicó un esfuerzo conjunto entre gobiernos, organizaciones y sociedad civil para garantizar el uso ético y seguro de la tecnología.
Palabras clave: Inteligencia Artificial; Abuso Sexual Infantil; Deepfake; Criminología; Prevención.
INTRODUCTION
Artificial intelligence has transformed everything from industrial automation to the personalization of digital content. AI can be defined as “the design of processes that, when executed on a physical architecture, produce results that maximize a certain measure of performance. These processes are based on sequences of inputs that are perceived and stored by the aforementioned architecture (in the hands of an algorithm)”.(1)
“Colloquially, the term artificial intelligence is applied when a machine mimics cognitive functions that humans associate with other human minds, such as perceiving, reasoning, learning, and problem solving”.(1) This ability to mimic cognitive processes and automate complex tasks has not only revolutionized sectors such as industry and communication but has also opened up new opportunities for the misuse of technology in criminal activities, especially in the digital realm. These advances in AI also raise ethical issues, as pointed out by Gabriela Ramos, Assistant Director-General for Social and Human Sciences at UNESCO: “These rapid changes also raise profound ethical dilemmas, arising from the potential of AI-based systems to reproduce biases, contribute to climate degradation, and threaten human rights, among others.”
In addition, artificial intelligence has created a unique opportunity for criminals to produce a virtually unlimited amount of child sexual abuse content. All variants of this type of content created using AI are illegal, whether or not they depict a real child.(2)
As the Internet Watch Foundation notes, “the majority of AI-generated child sexual abuse images identified by IWF analysts are now realistic enough to be treated as real images under UK law”.(3) The organization indicates that even specialized and experienced analysts find it difficult to distinguish AI-generated CSAM from real material because it appears so realistic.(3) This is why it is important to analyze this problem, which is new and difficult to detect, and why government prevention and legislation, together with citizen awareness, are needed. The phenomenon has shown alarming growth, leading various organizations to produce detailed reports documenting a growing trend in which artificial intelligence is exploited to produce child sexual abuse material. The initial report, from October 2023, revealed the presence of more than 20 000 AI-generated images on a dark web forum in a single month, of which more than 3 000 depicted criminal child sexual abuse activity.(3) Since then, the problem has intensified considerably: according to the updated report from July 2024, there has been an increase in the number of images generated, including more severe images as well as videos, and this material is no longer published only on the dark web but also on the surface web.(3)
In 2023, the National Center for Missing & Exploited Children’s CyberTipline received 4 700 reports linked to child sexual abuse material or sexual exploitation content created using generative artificial intelligence (GAI) technology. This type of material, known as GAI CSAM, shows digitally created children in sexually explicit situations and can be produced on demand by users of certain generative AI platforms. The technology also allows for the creation of sexually explicit deepfake images and videos, starting from an innocent photo of a real child to generate a computerized version with sexual content.(4) This significant rise in reports, coupled with the evolution of CSAM creation techniques, underscores the need for a comprehensive approach to the problem.
In the local context, there are increasing reports and awareness of these illegal actions. The Cybersecurity Center of the Province of Buenos Aires published a note on its website warning that authentic images can be modified to create pornography, known as “deepnudes,” which constitute a severe form of cyberbullying and violation of people’s sexual integrity. Artificial intelligence can transform our lives, but it also presents challenges that must be addressed proactively. It is essential to have preventive measures and tools in place to reduce risks and ensure the responsible use of these technologies.(5)
AI-generated CSAM (child sexual abuse material) can be either fully synthetic or derived from real images.(2) According to Cordeiro(2): “This content poses a double threat: on its own, it enables the generation of new and harmful CSAM, and at the same time, AI tools allow criminals to access other CSAM and disseminate it at an unprecedented rate.” Illegal content may include manipulated images of real minors or entirely artificial creations. AI’s ability to generate content that did not previously exist, its “generative” capability, increases the risk for potential victims by allowing such images to be created where none existed before.(2,6)
Protecting children and adolescents from the impact of child sexual abuse material produced with generative artificial intelligence involves education and support from adults, who have the opportunity to guide young people to use this tool safely and to understand the risks of misusing it to create this type of content.(4,7,8) This educational approach highlights the need for prevention strategies that promote the responsible use of generative artificial intelligence (GAI) and raise awareness of the risks and consequences associated with the creation of child sexual abuse material, in order to prevent it and protect minors.
These illegal activities can be directly linked to criminology, primarily through routine activity theory. As Felson explains, this approach offers explanations at the macro and micro levels of how crime rates are shaped.(9) At the micro level, crime usually occurs when a potential offender happens to be in the same place as a suitable target, in the absence of a capable guardian to prevent the crime.
At the macro level, by contrast, social organization and daily routines make the convergence of offenders and targets much more likely in specific places and at specific times. In this context, offenders may find in artificial intelligence a tool that facilitates criminal opportunities: rapid access to information and the lack of any need for direct contact with victims simplify the commission of these crimes. Finally, the absence of a “capable guardian” is manifested in the lack of regulations, enforcement, and knowledge necessary to prevent these acts effectively.
DEVELOPMENT
Artificial intelligence (AI) has revolutionized various fields of human knowledge and technological development, enabling significant advances in sectors such as industry, medicine, education, and communication.(10) However, the growing power of this technology has also been used for illicit purposes, including the production of child sexual abuse material (CSAM).(12,13,14,15,16,17,18,19) This increasingly prevalent problem poses ethical, legal, and social challenges, especially when considering the ease with which AI can generate hyperrealistic content that violates the integrity of children and adolescents.(20,21,22) In this context, it is essential to understand the functioning, scope, and risks associated with the use of artificial intelligence, as well as to analyze the phenomenon from a criminological perspective that allows for the design of effective prevention, regulation, and protection strategies.
Artificial Intelligence: Concept and Scope
AI can be defined as the design of processes that, when executed on a physical architecture, generate results that optimize a measure of performance, using inputs perceived and stored by algorithms.(1,11) More generally, it refers to the ability of machines to mimic human cognitive functions such as perceiving, reasoning, learning, and problem-solving. This ability has transformed the way we interact with technology and has opened up both productive opportunities and risks associated with its misuse, especially in the digital environment.(10)
Ethics and Challenges of Artificial Intelligence
The rapid development of AI also raises fundamental ethical questions. As Gabriela Ramos, Assistant Director-General for Social and Human Sciences at UNESCO, warns, AI systems can reproduce biases, exacerbate social and ecological problems, and endanger human rights. These dilemmas become particularly urgent when considering the use of AI to commit crimes such as child sexual abuse, which poses a serious threat to human dignity and the safety of the most vulnerable.(7,23,24)
AI and the Generation of Child Sexual Abuse Content
The use of generative artificial intelligence has enabled criminals to produce sexually explicit images and videos depicting minors, either through synthetic creations or by manipulating authentic images. According to Cordeiro(2), this content is illegal in all its forms and represents a new digital threat. The Internet Watch Foundation(3) has identified thousands of AI-generated images on the dark web and has warned of their realism, to the point that even specialized analysts cannot easily distinguish them from real images.(9,25)
In 2023, the National Center for Missing & Exploited Children received more than 4 700 reports related to AI-created CSAM. These include images generated on demand and deepfake content based on real photos of children,(4) demonstrating the urgent need for public policies and concrete actions to curb this growing trend.
The Situation in the Local Context
In Argentina, the issue has also gained visibility. The Buenos Aires Cybersecurity Center(5) has warned about “deepnudes,” where authentic images are altered using AI to generate pornography, which constitutes a severe form of cyberbullying and sexual abuse. The local response requires both specific legislation and awareness campaigns that promote the safe and ethical use of these technologies.(26,27,28,29)
Criminology and Criminal Opportunity with AI
According to the routine activity theory proposed by Felson, crime occurs when a potential offender encounters a suitable target in the absence of a capable guardian. In this case, artificial intelligence acts as a facilitator of crime, allowing illegal acts to be committed without physical contact and with high efficiency. The lack of effective regulations and widespread ignorance about how these tools work represent the absence of the guardian needed to prevent these crimes.(9,30,31)
Education, Prevention, and Regulation
The protection of children and adolescents from AI-generated CSAM requires a comprehensive approach. Education and prevention must go hand in hand with updated legal frameworks. As the National Center for Missing & Exploited Children(4) argues, adults must guide young people about the risks of AI misuse, promoting a responsible and ethical approach that prevents harm before it occurs.(32,33)
The advancement of artificial intelligence has brought extraordinary benefits, but also significant challenges. The use of this technology to create child sexual abuse material represents an urgent, complex, and expanding threat. From an ethical, legal, and criminological perspective, governments, international organizations, and civil society must work together to establish regulatory frameworks, strengthen prevention systems, and promote digital education. Only through a coordinated and proactive response will it be possible to protect children from this new type of digital violence effectively.
CONCLUSIONS
Artificial intelligence, as a powerful technological tool, has profoundly transformed various aspects of everyday life, driving progress in key areas of knowledge and production. However, its misuse in the generation of child sexual abuse content is one of its most alarming and dangerous expressions. The ability of AI to create hyperrealistic material, often indistinguishable from real images, exposes a serious vulnerability in the protection of the rights of children and adolescents.
The magnitude of the problem, evidenced by international reports and growing complaints, shows that we are facing a complex phenomenon that requires a coordinated response. It is imperative to implement updated legal frameworks, promote international cooperation, strengthen cyber surveillance systems, and, especially, promote preventive digital education as a key tool for reducing risks.
Furthermore, from a criminological perspective, AI acts as a facilitator of crime by reducing barriers to its execution and increasing criminal opportunities in environments without adequate regulation. In this sense, the absence of a “capable guardian” translates into a lack of specific laws, technological control resources, and informed citizenship.
In conclusion, in the face of this new scenario, it is essential to adopt a comprehensive approach that combines ethics, regulation, prevention, and education to ensure the safe and responsible use of artificial intelligence, thereby protecting the integrity of children from these new forms of digital violence.
REFERENCES
1. Gaimari G. Inteligencia Artificial e Impacto en el cibercrimen. 2021. p. 39.
2. Cordeiro VC. Combatir el aumento de material de abuso sexual infantil generado por la IA. Humanium; 2024. Disponible en: https://www.humanium.org/es/combatir-el-aumento-de-material-de-abuso-sexual-infantil-generado-por-la-ia/
3. Internet Watch Foundation. How AI is being abused to create child sexual abuse material (CSAM) online. s.f. Disponible en: https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
4. National Center for Missing & Exploited Children. Child Sexual Abuse Material (CSAM). s.f. Disponible en: https://www.missingkids.org/
5. Centro de Ciberseguridad de Buenos Aires. El lado oscuro de la IA (parte II). 2024. Disponible en: https://buenosaires.gob.ar/noticias/el-lado-oscuro-de-la-ia-parte-ii
6. Inhope Organization. What is CSAM? 2021. Disponible en: https://www.inhope.org/EN/articles/child-sexual-abuse-material
7. Naciones Unidas. Protocolo Facultativo de la Convención sobre los Derechos del Niño, por lo que se refiere a la Venta de Niños, la Prostitución Infantil y la utilización de niños en la Pornografía (CRC-OPSC). 2002. Disponible en: https://www.ohchr.org/es/instruments-mechanisms/instruments/optional-protocol-convention-rights-child-sale-children-child
8. The White Hatter. Alert for parents, caregivers, educators, and law enforcement concerning apps utilizing AI pose risks to youth and adults. 2023. Disponible en: https://thewhitehatter.ca/blog/alert-for-parents-caregivers-educators-and-law-enforcement-concerning-apps-utilizando-ai-pose-risks-to-youth-and-adults/
9. The White Hatter. Deepnudes: undressing AI-generated intimate image abuse material. 2023. Disponible en: https://thewhitehatter.ca/deepnudes-undressing-ai-generated-intimate-image-abuse-material/
10. Serrahima de Bedoya A. Avances y desafíos de la inteligencia artificial. Repositorio Comillas; 2022. Disponible en: https://repositorio.comillas.edu/xmlui/bitstream/handle/11531/57178/Avances%20y%20desafios%20de%20la%20inteligencia%20artificial%20-%20Serrahima%20de%20Bedoya%2c%20Alvaro.pdf?sequence=2&isAllowed=y
11. Real Academia Española, Asociación de Academias de la Lengua Española. Diccionario de la lengua española. s.f. Disponible en: https://dle.rae.es/inteligencia#2DxmhCT
12. CBS News. Florida man accused of using AI to create child porn. 2024. Disponible en: https://www.cbsnews.com/miami/news/florida-man-accused-of-using-ai-to-create-child-porn/?intcid=CNM-00-10abd1h
13. Clarín. En EE.UU. piden más protección para adolescentes víctimas de deepfakes con desnudos. 2024. Disponible en: https://www.clarin.com/tecnologia/eeuu-piden-proteccion-adolescentes-victimas-deepfakes-desnudos_0_UmN2bNxU8r.html
14. Forbes. Pedófilo grabó a niños en Disney World para crear imágenes de abuso con IA. 2024. Disponible en: https://www.forbes.com.mx/pedofilo-grabo-a-ninos-en-disney-world-para-crear-imagenes-de-abuso-con-ia/
15. Infobae. Detuvieron a un hombre del estado de Florida por usar inteligencia artificial para crear pornografía infantil. 2024. Disponible en: https://www.infobae.com/estados-unidos/2024/04/05/detuvieron-a-un-hombre-del-estado-de-florida-por-usar-inteligencia-artificial-para-crear-pornografia-infantil/
16. Infobae. Escándalo en una escuela de Córdoba: un alumno utilizó IA para crear imágenes pornográficas de sus compañeras. 2024. Disponible en: https://www.infobae.com/sociedad/2024/07/03/escandalo-en-una-escuela-de-cordoba-un-alumno-utilizo-ia-para-crear-imagenes-pornograficas-de-sus-companeras/
17. Le Masurier J. Adolescentes en Estados Unidos afectadas por los ‘deepfakes’ pornográficos - En Foco. France 24; 2024. Disponible en: https://www.france24.com/es/programas/en-foco/20240509-adolescentes-en-estados-unidos-afectadas-por-los-deepfakes-pornogr%C3%A1ficos
18. Levine AS. Fotos de niños generados por IA revelan depredadores en TikTok e Instagram. Forbes; 2023. Disponible en: https://www.forbes.com.mx/fotos-de-ninos-generados-por-ia-revelan-depredadores-en-tiktok-e-instagram/
19. La Voz. Generó imágenes pornográficas de sus compañeras con IA: ¿Qué hizo la escuela?. 2024. Disponible en: https://www.lavoz.com.ar/ciudadanos/genero-imagenes-pornograficas-de-sus-companeras-con-ia-que-hizo-la-escuela/
20. Sullivan E. El auge del material de abuso sexual infantil generado por inteligencia artificial. The New York Times; 2024. Disponible en: https://www.nytimes.com/es/2024/02/01/espanol/inteligencia-artificial-abuso-infantil.html
21. Cloud Booklet. Imágenes realistas falsas: el generador de imágenes de IA. 2023. Disponible en: https://www.cloudbooklet.com/lifestyle/realistic-fake-nudes-with-ai-nude-generator
22. Moyano L. El uso de la IA para la creación de material de abuso sexual infantil. Ciberseguridad Latam; 2024. Disponible en: https://www.ciberseguridadlatam.com/2024/08/28/el-uso-de-de-la-ia-para-la-creacion-de-material-de-abuso-sexual-infantil/
23. Comisión Europea. AI Watch Defining Artificial Intelligence. 2020. p. 12.
24. Convenio sobre la ciberdelincuencia. Budapest: Consejo de Europa; 2001. Art. 9. Disponible en: https://www.oas.org/juridico/english/cyb_pry_convenio.pdf
25. Tenbarge K. Deepfake porn and the Mr. Deep Fake economy. NBC News; 2023. Disponible en: https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071
26. Lospennato S. Proyecto de Ley Modifica art. 128 CPN. HCDN; 2023. Disponible en: https://www4.hcdn.gob.ar/dependencias/dsecretaria/Periodo2023/PDF2023/TP2023/4436-D-2023.pdf
27. Milman G. Proyecto de Ley Modificación del Artículo 128 del Código Penal Argentino. HCDN; 2023. Disponible en: https://www4.hcdn.gob.ar/dependencias/dsecretaria/Periodo2023/PDF2023/TP2023/4411-D-2023.pdf
28. Ministerio Público Tutelar de la Provincia de Buenos Aires. “Pornografía infantil”, no: se llama “abuso sexual contra niñas y niños y adolescentes”. 2021. Disponible en: https://mptutelar.gob.ar/pornograf-infantil-no-se-llama-abuso-sexual-contra-ni-y-ni-os-y-adolescentes
29. Ministerio de Justicia de la Nación. Ley simple: Delitos contra la integridad sexual. s.f. Disponible en: https://www.argentina.gob.ar/justicia/derechofacil/leysimple/delitos-contra-la-integridad-sexual#titulo-7
30. RAINN. Abuso sexual infantil. s.f. Disponible en: https://rainn.org/articles/abuso-sexual-infantil
31. Cordeiro VC. Abuso infantil. Humanium; 2021. Disponible en: https://www.humanium.org/es/enfoques-tematicos/violencia/abuso-infantil/
32. Marín R. California está impulsando un paquete de leyes para regular la inteligencia artificial y combatir los deepfakes. Infobae; 2024. Disponible en: https://www.infobae.com/estados-unidos/2024/09/04/california-esta-impulsando-un-paquete-de-leyes-para-regular-la-inteligencia-artificial-y-combatir-los-deepfakes/
33. Romero A. Proyecto de Ley sobre el uso de IA en delitos sexuales infantiles. 2023.
FUNDING
The authors did not receive funding for the development of this research.
CONFLICT OF INTEREST
The authors declare that there is no conflict of interest.
AUTHOR CONTRIBUTION
Conceptualization: Antonella Petrini, María Belén Gauna.
Data curation: Antonella Petrini, María Belén Gauna.
Formal analysis: Antonella Petrini, María Belén Gauna.
Research: Antonella Petrini, María Belén Gauna.
Methodology: Antonella Petrini, María Belén Gauna.
Project management: Antonella Petrini, María Belén Gauna.
Resources: Antonella Petrini, María Belén Gauna.
Software: Antonella Petrini, María Belén Gauna.
Supervision: Antonella Petrini, María Belén Gauna.
Validation: Antonella Petrini, María Belén Gauna.
Visualization: Antonella Petrini, María Belén Gauna.
Writing – original draft: Antonella Petrini, María Belén Gauna.
Writing – review and editing: Antonella Petrini, María Belén Gauna.