doi: 10.56294/ai2024123
ORIGINAL
Child Sexual Abuse Material provided by Artificial Intelligence
Material de Abuso Sexual Infantil facilitado por Inteligencia Artificial
Antonella Petrini1, María Belén Gauna1
1Universidad Siglo 21. Córdoba, Argentina.
Cite as: Petrini A, Gauna MB. Child Sexual Abuse Material provided by Artificial Intelligence. EthAIca. 2024; 3:123. https://doi.org/10.56294/ai2024123
Submitted: 01-08-2023 Revised: 19-12-2023 Accepted: 10-05-2024 Published: 11-05-2024
Editor: PhD. Rubén González Vallejo
ABSTRACT
The present study aimed to analyze how artificial intelligence can facilitate the production and distribution of child sexual abuse material. The scope of the study was descriptive, adopting a qualitative approach, and the research design was non-experimental and cross-sectional. The selected sample comprised documents, agreements, laws and books related to the subject matter; the sampling method was non-probabilistic, and qualitative data collection instruments were used in the conduct of the study. The results demonstrated the development of Artificial Intelligence (AI) in the criminal sphere, as well as the lack of efficient legislation at both the national and international levels, even though initiatives addressing these matters have been explored in both contexts, along with an analysis of how these crimes operate. In conclusion, the paper highlighted an increase in the prevalence of this crime, emphasizing the complexities of combating it without a clear and robust regulatory framework. This study also underscored the risks posed by social platforms lacking appropriate restrictions. Mutual collaboration among governments, organizations, companies and society is key to addressing these crimes, and the implementation of preventive strategies is fundamental to mitigating risks and safeguarding minors in digital environments. The findings presented in this paper emphasize the need for a regulatory framework that addresses child sexual abuse material both preventively and reactively.
Keywords: Child Sexual Abuse Material; Artificial Intelligence; Child Abuse Prevention; Digital Platforms; Legislation.
RESUMEN
El presente trabajo buscó analizar cómo la inteligencia artificial facilita la creación y difusión de material de abuso sexual infantil. El alcance fue de tipo descriptivo, con un enfoque cualitativo, y su diseño fue no experimental-transversal. La muestra elegida se basó en documentos, acuerdos, leyes y libros relacionados al tema; el muestreo elegido fue no probabilístico y los instrumentos de análisis fueron cualitativos. Los resultados evidenciaron el desarrollo de la inteligencia artificial en la esfera criminal, como también la insuficiencia de legislaciones eficientes tanto en el contexto nacional como internacional, aunque se exploraron iniciativas para ambos escenarios; asimismo, se analizó el funcionamiento de estos delitos. Finalmente, a modo de conclusión, se destacó la presencia de un delito creciente, que sin marcos regulatorios claros y robustos se vuelve más complejo de combatir, y se destacan los riesgos que también presentan las plataformas sin restricciones adecuadas. La colaboración entre gobiernos, organizaciones, empresas y la sociedad es clave para abordar la problemática, y la implementación de estrategias preventivas resulta fundamental para reducir los riesgos y proteger a los menores de edad en el entorno digital. Estos hallazgos resaltan la necesidad de un marco regulador que aborde de manera preventiva y reactiva el contenido de abuso sexual infantil.
Palabras clave: Material de Abuso Sexual Infantil; Inteligencia Artificial; Prevención; Plataformas Digitales; Legislación.
INTRODUCTION
In recent years, Latin American states, primarily local governments, have begun to incorporate the concept of citizen security into their public policy actions. The reasons for this involvement are closely connected and interdependent and can be observed in the realities of our countries:
1. The failure of the penal system (justice, police, and penitentiary system) to satisfy citizen demands in the face of existing levels of criminality. The emphasis on repression as a means of expanding and making social control more efficient has not yielded results against rising crime, opening the way for prevention strategies with a multi-sectoral dimension.(1)
2. A sustained and significant increase in, and transformation of, violence,(2) which has led to security becoming a social demand directed at local governments, because they are in direct and close contact with citizens, from whom they receive and channel these demands.
3. The concept of citizen security (coexistence and interpersonal well-being) has been introduced over that of public security (maintenance of state public order). Public security concerns the classic and traditional institutions of the penal system (police, justice, and penitentiary system) at the national or provincial level, while citizen security involves a plurality of actors, such as municipalities, the media, ombudsman's offices, and civil society itself.
It is important to mention that, in various municipal governments across different regions, due to their legal and institutional structure, it is not within their competencies to implement police, judicial, or penitentiary measures. Local governments are therefore faced with a new and dual dynamic:
· On the one hand, to continue implementing measures that address the issue of citizen security and violence primarily from a preventive perspective, as a process aimed at addressing criminal risk factors that give rise to illicit and/or violent behaviors.
· On the other hand, to guide, from a governance standpoint, this shared delegation of powers, which involves coordinating, articulating, and sharing strategies with different levels of government, without undermining their own responsibilities.
In this context, the capacity of Artificial Intelligence (AI) to generate Child Sexual Abuse Material (CSAM)—misleadingly referred to as “child pornography”—poses a growing challenge. How does AI facilitate the production and dissemination of such material, and what preventive strategies can be effective?
Understanding the following concepts is key: according to Article 9 of the Convention on Cybercrime, published in 2001(3) and to which Argentina acceded in November 2017, “Child pornography shall include pornographic material that visually depicts: a minor engaged in sexually explicit conduct, a person appearing to be a minor engaged in sexually explicit conduct, realistic images representing a minor engaged in sexually explicit conduct.”
For its part, the National Ministry of Justice understands the dissemination of pornographic images and performances of minors as: ‘Producing, financing, offering, trading, publishing, facilitating, disseminating or distributing, by any means, any representation of a minor under 18 years of age engaged in sexual activities or a representation of their genital parts for sexual purposes’(4)
According to the Public Tutelary Ministry of Buenos Aires, the term 'child pornography' is incorrect, and experts are seeking to eliminate its use. The proper way to refer to this crime is 'sexual abuse against children and adolescents'. The incorrect term does not accurately reflect the seriousness of the crime and can lead to misinterpretations, causing stigmatization, distortion, or the invisibilization of the problem. Specialists emphasize that these acts are committed by adults against children and adolescents and constitute serious violations of their human rights. For this reason, it is recommended that 'child pornography' be replaced by 'materials of sexual representation of children and adolescents'.(5)
How does artificial intelligence facilitate the creation and distribution of child sexual abuse material?
Objective: to analyze how artificial intelligence facilitates the creation and dissemination of child sexual abuse material.
METHOD
Design: This paper is descriptive in scope, as it seeks to ‘specify important properties, characteristics and features of any phenomenon being analyzed’ (p. 80). In this case, the phenomenon in question is the production and distribution of child sexual abuse material (CSAM) facilitated by artificial intelligence.
The research approach was qualitative, with non-numerical data collection that has been interpreted dynamically.
The research design is a non-experimental, cross-sectional study, as no variables were deliberately manipulated; only phenomena were observed as they occurred, and data collection was conducted at a single point in time.
Units of analysis: The sample chosen were documents, reports, studies, and agreements collected about the creation and dissemination of child sexual abuse material (CSAM) facilitated by artificial intelligence.
The sampling was non-probabilistic, as the choice of elements was not based on probability but on factors related to the study's characteristics.
Instruments: The methods of analysis used in this research are qualitative. For data collection, written sources were consulted, including scientific documents and articles, as well as information available on the official websites of organizations dedicated to investigating and eradicating crime. It also included the analysis of cases reported in the media.
Data analysis
• The first stage consisted of a bibliographic review for the development and construction of the theoretical framework.
• The second stage involved a documentary data survey, including the collection, selection, and analysis of documentary sources such as laws, reports, and official documents.
• Thirdly, the analysis of the collected data was carried out by organizing the content of various documents. Variables were previously identified, coded, and grouped into categories.
The following categories were established:
1. Child sexual abuse material: Child sexual abuse content, known as Child Sexual Abuse Material (CSAM), refers to any visual depiction that involves children under the age of 18 in sexually explicit activities or that depicts their genitalia for sexual purposes. This material not only documents abuse but also perpetuates the re-victimization of victims by being shared online, representing a serious crime against minors (Optional Protocol to the Convention on the Rights of the Child on the Sale of Children, Child Prostitution and Child Pornography).(6,7,9)
2. Artificial intelligence: The scientific and technological field focused on the development of software and hardware that, by receiving and analyzing data, performs tasks that mimic human cognitive functions such as learning or reasoning.(9,10,11)
RESULTS
Based on the analysis carried out, the results are presented below, aligned with the specific objectives established.
To define artificial intelligence, its development, and its influence on the criminal sphere, with a specific focus on the facilitation of child sexual abuse material
Artificial intelligence (AI) is defined as the discipline that seeks to build machines capable of performing tasks that require human intelligence. Margaret A. Boden explains that artificial intelligence aims to enable computers to perform tasks that can be carried out by the human mind.(10) AI is classified into several categories, such as assisted AI, which helps to perform tasks more efficiently, and autonomous AI, which makes decisions without human intervention. Furthermore, UNESCO has noted that artificial intelligence systems can exhibit biases, as search engines utilizing these technologies are not neutral and can reinforce existing societal stereotypes.(12) This highlights how artificial intelligence has evolved and integrated into multiple domains while raising concerns about its application.
The development of artificial intelligence (AI) has had a significant impact on the criminal sphere, especially in the facilitation of child sexual abuse content. A concrete case illustrating this issue is that of a student in Córdoba, Argentina, who used AI tools to create manipulated images of his female classmates, merging their faces with naked bodies and distributing this content on pornographic platforms.(13,14,15) This incident highlights the lack of adequate legal protection and how technology can be used to violate people's privacy, which has led to an increase in gender-based violence and other sexual crimes.(12) In addition, the rise of new technologies has facilitated the creation of this content, enabling the exploitation of victims and posing significant challenges to justice systems.
Another representative case is the arrest of Justin Culmo, who used an AI model called Stable Diffusion to generate thousands of illegal images of children.(16) This incident highlights how technological advances have made tools for creating CSAM more accessible, leading to an increase in the use of AI for criminal activities.(17) The combination of technologies, such as deepfakes, and the lack of effective regulation have allowed these criminal practices to proliferate and evolve, thereby increasing the vulnerability of victims.(13,14)
Finally, the arrest of Daniel Clark Warren in Florida, who allegedly used AI to generate child sexual abuse content of a neighbor, adds another concern to this issue.(18,19) Additionally, incidents have been reported in schools where educators and students have manipulated images of peers and students to create such content. This context reveals growing concerns about the impact of AI on people's privacy and security, as well as the need to develop more effective policies to protect victims of abuse. Emerging technologies, such as deep learning algorithms, have improved their ability to generate content, thereby increasing the risk of exploitation.(25)
Defining what is meant by sexual abuse in relation to child sexual abuse material, how offenders produce and circulate such material, and how artificial intelligence technologies can facilitate these criminal behaviors
The term “Child Sexual Abuse Material” (CSAM) is preferred over “child pornography” because it more accurately reflects the nature of the content, which is evidence of child sexual abuse.(20,21,22) The use of the term ‘pornography’ may give the misleading impression that there is consent, as is the case with adult pornography, whereas images of children always represent a violation of their rights and are a manifestation of the abuse of which they are victims.(20,22) Therefore, many organizations have chosen to use the term CSAM to emphasize the need to address this problem from a child protection and justice perspective rather than a term that minimizes the severity of abuse.(20,21)
The advancement of artificial intelligence (AI) has presented several concerning challenges in the context of CSAM. The Europol report has warned that AI is increasingly being used to generate child sexual content, complicating the identification of both victims and offenders.(23,24,25) AI tools enable the creation of hyper-realistic images and videos of children who do not exist, amplifying the production of child exploitation material and making it more difficult to detect.(23,24) This technology also enables the manipulation of existing images, generating sexualized content from innocent photos, contributing to the normalization of abuse and psychological harm to victims.(23)
In addition, the increased use of AI to create child sexual abuse material has led to a saturation of content on networks, making it difficult for law enforcement to identify and prosecute offenders. The difficulty in distinguishing between AI-generated images and those that are real creates an environment in which investigation becomes more complex and time-consuming.(23) This highlights the urgency of establishing stricter regulations and ensuring that technology companies take adequate measures to protect children from this crime.
Identify the existence of current laws and regulations that address the use of artificial intelligence in the commission of crimes, with a focus on protection against child sexual crimes
After researching various documentary sources, it was observed that there is no specific law or regulation governing the use of Artificial Intelligence (AI) in Argentina, although several projects were identified that proposed its regulation in different areas.
Regarding the commission of child sex crimes through the use of AI, three bills were identified in Argentina that proposed amendments to Article 128 of the Penal Code (Law 11179). These bills sought to regulate the use of AI in the creation, dissemination, and commercialization of sexually explicit material related to minors. The first bill, submitted to the Senate of the Nation, proposed to sanction both the production and distribution of representations of minors in sexual activities generated by AI or any other technological mechanism. This bill emphasized the need to adapt legislation to deal with technological advances that allow the creation and distribution of hyperrealistic or fictitious representations of minors in sexually explicit situations.(26)
Similarly, two bills presented to the National Chamber of Deputies proposed sanctioning the creation and distribution of sexually explicit images, specifically through the use of deepfakes. These technologies allow the manipulation of images of minors, using their faces in other people's bodies or entirely fictitious representations. Both bills also provided penalties for those who commercialize this type of AI-generated content.(27)
In the international context, some US states, such as Texas, Minnesota, Virginia, New York, Hawaii, and Georgia, have already banned the creation and distribution of non-consensual deepfake pornography. However, it was noted that this legislation has failed to completely stop the rise of AI-generated CSAM images, particularly in school settings. In California, a series of laws has been proposed that seek to regulate the use of AI to combat the creation of sexually explicit deepfakes, including criminal penalties and the possibility for victims to sue those who create and distribute such material.(28,29)
Identify the types of sexual abuse related to child sexual abuse material that are most common in the context of the use of artificial intelligence
Several types of sexual abuse related to child sexual abuse material were identified in the context of the use of artificial intelligence. Among the most common practices, the generation of intimate and pornographic images of minors using AI technologies, primarily through deepfake applications, was highlighted. These tools allow users to manipulate photographs and create visual depictions of minors in sexualized situations, facilitating the creation and distribution of illegal content.(30,31) Researchers have noted that the volume of images produced by creators is significant, with some claiming that they aim to produce at least 1000 images per month.(1,30,31) In addition, it was noted that sex offenders use these technologies to produce material that can be mistaken for legitimate content, complicating detection and removal on digital platforms.(1,30,32) This type of abuse is not only limited to the creation of images but also includes extortion and manipulation of victims, creating an environment of fear and vulnerability.(1,31,32)
Images of abuse are shared through a three-stage process: first, pedophiles generate the images using AI software; second, they promote the photos on platforms such as Pixiv, a Japanese image-sharing site; finally, these accounts include links directing customers to more explicit images on sites like Patreon, where they can pay to access such content. Although Pixiv is primarily used by artists sharing manga and anime, its legal context allows the sharing of sexual cartoons and illustrations of underage people, complicating the response to this phenomenon. Despite Pixiv's efforts to address this issue and its prohibition of realistic depictions of sexual content involving minors, it was evident that some users continue to promote their work in groups and through tags, facilitating the circulation of such material.(1,30,31)
Analyse common patterns and methods in the use of artificial intelligence to facilitate child sexual abuse material
Analysis of common patterns and methods in the use of artificial intelligence to facilitate access to child sexual abuse material showed various strategies used by offenders to evade detection. It has been documented that many of these actors utilize popular social networks, such as TikTok and Instagram, to disseminate AI-generated content, thereby increasing the exposure of young people to potentially risky situations. The use of apps that generate realistic images of people, including images of minors, was also identified as a key tool in the dissemination of sexual abuse material.(1,30,31,32,33)
In this context, many accounts on Pixiv include links in their bios directing users to their 'uncensored content' on Patreon, where they offer to sell photorealistic obscene images of children and adolescents, with different price levels depending on the type of material requested. For example, one user offered access to 'exclusive uncensored art' for US$8.30 per month, while another indicated that he was 'training his girls on his PC' with 'submission' content. The Patreon platform has claimed to have a 'zero tolerance' policy towards content involving minors, although accounts have been confirmed that violate these policies by offering child-exploitative material.(1,31,32,33)
The data also revealed that offenders use automated technologies and bots to facilitate the distribution of this content, further complicating the efforts of law enforcement and child protection organizations to prevent and respond to abuse.(1,30,31) This scenario underscores the urgency of developing prevention strategies that address the relationship between artificial intelligence and child sexual abuse, considering the new methods and patterns emerging in this context. Ian Critchley, Director of Child Protection at the National Police Chiefs' Council, noted that the influx of realistic AI-generated or "synthetic" images could hinder the process of identifying real victims of abuse, creating an additional burden for law enforcement authorities.(1,30)
CONCLUSIONS
The research conducted allowed us to examine one of the most worrying aspects of artificial intelligence (AI) applied to criminality: the creation and dissemination of Child Sexual Abuse Material (CSAM) through advanced technologies. Owing to advances in AI, this problem has given rise to new forms of crime that urgently require criminological analysis and a readjustment of regulations. Among the study's findings is the growing use of AI to facilitate this content online; there is therefore an imminent need to establish prevention strategies and updated laws with an emphasis on protecting the most vulnerable victims: children and adolescents.
The advancement of artificial intelligence has revolutionized society in every respect, from how we interact with technology to how criminals use these tools to their advantage. AI has given criminals ready access to these capabilities, making it easier for this type of crime to occur. The growth of technologies such as machine learning and image-generation algorithms has allowed offenders to evade security measures. The research makes clear that these technologies facilitate the creation of this type of content at an alarming rate, simplifying child exploitation and allowing criminals to go undetected. The lack of knowledge about how these technologies work and their potential for criminal use highlights the need to develop effective regulatory and prevention frameworks.(34,35,36)
As mentioned, the establishment of strict regulations is one of the main courses of action. Technology companies must implement robust strategies to protect children from these crimes. However, the lack of clarity about which regulations are necessary and how they should be enforced can make implementation challenging. It should be emphasized that any regulatory framework needs to be not only strict but also adaptable to the rapidly evolving nature of technology and related criminal tactics, ensuring that it is truly effective.(37,38,39)
One obstacle in the research was the difficulty of finding regulations governing artificial intelligence, which points to the absence of any law or regulation on artificial intelligence in Argentina as one of the main shortcomings, especially in a context where not only is the technology advancing but the problem is also growing. The present situation not only exposes more and more children as potential victims but also leaves a vacuum for the authorities, who lack the tools needed to address the issue. Without clear and specific legislation, offenders continue to operate in an environment that enables their activities, highlighting the need for immediate action.(40,41)
Although the United States took the initiative with measures to prohibit the creation and distribution of deepfake pornography, these measures failed to eradicate the problem, especially in school settings.(28) This highlights the inadequacy of the current measures, which are often not effectively implemented or lack the resources necessary to achieve their goals. The lack of enforcement and current legal loopholes allow offenders to evade justice; it is therefore suggested that enforcement strategies and community awareness campaigns should accompany the creation of new measures.
Thus, the lack of a legal and regulatory framework for artificial intelligence and child protection poses a significant challenge in the fight against AI-facilitated CSAM. Additionally, the lack of cooperation between jurisdictions creates loopholes, allowing criminals to find escape routes through platforms where regulation is either weak or nonexistent. This is why both national and international measures and regulations are needed to unify criteria and promote cooperation in the investigation and prosecution of these crimes. Without consolidated international collaboration, local or national efforts will be insufficient to address this complex problem, which operates in the digital sphere across borders.
Furthermore, the lack of law enforcement training and access to specialized tools is another obstacle. Rapidly evolving technology outpaces the adaptive capacity of law enforcement agencies, creating a gap between the techniques used to generate and distribute CSAM and the authorities' ability to respond. This underscores the need to equip agencies with the necessary resources, both in terms of technology and training, to effectively respond to these new forms of crime and protect potential victims.
The Pixiv case shows the complications arising from the lack of restrictions on sexual content on platforms. Despite policies prohibiting realistic depictions of such content involving minors, users reportedly find ways to circumvent these rules by using private groups and specific tags.(1) This reveals that the measures are not robust enough; it is critical that platforms establish clear rules and invest in technologies to detect such content. Without effective collaboration with the authorities, these platforms serve as a shortcut for the circulation of CSAM.
Similarly, it was noted that these offenders use social networks such as TikTok and Instagram to disseminate AI-generated content, reflecting a lack of safeguards for young people.(30,33) What is dangerous about these platforms is their virality, which, combined with the lack of oversight, creates a fertile ground for the dissemination of such images and videos. The situation calls for an urgent review of content policies and collaboration between technology companies and law enforcement agencies to develop strategies to reduce the risks faced by young people in the digital world.
Despite Patreon's claimed “zero tolerance” policy towards content involving minors, accounts have been confirmed to be violating the measures, calling into question its effectiveness.(1) Platforms must, therefore, take greater responsibility for regulating content and preventing child abuse. The lack of stringent measures allows offenders to operate in the shadows, thus requiring effective mechanisms to detect and remove content. Creating a safe digital environment requires clear policies, effective enforcement, and a culture of accountability among Internet service providers.
Because these crimes are transnational, countries must collaborate to address the issue, as significant disparities in laws and regulations hinder an effective and coordinated response to this growing problem. International agreements that align standards and facilitate the exchange of information will enhance the ability to combat these crimes. Cooperation among states, organizations, and businesses is crucial to developing a comprehensive approach to protection against AI-facilitated CSAM.
Developing preventive strategies for this issue is a necessary measure, but they must be designed comprehensively and effectively rather than reactively. This requires a thorough analysis of the methods and patterns that emerge, ensuring efficient and adaptive responses. This implies multidisciplinary collaboration that includes experts in technology, psychology, criminology, and cybersecurity, as well as the involvement of communities.
As Ian Critchley of the NPCC mentioned, the large number of realistic or synthetic images generated by AI makes it difficult to distinguish between real and fake victims.(1) This statement highlights that current identification and response strategies are insufficient for addressing this crime. This critical issue may complicate the rescue and protection of victims, necessitating a reevaluation of investigative methodologies and the integration of new technologies and tactics to analyze and detect this material. Authorities must adapt to this reality and strengthen their capacities to address this phenomenon with the necessary speed and effectiveness.
It is also critical to address the psychological consequences that victims of child exploitation can suffer as a result of these crimes. The availability and distribution of CSAM can have adverse effects on children, and recognizing this is crucial to developing intervention and prevention strategies. Victims require comprehensive support, not only from a legal perspective but also from a psychological and emotional point of view.
For these crimes, criminology plays a crucial role in their analysis, providing a theoretical and practical framework for understanding their dynamics. As technology evolves, criminologists must adapt their approaches to investigate the new methodologies used and the implications they have for victim identification and protection. Linking artificial intelligence and criminology is fundamental to developing prevention strategies and responses to address child exploitation. This will not only identify criminal patterns and trends but also encourage the creation of public policies and regulations that protect minors and promote a safe digital environment.
Finally, it should be noted that the fight against the creation and dissemination of AI-generated CSAM is not solely the responsibility of the state or the platforms. The entire society, including parents and educators, must be committed to preventing this crime and creating a protected digital space. This requires education about the risks of AI and technology use, together with the promotion of a culture of responsibility in the virtual world. The collaboration of all social actors is essential to building a trustworthy environment for children and adolescents, where artificial intelligence can serve as a tool for benefit and progress rather than as a risk.
This research contributes to the fields of cybersecurity and criminology in Argentina, addressing a problem that remains inadequately regulated worldwide, including in the local context. We find ourselves in a context in which the cyber world is advancing rapidly, while legislation and enforcement agencies have not kept pace with this evolution. This study highlights the urgency of regulatory frameworks that restrict and prevent the generation and dissemination of AI-facilitated child sexual abuse content. Strengthening the understanding of the dynamics of these crimes provides a basis for future studies and regulations in criminology and cybersecurity. The study adds value by exploring the ethical, technological, and criminal implications of artificial intelligence in the facilitation of child sexual abuse material, and it provides an academic resource for policymakers, law enforcement officials, and criminology practitioners seeking to understand and address the emerging risks associated with this issue.
Based on the findings of this study, it is recommended that further research be conducted on the psychological and emotional impacts on victims of CSAM facilitated by AI. Furthermore, researching the effectiveness of tools to detect and block illicit content in real time on digital platforms may offer proactive solutions to mitigate this problem. In parallel, it is crucial to establish prevention strategies promptly so that this problem can be addressed more effectively.
BIBLIOGRAPHICAL REFERENCES
1. BBC Mundo. Deepfake porn: el peligro de las imágenes sintéticas. 2023. Available from: https://www.bbc.com/mundo/articles/c1vzyevl0nro
2. Biblioteca del Congreso de la Nación. Dossier Legislativo Nacional: Inteligencia Artificial. Buenos Aires: Biblioteca del Congreso de la Nación; 2024. Available from: https://bcn.gob.ar/uploads/adjuntos/Dossier-279-legis-nacional-inteligencia-artificial-feb-2024.pdf
3. Consejo de Europa. Convenio sobre la ciberdelincuencia. 2001. Art. 9. Available from: https://www.oas.org/juridico/english/cyb_pry_convenio.pdf
4. Ministerio de Justicia de la Nación. Ley simple: Delitos contra la integridad sexual. s.f. Available from: https://www.argentina.gob.ar/justicia/derechofacil/leysimple/delitos-contra-la-integridad-sexual#titulo-7
5. Ministerio Público Tutelar de la Provincia de Buenos Aires. “Pornografía infantil”, no: se llama “abuso sexual contra niñas y niños y adolescentes”. 2021. Available from: https://mptutelar.gob.ar/pornograf-infantil-no-se-llama-abuso-sexual-contra-ni-y-ni-os-y-adolescentes
6. Naciones Unidas. Protocolo Facultativo de la Convención sobre los Derechos del Niño. 2002. Available from: https://www.ohchr.org/es/instruments-mechanisms/instruments/optional-protocol-convention-rights-child-sale-children-child
7. Inhope Organization. What is CSAM?. 2021. Available from: https://www.inhope.org/EN/articles/child-sexual-abuse-material
8. National Center for Missing & Exploited Children. Child Sexual Abuse Material (CSAM). s.f. Available from: https://missingkids.org
9. Real Academia Española. Diccionario de la lengua española. s.f. Available from: https://dle.rae.es/inteligencia#2DxmhCT
10. Boden M. Inteligencia artificial. Madrid: Turner Publicaciones S.L.; 2016. p. 3.
11. Comisión Europea. AI Watch: Defining Artificial Intelligence. 2020. p. 12.
12. UNESCO. Recomendación sobre la ética de la inteligencia artificial. 2021. Available from: https://unesdoc.unesco.org/ark:/48223/pf0000381137_spa
13. Infobae. Detuvieron a un hombre del estado de Florida por usar inteligencia artificial para crear pornografía infantil. 2024. Available from: https://www.infobae.com/estados-unidos/2024/04/05/detuvieron-a-un-hombre-del-estado-de-florida-por-usar-inteligencia-artificial-para-crear-pornografia-infantil/
14. Infobae. Escándalo en una escuela de Córdoba: un alumno utilizó IA para crear imágenes pornográficas de sus compañeras. 2024. Available from: https://www.infobae.com/sociedad/2024/07/03/escandalo-en-una-escuela-de-cordoba-un-alumno-utilizo-ia-para-crear-imagenes-pornograficas-de-sus-companeras/
15. La Voz. Generó imágenes pornográficas de sus compañeras con IA: ¿Qué hizo la escuela?. 2024. Available from: https://www.lavoz.com.ar/ciudadanos/genero-imagenes-pornograficas-de-sus-companeras-con-ia-que-hizo-la-escuela/
16. Forbes. Pedófilo grabó a niños en Disney World para crear imágenes de abuso con IA. 2024. Available from: https://www.forbes.com.mx/pedofilo-grabo-a-ninos-en-disney-world-para-crear-imagenes-de-abuso-con-ia/
17. UNESCO. Ética de la inteligencia artificial. s.f. Available from: https://www.unesco.org/es/artificial-intelligence/recommendation-ethics?hub=32618
18. CBS News. Florida man accused of using AI to create child porn. 2024. Available from: https://www.cbsnews.com/miami/news/florida-man-accused-of-using-ai-to-create-child-porn/?intcid=CNM-00-10abd1h
19. WFLA News Channel. Pasco teacher arrested: Is AI-generated child porn illegal?. 2024. Available from: https://www.wfla.com/news/pasco-county/pasco-teacher-arrested-is-ai-generated-child-porn-illegal/
20. Cordeiro VC. Abuso infantil. Humanium; 2021. Available from: https://www.humanium.org/es/enfoques-tematicos/violencia/abuso-infantil/
21. RAINN. Abuso sexual infantil. s.f. Available from: https://rainn.org/articles/abuso-sexual-infantil
22. UNICEF. El abuso sexual infantil en Argentina. 2017. Available from: https://www.unicef.org/argentina/sites/unicef.org.argentina/files/2018-04/proteccion-AbusoSexual_contra_NNyA-2016.pdf
23. Sullivan E. El auge del material de abuso sexual infantil generado por inteligencia artificial. The New York Times; 2024. Available from: https://www.nytimes.com/es/2024/02/01/espanol/inteligencia-artificial-abuso-infantil.html
24. Moyano L. El uso de la IA para la creación de material de abuso sexual infantil. Ciberseguridad Latam; 2024. Available from: https://www.ciberseguridadlatam.com/2024/08/28/el-uso-de-de-la-ia-para-la-creacion-de-material-de-abuso-sexual-infantil/
25. Internet Watch Foundation. How AI is being abused to create child sexual abuse material (CSAM) online. s.f. Available from: https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
26. Romero A. Proyecto de Ley sobre el uso de IA en delitos sexuales infantiles. 2023.
27. Milman G. Proyecto de Ley Modificación del Artículo 128 del Código Penal Argentino. HCDN; 2023. Available from: https://www4.hcdn.gob.ar/dependencias/dsecretaria/Periodo2023/PDF2023/TP2023/4411-D-2023.pdf
28. Le Masurier J. Adolescentes en Estados Unidos afectadas por los 'deepfakes' pornográficos - En Foco. France 24; 2024. Available from: https://www.france24.com/es/programas/en-foco/20240509-adolescentes-en-estados-unidos-afectadas-por-los-deepfakes-pornográficos
29. Clarín. En EE.UU. piden más protección para adolescentes víctimas de deepfakes con desnudos. 2024. Available from: https://www.clarin.com/tecnologia/eeuu-piden-proteccion-adolescentes-victimas-deepfakes-desnudos_0_UmN2bNxU8r.html
30. The White Hatter. Alert for parents, caregivers, educators, and law enforcement concerning apps utilizing AI pose risks to youth and adults. 2023. Available from: https://thewhitehatter.ca/blog/alert-for-parents-caregivers-educators-and-law-enforcement-concerning-apps-utilizando-ai-pose-risks-to-youth-and-adults/
31. Levine AS. Fotos de niños generados por IA revelan depredadores en TikTok e Instagram. Forbes; 2023. Available from: https://www.forbes.com.mx/fotos-de-ninos-generados-por-ia-revelan-depredadores-en-tiktok-e-instagram/
32. Cloud Booklet. Imágenes realistas falsas: el generador de imágenes de IA. 2023. Available from: https://www.cloudbooklet.com/lifestyle/realistic-fake-nudes-with-ai-nude-generator
33. Tenbarge K. Deepfake porn and the Mr. Deep Fake economy. NBC News; 2023. Available from: https://www.nbcnews.com/tech/internet/deepfake-porn-ai-mr-deep-fake-economy-google-visa-mastercard-download-rcna75071
34. Centro de Ciberseguridad de Buenos Aires. El lado oscuro de la IA (parte II). 2024. Available from: https://buenosaires.gob.ar/noticias/el-lado-oscuro-de-la-ia-parte-ii
35. Cordeiro VC. Combatir el aumento de material de abuso sexual infantil generado por la IA. Humanium; 2024. Available from: https://www.humanium.org/es/combatir-el-aumento-de-material-de-abuso-sexual-infantil-generado-por-la-ia/
36. Gaimari G. Inteligencia Artificial e Impacto en el cibercrimen. 2021. p. 39.
37. Lospennato S. Proyecto de Ley Modifica art. 128 CPN. HCDN; 2023. Available from: https://www4.hcdn.gob.ar/dependencias/dsecretaria/Periodo2023/PDF2023/TP2023/4436-D-2023.pdf
38. Marín R. California está impulsando un paquete de leyes para regular la inteligencia artificial y combatir los deepfakes. Infobae; 2024. Available from: https://www.infobae.com/estados-unidos/2024/09/04/california-esta-impulsando-un-paquete-de-leyes-para-regular-la-inteligencia-artificial-y-combatir-los-deepfakes/
39. Serrahima de Bedoya A. Avances y desafíos de la inteligencia artificial. Madrid: Universidad Pontificia Comillas; 2022. Available from: https://repositorio.comillas.edu/xmlui/bitstream/handle/11531/57178/Avances%20y%20desafios%20de%20la%20inteligencia%20artificial%20-%20Serrahima%20de%20Bedoya%2c%20Alvaro.pdf?sequence=2&isAllowed=y
40. The White Hatter. Deepnudes: undressing AI-generated intimate image abuse material. 2023. Available from: https://thewhitehatter.ca/deepnudes-undressing-ai-generated-intimate-image-abuse-material/
41. Vozmediano SL, San Juan Guillén C. Criminología ambiental, Ecología del delito y de la Seguridad. España: UOC; 2010.
FUNDING
The authors received no funding for this research.
CONFLICT OF INTEREST
The authors declare that there is no conflict of interest.
AUTHORSHIP CONTRIBUTION
Conceptualisation: Antonella Petrini, María Belén Gauna.
Research: Antonella Petrini, María Belén Gauna.
Writing - original draft: Antonella Petrini, María Belén Gauna.
Writing - proofreading and editing: Antonella Petrini, María Belén Gauna.