Algorithmic biases in mental health diagnoses and their impact on vulnerable populations: a documentary review of advances and challenges
DOI: https://doi.org/10.56294/ai202220

Keywords: algorithmic biases, mental health, vulnerable populations, artificial intelligence, health equity

Abstract
Algorithmic biases in mental health diagnostic systems represent a critical challenge, particularly for vulnerable populations, as they perpetuate inequities in access to and quality of care. This article aims to analyze advances and challenges in identifying and mitigating these biases through a documentary review of Spanish and English articles indexed in Scopus between 2018 and 2022. The methodology involved a systematic analysis of 50 selected studies, classified into four thematic areas: types of algorithmic biases, clinical impact on vulnerable populations, technical limitations in algorithm development, and proposed mitigation strategies. The results demonstrate that biases are deeply rooted in training data and the unequal representation of marginalized groups, leading to less accurate diagnoses for women, racialized communities, and low-income individuals. Although technical and ethical approaches have been proposed, gaps persist in their practical implementation. The study concludes that without multidisciplinary intervention integrating public health, ethics, and data science perspectives, algorithms will continue to reproduce structural inequalities. This research underscores the urgency of inclusive policies and robust regulatory frameworks to ensure equity in digital mental health.
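The abstract notes that biases rooted in training data produce less accurate diagnoses for underrepresented groups. One common way such disparities are surfaced in practice is a subgroup error-rate audit, e.g. comparing false negative rates (missed diagnoses) across demographic groups. The sketch below is a minimal, hypothetical illustration of that idea; the group names, labels, and predictions are invented for demonstration and do not come from the reviewed studies.

```python
# Hypothetical audit: compare a diagnostic model's missed-diagnosis rate
# (false negative rate) across demographic groups. All data here is toy data.

def false_negative_rate(y_true, y_pred):
    """Share of actual positive cases (condition present) the model missed."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(1 for _, p in positives if p == 0) / len(positives)

# Toy labels per group: 1 = condition present, 0 = absent.
audit = {
    "group_a": {"y_true": [1, 1, 1, 0, 0], "y_pred": [1, 1, 0, 0, 0]},
    "group_b": {"y_true": [1, 1, 1, 0, 0], "y_pred": [0, 1, 0, 0, 0]},
}

rates = {g: false_negative_rate(d["y_true"], d["y_pred"])
         for g, d in audit.items()}
disparity = max(rates.values()) - min(rates.values())
print(rates, round(disparity, 2))  # group_b's condition is missed twice as often
```

A disparity near zero would indicate the model misses cases at similar rates across groups; a large gap, as in this toy example, is the kind of inequity the reviewed mitigation strategies target.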
License
Copyright (c) 2022 Ariadna Matos Matos (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License. Unless otherwise stated, associated published material is distributed under the same license.