Epistemic Injustice in Generative AI: A Pipeline Taxonomy, Empirical Hypotheses, and Stage-Matched Governance
DOI: https://doi.org/10.56294/ai2025417

Keywords: Epistemic injustice, Generative AI governance, Pipeline taxonomy

Abstract
Introduction: generative AI systems increasingly influence whose knowledge is represented, how meaning is framed, and who benefits from information. However, these systems frequently perpetuate epistemic injustices—structural harms that compromise the credibility, intelligibility, and visibility of marginalized communities.
Objective: this study aims to systematically analyze how epistemic injustices emerge across the generative AI pipeline and to propose a framework for diagnosing, testing, and mitigating these harms through targeted design and governance strategies.
Method: a mutually exclusive and collectively exhaustive (MECE) taxonomy is developed to map testimonial, hermeneutical, and distributive injustices onto four development stages: data collection, model training, inference, and dissemination. Building on this framework, four theory-driven hypotheses (H1–H4) are formulated to connect design decisions to measurable epistemic harms. Two hypotheses—concerning role-calibrated explanations (H3) and opacity-induced deference (H4)—are empirically tested through a PRISMA-style meta-synthesis of 21 behavioral studies.
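
To make the Method concrete, the stage-by-injustice mapping at the heart of the taxonomy can be sketched as a simple lookup structure. The sketch below is illustrative only: the stage and injustice labels follow the abstract, but the hypothesis assignments and field names are placeholder assumptions, not the paper's exact operationalization.

    # Illustrative sketch: the pipeline taxonomy as a lookup structure.
    # Stage and injustice labels come from the abstract; the hypothesis
    # assignments below are placeholder assumptions for illustration.
    PIPELINE_TAXONOMY = {
        "data_collection": {"injustices": ["testimonial", "distributive"], "hypotheses": ["H1"]},
        "model_training": {"injustices": ["hermeneutical"], "hypotheses": ["H2"]},
        "inference": {"injustices": ["testimonial", "hermeneutical"], "hypotheses": ["H3"]},
        "dissemination": {"injustices": ["distributive"], "hypotheses": ["H4"]},
    }

    def injustices_at(stage: str) -> list[str]:
        """Return the injustice types mapped to a given pipeline stage."""
        return PIPELINE_TAXONOMY[stage]["injustices"]

    if __name__ == "__main__":
        for stage, entry in PIPELINE_TAXONOMY.items():
            print(f"{stage}: {', '.join(entry['injustices'])} | {', '.join(entry['hypotheses'])}")

An auditing tool could walk such a structure to verify that every pipeline stage has at least one assigned injustice type and hypothesis, which operationalizes the "collectively exhaustive" half of the MECE requirement.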
Results: findings reveal that AI opacity significantly increases deference to system outputs (effect size d ≈ 0.46–0.58), reinforcing authority biases. In contrast, explanations aligned with stakeholder roles enhance perceived trustworthiness and fairness (d ≈ 0.40–0.84). These effects demonstrate the material impact of design choices on epistemic outcomes.
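
For readers unfamiliar with the metric, d here denotes a standardized mean difference (Cohen's d). The formula below is the standard textbook definition, stated for reference; it is not reproduced from the paper, whose estimates come from the meta-synthesis of 21 studies.

    d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
    \qquad
    s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}

Here \bar{x}_1 and \bar{x}_2 are the group means (for example, opaque versus transparent conditions), s_1 and s_2 the group standard deviations, and n_1 and n_2 the group sizes. By Cohen's usual benchmarks (0.2 small, 0.5 medium, 0.8 large), the reported 0.46–0.58 range corresponds to a roughly medium effect.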
Conclusions: epistemic justice should not be treated as a post hoc ethical concern but as a designable, auditable property of AI systems. We propose stage-specific governance interventions, such as participatory data audits, semantic drift monitoring, and role-sensitive explanation regimes, to embed justice across the pipeline. This framework supports the development of more accountable, inclusive generative AI.
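
As one concrete illustration of the proposed interventions, the sketch below shows what semantic drift monitoring might look like in code: it compares the centroid of a term's contextual embeddings across two corpus snapshots and raises an alert when the shift exceeds a threshold. The embedding source, the 0.15 threshold, and all function names are illustrative assumptions, not components specified in the paper.

    # Illustrative sketch of semantic drift monitoring: flag a term whose
    # contextual embeddings shift between two corpus snapshots.
    # The 0.15 cosine-distance threshold is an arbitrary example value.
    import numpy as np

    def centroid(vectors: np.ndarray) -> np.ndarray:
        """Mean embedding over all contexts in which a term appears."""
        return vectors.mean(axis=0)

    def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
        return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def drift_alert(embeddings_t0: np.ndarray, embeddings_t1: np.ndarray,
                    threshold: float = 0.15) -> bool:
        """True if the term's usage centroid has drifted by more than `threshold`."""
        return cosine_distance(centroid(embeddings_t0), centroid(embeddings_t1)) > threshold

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        t0 = rng.normal(size=(50, 384))   # embeddings from an earlier corpus snapshot
        t1 = t0 + 0.3                     # later snapshot with a systematic shift, for illustration
        print("drift alert:", drift_alert(t0, t1))

In a participatory audit regime of the kind the paper advocates, such checks would run over community-relevant vocabulary, with thresholds set in consultation with affected groups rather than fixed a priori.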
License
Copyright (c) 2025 Joffrey Baeyaert (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License. Unless otherwise stated, associated published material is distributed under the same license.