Human-in-the-Loop Models for Ethical AI Grading: Combining AI Speed with Human Ethical Oversight
DOI: https://doi.org/10.56294/ai2025413
Keywords: Human-in-the-Loop (HITL), AI-Assisted Grading, Educational Assessment, Ethical AI, Algorithmic Transparency, Responsible Automation
Abstract
The adoption of AI-powered grading systems in academic institutions promised improved efficiency, consistency, and scalability. However, these benefits introduced ethical challenges, including algorithmic bias, contextual insensitivity, and reduced transparency, particularly in high-stakes assessments. To address these concerns, the chapter presented a Human-in-the-Loop (HITL) grading framework that integrated AI-generated recommendations with human oversight. The model consisted of four layers: (i) pre-grading configuration with customizable rubrics and model calibration; (ii) preliminary scoring using transformer-based language models; (iii) human validation and contextual adjustment of AI outputs; and (iv) transparent feedback supported by dual-logged audit trails. A case study was conducted at a mid-sized university, where the framework was applied to 800 undergraduate essays. Faculty validated 87% of the AI-generated scores with only minor adjustments, while 13% required overrides due to misinterpretations involving creative expression, linguistic nuance, or cultural context. Grading time was reduced by 40%, and student satisfaction improved owing to transparent assessment and educator involvement. These findings demonstrate that the HITL model can balance automation with ethical oversight, promoting fairer evaluations and preserving academic integrity: it enhanced faculty agency, ensured equity across diverse student populations, and built trust through explainable AI tools such as SHAP and LIME. The chapter concluded by proposing policy guidelines, technical integrations, and communication strategies, while advocating for future applications in multimodal grading and open-source ethical assessment platforms.
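The four-layer workflow summarized above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the rubric-free placeholder scorer, the `GradingRecord` structure, and the audit-log format are assumptions introduced only to make the control flow concrete (AI preliminary score, then human validation or override, then a dual-logged audit trail).

```python
"""Illustrative sketch of one Human-in-the-Loop (HITL) grading pass.

Hypothetical example only: the placeholder AI scorer, the record fields,
and the audit-log format are assumptions, not the chapter's framework.
"""
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class GradingRecord:
    essay_id: str
    ai_score: float          # preliminary score from the AI layer
    final_score: float       # score after human validation/override
    overridden: bool         # True if the educator changed the AI score
    rationale: str           # educator's note (e.g., cultural context, creativity)
    audit_log: list = field(default_factory=list)  # dual-logged trail: AI + human entries


def ai_preliminary_score(essay_text: str) -> float:
    """Placeholder for the transformer-based scoring layer (layer ii).

    A real deployment would call a calibrated language model against the
    configured rubric; a stub keeps this example self-contained.
    """
    # Toy heuristic: longer, more developed essays score higher (capped at 100).
    return min(100.0, 40.0 + 0.5 * len(essay_text.split()))


def human_validate(essay_id: str, essay_text: str,
                   educator_score: float | None, rationale: str = "") -> GradingRecord:
    """Layer (iii): the educator confirms the AI score or overrides it."""
    ai_score = ai_preliminary_score(essay_text)
    timestamp = datetime.now(timezone.utc).isoformat()

    record = GradingRecord(
        essay_id=essay_id,
        ai_score=ai_score,
        final_score=educator_score if educator_score is not None else ai_score,
        overridden=educator_score is not None and educator_score != ai_score,
        rationale=rationale,
    )
    # Dual-logged audit trail: one entry for the AI decision, one for the human.
    record.audit_log.append({"actor": "ai", "score": ai_score, "at": timestamp})
    record.audit_log.append({"actor": "educator", "score": record.final_score,
                             "overridden": record.overridden, "at": timestamp,
                             "rationale": rationale})
    return record


if __name__ == "__main__":
    essay = "A short essay arguing that automation in assessment needs oversight..."
    # The educator overrides the AI score after reviewing the creative framing.
    result = human_validate("essay-001", essay, educator_score=78.0,
                            rationale="AI undervalued the creative framing.")
    print(json.dumps(asdict(result), indent=2))
```

In a full system the stub scorer would be replaced by the calibrated model from layer (ii), and each persisted audit entry could carry the SHAP or LIME explanation that supports the transparent-feedback layer (iv).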
License
Copyright (c) 2025 Muthu Selvam, Rubén González Vallejo (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License. Unless otherwise stated, associated published material is distributed under the same license.