Human Intellectual Humility: Addressing the fallibility of Artificial Enabled Systems

Authors

  • Oyenike Akinlabi Sheffield Hallam University

DOI:

https://doi.org/10.7190/fintaf.v3i1.513

Keywords:

Intellectual Humility, Human Fallibility, Artificial Intelligence, Hallucination

Abstract

The use of artificial intelligence (AI) cuts across diverse fields, enabling the rapid generation of output. Despite its efficiency, AI is criticised for hallucinations and biased output. While this criticism is valid, society is hypercritical of these anomalies and tends to forget the fallibility of the humans who trained the models. While studies have examined this problem from a technical perspective, little is known about the influence of psychological factors on AI hallucination. Hence, from a psychological lens, this paper proposes intellectual humility as a means of disrupting AI hallucinations. A qualitative research approach will be adopted to understand the development and application of AI in lived environments. AI developers and users will be recruited, and data will be collected in three stages using questions from the Comprehensive Intellectual Humility Scale (CIHS), a behavioural semi-structured questionnaire, and face-to-face interviews. The findings of this research will highlight how intellectual humility shapes developers' accountability for AI development and users' responsibility for communicating truthful output from AI systems, holding humans accountable for AI hallucinations.

Published

2026-02-26