Do Large Language Models Show Human-like Biases? Exploring the Confidence–Competence Gap in AI

Abstract

This study investigates self-assessment tendencies in Large Language Models (LLMs), examining whether their patterns resemble human cognitive biases such as the Dunning–Kruger effect. LLMs, including GPT, BARD, Claude, and LLaMA, are evaluated using confidence scores on reasoning tasks: each model provides a self-assessed confidence level before and after responding to a range of questions. The results show cases where high confidence does not correlate with correctness, suggesting overconfidence; conversely, low confidence despite accurate responses indicates potential underestimation. Confidence scores vary across problem categories and difficulty levels, with confidence dropping for more complex queries. GPT-4 displays consistent confidence, while LLaMA and Claude show greater variation. Some of these patterns resemble the Dunning–Kruger effect, in which incompetence leads to inflated self-evaluation. While not conclusive, these observations parallel that phenomenon and provide a foundation for further exploring the alignment of competence and confidence in LLMs. As LLMs take on expanding societal roles, further research into their self-assessment mechanisms is warranted to fully understand their capabilities and limitations.
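The elicitation procedure described above (a pre-response confidence rating, the answer itself, a post-response rating, and a comparison against ground truth) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the `query_model` function, the prompt wording, and the 80/40 over- and under-confidence thresholds are all placeholder assumptions.

```python
# Minimal sketch (not the authors' code) of eliciting pre- and post-response
# confidence scores from an LLM and comparing them with answer correctness.

from statistics import mean


def query_model(prompt: str) -> str:
    """Hypothetical placeholder for an LLM API call (GPT, BARD, Claude, LLaMA, ...)."""
    raise NotImplementedError("Wire this to your preferred LLM client.")


def assess_question(question: str, correct_answer: str) -> dict:
    # 1. Pre-response confidence: 0-100 self-rating before answering.
    pre_conf = float(query_model(
        f"On a scale of 0-100, how confident are you that you can answer this "
        f"correctly? Reply with a number only.\nQuestion: {question}"
    ))
    # 2. The answer itself.
    answer = query_model(f"Answer concisely.\nQuestion: {question}")
    # 3. Post-response confidence: self-rating after the model sees its own answer.
    post_conf = float(query_model(
        f"On a scale of 0-100, how confident are you that this answer is correct? "
        f"Reply with a number only.\nQuestion: {question}\nAnswer: {answer}"
    ))
    # Crude correctness check; a real study would use task-specific grading.
    correct = correct_answer.strip().lower() in answer.strip().lower()
    return {"pre": pre_conf, "post": post_conf, "correct": correct}


def summarize(results: list[dict]) -> None:
    # Overconfidence: high stated confidence on wrong answers.
    over = [r for r in results if r["post"] >= 80 and not r["correct"]]
    # Underestimation: low stated confidence on correct answers.
    under = [r for r in results if r["post"] <= 40 and r["correct"]]
    print(f"mean pre-confidence:  {mean(r['pre'] for r in results):.1f}")
    print(f"mean post-confidence: {mean(r['post'] for r in results):.1f}")
    print(f"overconfident cases: {len(over)}, underestimated cases: {len(under)}")
```

Running `assess_question` over a set of questions per model and per problem category would yield the kind of confidence-versus-correctness comparison the abstract describes; the aggregation and statistical analysis in the published study are, of course, more involved than this sketch.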

Type of Publication
Articles and Papers
Theme
Artificial Intelligence
Publishing
Information (MDPI)
Publishing Date
2024
Language
English
Status
Open Access
DOI
10.3390/info15020092
Authors
Aniket Kumar Singh; Bishal Lamichhane; Suman Devkota; Uttam Dhakal; Chandra Dhakal