Author: Atharva Birthare
Abstract: Large Language Models (LLMs) have rapidly permeated educational spaces, offering tools for lesson preparation, doubt clarification, and content generation. However, their tendency to hallucinate, that is, to produce confident but inaccurate, irrelevant, or fabricated information, poses critical challenges for both teachers and students. This study employs assumed (simulated) survey data from 120 teachers and 300 students to analyze awareness of, trust in, and coping strategies for hallucinations. The results highlight a significant awareness gap between teachers and students, with students more vulnerable to unverified reliance on LLMs. Four types of hallucinations (factual, intrinsic, extrinsic, and amalgamated) are discussed, along with practical mitigation strategies suited to classroom contexts. The paper also provides graphical representations of awareness, trust, and coping strategies, and it concludes with recommendations for hallucination-aware pedagogy and directions for future research.