An Explainable Hybrid AI System For Multi-Class Brain Tumor Detection Using VGG16 And Large Language Models

27 Mar

Authors: Amarnath, Vivek Upadhyay, Pratyush Dutta Shukla, Mr. Sameer Awasthi

Abstract: Early detection of brain tumors is crucial for improving treatment outcomes, but diagnosis still depends heavily on specialists' expertise in analyzing MRI images, which can be limited by high workload, subjective variation in interpretation, and a shortage of well-trained professionals [1], [2]. Recently, deep-learning-based automated approaches have achieved high performance in tumor detection, outperforming traditional radiomics approaches by extracting features directly from images and generalizing well across tumor types [3]–[5]. Despite these advances, many such systems output only a classification result, providing no clinically useful insight into their decisions, which limits their practical use in clinical settings. In this paper, we present a hybrid diagnostic support system that couples transfer-learning-based MRI classification with an AI-powered explanation module. For the classification model, we fine-tuned the VGG16 CNN on a carefully curated dataset of MRI images covering glioma, meningioma, pituitary tumor, and no tumor. Taking inspiration from recent work on multimodal AI systems and explainable medical imaging [6], [7], the proposed system incorporates a large language model, Groq LLaMA-3.3. The system is designed to generate clinically adequate explanations, interpreting symptoms and providing initial guidance based on the model's prediction. This combination addresses the lack of transparency common in CNN-based medical systems while maintaining high diagnostic accuracy. Our experiments show that the proposed VGG16 model performs well, matching the effectiveness of other leading CNN models used in brain tumor classification [3], [4], [8].
Adding the medical explanation module makes the system easier to use, delivering predictions in natural language, as other AI-driven clinical systems under development now do [7]. In conclusion, the system is a practical, low-cost, easy-to-understand tool for early screening, conceived to assist, not replace, health professionals, particularly in geographical areas where radiological expertise is scarce.
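The hand-off between the CNN classifier and the LLM explanation module can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the class names follow the four categories in the abstract, while the function names, probability values, and prompt wording are assumptions for illustration only.

```python
# Hypothetical glue code between the fine-tuned VGG16 classifier and the
# LLM explanation module. Only the four class labels come from the paper;
# everything else is illustrative.

CLASSES = ["glioma", "meningioma", "pituitary tumor", "no tumor"]

def top_prediction(probs):
    """Map the classifier's softmax probabilities to a label and confidence."""
    idx = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[idx], probs[idx]

def build_explanation_prompt(label, confidence):
    """Compose a prompt for the LLM (e.g. Groq LLaMA-3.3) from the prediction."""
    return (
        f"An MRI classifier predicted '{label}' with {confidence:.0%} "
        f"confidence. For a clinician, summarize typical symptoms and "
        f"suggest initial next diagnostic steps. State clearly that this "
        f"is screening support, not a diagnosis."
    )

# Example: probabilities as they might come from the VGG16 softmax head.
label, conf = top_prediction([0.06, 0.81, 0.09, 0.04])
prompt = build_explanation_prompt(label, conf)
```

The design point is that the LLM never sees the image: it receives only the predicted label and confidence, so the explanation stage adds transparency without affecting the classifier's diagnostic accuracy.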

DOI: https://doi.org/10.5281/zenodo.19254308