Authors: Mrs. C. Radha, Rohith P, Sapthagiri R
Abstract: Although retrieval-augmented generation (RAG) is effective at enriching LLMs with external knowledge, the opacity of the joint decisions made by the retriever and the generator impedes the deployment of RAG models in real-world applications. In this paper, we propose an end-to-end framework for designing explainable RAG (xRAG) systems that support explainable decision intelligence through several complementary explanation techniques. We review the state of the art in four paradigms for explaining RAG models: ARENA (reinforcement learning-based evidence navigation), ArgRAG (quantitative bipolar argumentation), post-hoc sentence-level attribution, and perturbation-based explanation. The proposed approach achieves 92.3% explanation fidelity on standard benchmarks and 94.1% user trust ratings in clinical decision-making applications. A comparison of the methods shows that structured reasoning-based techniques (ArgRAG) provide the strongest explanatory capability while retaining 88.9% of the accuracy of conventional RAG models. We also show that explanation quality is strongly correlated with user trust (r=0.87, p<0.001).
International Journal of Science, Engineering and Technology