Enhancing Autonomous Vehicle Security Using Explainable Artificial Intelligence For Anomaly Detection


Authors: Mrs. Ch. Veera Gayathri, Bhaviri Sri Ganesha Seetha Hanuma Gowri, Sangani Praveen Dhana Kumar, Ryali Yuvaraj, Ganta Venkata Sridhar, Gadhi Subrahmanya Krishna Teja

Abstract: Autonomous driving systems have emerged as a transformative technology in modern intelligent transportation, enabling vehicles to operate with minimal or no human intervention. These systems rely heavily on large volumes of sensor data, communication networks, and machine learning algorithms to make real-time driving decisions. However, the increasing integration of autonomous vehicles into vehicular networks has also introduced significant cybersecurity and safety challenges. In particular, anomalous behaviours caused by cyber-attacks, faulty sensors, or malicious vehicles in Vehicular Ad Hoc Networks (VANETs) can threaten the reliability and safety of autonomous driving environments. Detecting such anomalies with traditional monitoring approaches is difficult due to the complexity, scale, and dynamic nature of vehicular communication data. To address these challenges, this study proposes an explainable artificial intelligence (XAI)-based anomaly detection framework for autonomous driving systems. The proposed framework integrates machine learning models with explainability techniques to identify abnormal behaviours in vehicular networks while providing transparent interpretations of model decisions. Initially, autonomous driving datasets are pre-processed through feature extraction, redundancy elimination, data balancing, and normalization to improve model performance. Several machine learning algorithms, including Decision Tree, Random Forest, Support Vector Machine (SVM), K-Nearest Neighbour (KNN), Deep Neural Network (DNN), and AdaBoost, are implemented to classify vehicles as normal or anomalous based on their behavioural features. To enhance interpretability, the framework incorporates explainable AI techniques such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). These methods provide both global and local explanations by identifying the most influential features contributing to anomaly detection decisions.
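To make the described pipeline concrete, the sketch below shows one possible realisation of the pre-processing, classification, and explanation stages. The dataset file (`vanet_features.csv`), its binary `label` column, and the specific library choices (scikit-learn, imbalanced-learn, `shap`, `lime`) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the described pipeline on a hypothetical tabular VANET
# dataset. Assumed file and column names are placeholders, not the paper's data.
import pandas as pd
import shap
from imblearn.over_sampling import SMOTE
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# --- Pre-processing: redundancy elimination, balancing, normalization ---
# One row per vehicle/message window; "label" = 0 (normal) or 1 (anomalous).
df = pd.read_csv("vanet_features.csv").drop_duplicates()
X, y = df.drop(columns=["label"]), df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
# Balance the training split only, so the held-out test set stays untouched.
X_train, y_train = SMOTE(random_state=42).fit_resample(X_train, y_train)

scaler = MinMaxScaler().fit(X_train)
X_train = pd.DataFrame(scaler.transform(X_train), columns=X.columns)
X_test = pd.DataFrame(scaler.transform(X_test), columns=X.columns)

# --- Classification: Random Forest as one of the listed models ---
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# --- Global explanations: SHAP feature attributions ---
shap_values = shap.TreeExplainer(clf).shap_values(X_test)
# Binary classifiers may yield one attribution array per class; keep the
# anomalous class before plotting the most influential features.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values
if getattr(vals, "ndim", 2) == 3:  # newer shap: (n_samples, n_features, n_classes)
    vals = vals[:, :, 1]
shap.summary_plot(vals, X_test)

# --- Local explanations: LIME for a single flagged vehicle ---
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X.columns),
    class_names=["normal", "anomalous"],
    mode="classification",
)
explanation = lime_explainer.explain_instance(X_test.values[0], clf.predict_proba)
print(explanation.as_list())  # per-feature contributions for this instance
```

SMOTE stands in here for the unspecified data-balancing step, and Random Forest for one of the six listed classifiers; any of the other models could be substituted, with `shap.KernelExplainer` replacing `TreeExplainer` for non-tree models such as SVM, KNN, or a DNN.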

DOI: