AI-Augmented Sign Language Recognition for Inclusive Human-Computer Interaction
Author: Dhanraj P.
Abstract--The integration of Artificial Intelligence (AI) into Sign Language Recognition (SLR) marks a significant step toward inclusive Human-Computer Interaction (HCI). This paper investigates AI-augmented SLR systems as a means of bridging the communication divide between hearing-impaired individuals and digital systems. Traditional HCI mechanisms have largely excluded non-verbal modes of communication, particularly sign language, thereby marginalizing a significant user demographic. By leveraging advances in computer vision, deep learning, and natural language processing, AI-driven SLR technologies can translate dynamic sign gestures into meaningful commands, enabling deaf and hard-of-hearing communities to access and participate in digital ecosystems.

This research outlines the key methodologies and architectures employed in real-time sign language detection and translation, with a focus on convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer-based models. It further explores gesture segmentation, hand-tracking techniques, multimodal sensor fusion, and language modeling for accurate contextual interpretation. The study discusses challenges inherent to SLR systems, such as signer variability, environmental constraints, and limited annotated datasets, and presents adaptive learning techniques and domain generalization strategies to overcome them.
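To make the CNN-RNN architecture mentioned above concrete, the following is a minimal illustrative sketch in Python/PyTorch, not the system implemented in this paper: a CNN extracts per-frame spatial features from a sign video clip, and an LSTM models their temporal dynamics before a classifier maps the sequence to a sign label. All layer sizes, the class count, and the name CNNLSTMSignRecognizer are assumptions chosen for illustration.

# Illustrative sketch (assumed architecture, not the paper's system):
# a per-frame CNN encoder followed by an LSTM over the frame features.
import torch
import torch.nn as nn

class CNNLSTMSignRecognizer(nn.Module):
    def __init__(self, num_classes: int = 100, feat_dim: int = 128,
                 hidden_dim: int = 256):
        super().__init__()
        # Per-frame CNN encoder: 3-channel RGB frame -> feat_dim vector.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> (B*T, 64, 1, 1)
            nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # LSTM models the temporal ordering of the per-frame features.
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        frames = clip.view(b * t, c, h, w)        # fold time into batch
        feats = self.cnn(frames).view(b, t, -1)   # (batch, time, feat_dim)
        _, (h_n, _) = self.rnn(feats)             # final hidden state
        return self.classifier(h_n[-1])           # (batch, num_classes)

# Usage: a batch of 8 clips, each 16 RGB frames at 64x64 resolution.
model = CNNLSTMSignRecognizer()
logits = model(torch.randn(8, 16, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 100])

A transformer-based variant would replace the LSTM with self-attention over the frame features; the CNN-LSTM form is shown here only because it is the simplest instance of the spatial-plus-temporal decomposition the abstract describes.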