AI-Driven Multimodal Framework for Transparent and Secure Faceless Registrations

4 Mar


Authors: Dr. K. Baskar, R. Sathyaraj, P. Annumalika, K. Archana, M. Jayasree

Abstract: Faceless registration is increasingly used in digital identity verification, enabling seamless authentication without physical presence. However, traditional unimodal authentication methods, such as facial recognition or voice-based verification, are vulnerable to spoofing attacks and offer limited robustness. To address these challenges, we propose a Multimodal Sentiment Analysis (MSA) framework based on a Multichannel Cross-modal Fusion Network (MCFNet), which integrates text, audio, and video modalities to enhance sentiment-driven user verification. The MCFNet architecture employs cross-modal attention mechanisms to capture interdependencies between modalities, redundancy-reduction techniques to minimize irrelevant features, and a Text-Guided Information Interactive Module (TIIM) to prioritize meaningful textual data. The system further improves security and reliability through deep-learning-based feature extraction, a nonverbal information refinement module (NIRM) for multimodal synchronization, and an optimized decision-making model for accurate sentiment analysis. Experimental evaluation on benchmark multimodal biometric datasets shows that the proposed approach significantly improves authentication accuracy while reducing false acceptance and false rejection rates. These results highlight the potential of multimodal sentiment analysis for secure, transparent, and robust faceless registration, making it a viable approach to next-generation digital identity management.
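The abstract does not include an implementation, but the text-guided cross-modal attention it describes (text queries attending over audio and video features before fusion) can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's actual MCFNet/TIIM code: it uses plain scaled dot-product attention in NumPy, with arbitrary sequence lengths and feature dimensions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query, key, value):
    """Scaled dot-product attention: `query` tokens from one modality
    attend over `key`/`value` tokens from another modality."""
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)      # (Tq, Tk) similarity
    weights = softmax(scores, axis=-1)       # rows sum to 1
    return weights @ value                   # (Tq, d) attended context

rng = np.random.default_rng(0)
text  = rng.standard_normal((4, 8))   # 4 text tokens, feature dim 8
audio = rng.standard_normal((6, 8))   # 6 audio frames
video = rng.standard_normal((5, 8))   # 5 video frames

# Text-guided fusion: text queries attend to each nonverbal modality,
# then the text features and both contexts are concatenated.
audio_ctx = cross_modal_attention(text, audio, audio)
video_ctx = cross_modal_attention(text, video, video)
fused = np.concatenate([text, audio_ctx, video_ctx], axis=-1)
print(fused.shape)  # (4, 24)
```

In a full model, the fused representation would feed a downstream classifier for the sentiment-driven verification decision; learned projection matrices for queries, keys, and values are omitted here for brevity.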

DOI: 10.61463/ijset.vol.13.issue1.168