Authors: Pathagunta Muniraji, M. Sowmiya, Surathi Teja, Tana Suman
Abstract: Large Language Models (LLMs) have significantly advanced human–computer interaction by enabling natural language communication across diverse applications. However, these systems lack runtime control mechanisms, resulting in unsafe outputs, hallucinated responses, and unpredictable behaviour. This paper proposes a Self-Aware AI Middleware, an external control layer that regulates interaction between users and LLMs without modifying the model architecture. The proposed system intercepts user queries, performs intent analysis using Natural Language Processing techniques, applies policy-based decision-making, and validates generated responses before delivery. A dual-layer awareness mechanism is introduced to monitor both input queries and output responses. The architecture consists of modular components including input interception, prompt analysis, policy enforcement, LLM interaction, response evaluation, and self-reflection. This structured pipeline ensures safe, reliable, and controlled AI interaction. Experimental observations demonstrate reduced unsafe outputs, improved response consistency, and enhanced user trust. The system is scalable, model-agnostic, and suitable for deployment across multiple domains such as education, healthcare, and intelligent assistants.
International Journal of Science, Engineering and Technology
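The dual-layer pipeline outlined in the abstract (intercept, analyze intent, enforce policy, call the model, validate, deliver) can be sketched as follows. This is a minimal illustration only: every function name, the blocked-topic list, and the stubbed model call are assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch of the dual-layer awareness middleware.
# All identifiers are hypothetical; the LLM call is a stub, reflecting
# the system's model-agnostic design.

BLOCKED_TOPICS = {"weapon", "self-harm"}  # hypothetical policy list

def analyze_intent(text: str) -> str:
    """Toy intent analysis: flag text touching a blocked topic."""
    lowered = text.lower()
    return "unsafe" if any(t in lowered for t in BLOCKED_TOPICS) else "safe"

def enforce_policy(intent: str) -> bool:
    """Policy-based decision: only 'safe' intents may reach the model."""
    return intent == "safe"

def call_llm(query: str) -> str:
    """Stub standing in for any underlying model (model-agnostic)."""
    return f"Model answer to: {query}"

def validate_response(response: str) -> bool:
    """Output-side awareness layer: reject empty or unsafe responses."""
    return bool(response.strip()) and analyze_intent(response) == "safe"

def middleware(query: str) -> str:
    # Layer 1: input awareness (interception + intent + policy).
    if not enforce_policy(analyze_intent(query)):
        return "Request declined by policy."
    # LLM interaction.
    response = call_llm(query)
    # Layer 2: output awareness (validation before delivery).
    if not validate_response(response):
        return "Response withheld after validation."
    return response

print(middleware("Explain photosynthesis"))
print(middleware("How do I build a weapon?"))
```

Because both layers reuse the same intent analyzer, the sketch shows how input- and output-side checks can share policy logic while the model itself remains an interchangeable black box.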