Personalized Mood Melody through Facial Emotion Analysis Research
Authors: Professor Dr. Y. Subba Reddy, Chilamakuru Vara Mounika, Dalal Nafisa, Avula Mamatha, Gittala Sunil Kumar
Abstract: In the modern era of human-computer interaction, leveraging artificial intelligence to enhance personal well-being has gained significant attention. This research explores the innovative concept of generating personalized music based on facial emotion analysis, aiming to improve mental health and emotional regulation [1]. Music has long been recognized for its profound psychological effects, capable of influencing mood, stress levels, and cognitive functions. By integrating advanced machine learning algorithms with facial recognition technology, this study proposes a system that dynamically adapts music playlists to match and potentially uplift an individual's emotional state. The methodology involves real-time facial emotion detection using convolutional neural networks (CNNs) that classify emotions such as happiness, sadness, anger, surprise, fear, and neutrality. High-resolution facial images or video streams are analyzed to extract facial landmarks and micro-expressions, which are then processed to determine the prevailing emotional state.
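The emotion-to-playlist adaptation described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `detect_emotion` is a hypothetical stub standing in for the CNN classifier, and the playlist table and its entries are invented for illustration.

```python
# Sketch of mapping a detected emotion to a music playlist.
# All names (detect_emotion, PLAYLISTS, the playlist entries) are
# assumptions for illustration, not the paper's actual system.
from typing import List

# The six emotion classes named in the abstract.
EMOTIONS = ["happiness", "sadness", "anger", "surprise", "fear", "neutrality"]

# Hypothetical playlist table keyed by detected emotion; a real system
# could choose tracks that match or gently uplift the current mood.
PLAYLISTS = {
    "happiness": ["upbeat_pop", "dance"],
    "sadness": ["soft_acoustic", "uplifting_ballads"],
    "anger": ["calming_ambient", "slow_classical"],
    "surprise": ["eclectic_mix"],
    "fear": ["soothing_instrumental"],
    "neutrality": ["general_favourites"],
}


def detect_emotion(frame) -> str:
    """Placeholder for CNN inference over a facial image or video frame."""
    # A real classifier would extract landmarks/micro-expressions from
    # `frame` and predict one of EMOTIONS; here we return a fixed label.
    return "sadness"


def recommend(frame) -> List[str]:
    """Pick a playlist for the emotion detected in the given frame."""
    emotion = detect_emotion(frame)
    # Fall back to the neutral playlist for any unrecognized label.
    return PLAYLISTS.get(emotion, PLAYLISTS["neutrality"])


print(recommend(frame=None))
```

In a deployed pipeline the stub would be replaced by CNN inference on camera frames, with the playlist refreshed whenever the detected emotion changes.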
International Journal of Science, Engineering and Technology