Authors: Professor P. E. Pawar, Ms. Sanjeevani Yadav, Mr. Aditya Jagtap, Ms. Mahi Deshmukh, Ms. Nikita Jankar, Ms. Anuja Nikam, Ms. Shravani Ghadge, Ms. Snehal Mahadik
Abstract: Artificial Intelligence (AI) has become a transformative force across various sectors, yet its environmental impact, particularly from energy consumption and carbon emissions, is an increasing concern. Training and deploying large-scale models such as BERT and GPT require immense computational resources, contributing to significant power use and environmental degradation. In response, Green AI has emerged to promote energy-efficient and environmentally sustainable AI development. This paper explores the core principles, techniques, and applications of Green AI. It highlights the environmental costs of conventional AI, including the carbon footprint of training, data center energy demands, and hardware lifecycle impacts. Green AI promotes efficiency alongside accuracy, transparency in reporting energy and carbon metrics, and lifecycle-based evaluations. Key techniques such as model pruning, quantization, and knowledge distillation are discussed for their role in reducing computational complexity. Efficient architectures such as MobileNet and TinyML, together with innovations in edge computing and low-power hardware (e.g., TPUs, FPGAs), are examined for their sustainability benefits. Tools and metrics such as ML CO2 Impact and performance-per-watt benchmarks support the evaluation of sustainable AI. Real-world applications in smart agriculture, energy management, and urban planning illustrate the practical relevance of Green AI. The paper concludes by addressing ethical and policy considerations, advocating for responsible, low-impact AI as both a technical necessity and a moral imperative.
DOI: 10.61463/ijset.vol.13.issue3.194