Authors: Dr. P Radhika, J N V S R Sri Thanaya, S Khyathi Chowdary, G Nageswari, P Gowthami
Abstract: The rapid growth of social media and digital communication platforms has accelerated the spread of misinformation, threatening public trust, democratic processes, and health communication. As a result, fake news detection has become a major focus of Natural Language Processing (NLP) research. Traditional machine learning methods, such as Logistic Regression and Support Vector Machines, remain widely used but struggle to capture the meaning and context of text. Recent transformer-based models, including BERT and RoBERTa, have achieved state-of-the-art results on several NLP tasks through self-attention and contextual embeddings. This study presents a hybrid approach that combines a literature review with a comparative experimental study on the ISOT Fake News Dataset. We compare a baseline Logistic Regression classifier using TF-IDF features against transformer-based models, specifically BERT and RoBERTa. The results show that the transformer models outperform the traditional classifier across all metrics, demonstrating stronger contextual understanding and better generalization.
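The TF-IDF with Logistic Regression baseline mentioned in the abstract can be sketched as follows. This is a minimal illustration only: the texts and labels below are toy placeholders, not samples from the ISOT Fake News Dataset, and the n-gram range and solver settings are assumptions rather than the paper's reported configuration.

```python
# Sketch of a TF-IDF + Logistic Regression fake-news baseline.
# Toy data and hyperparameters are illustrative assumptions, not the
# study's actual ISOT setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Officials confirm the new policy takes effect next month.",
    "Shocking miracle cure doctors do not want you to know about!",
    "The city council approved the budget after a public hearing.",
    "You will not believe this one weird trick to get rich fast!",
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake (toy labels)

# Pipeline: unigram/bigram TF-IDF features feeding a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

preds = model.predict(texts)
print(list(preds))
```

Because the classifier operates on sparse bag-of-n-gram weights, it cannot model word order or long-range context, which is the limitation the abstract attributes to traditional methods and which motivates the comparison with BERT and RoBERTa.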
International Journal of Science, Engineering and Technology