Authors: Pham Quoc Thang, Hoang Thi Lam
Abstract: Relevance Vector Machines (RVMs) are known for producing sparse probabilistic models, often with significantly fewer vectors than Support Vector Machines (SVMs). However, RVMs sometimes underperform SVMs in classification accuracy because their Bayesian inference is carried out over the entire dataset and may not emphasize the decision boundary region effectively. This paper proposes a novel hybrid framework, the SVM-Guided RVM (SG-RVM), which enhances the RVM by leveraging the support vectors of a pre-trained SVM to guide its training. Specifically, SG-RVM restricts RVM training to a subset of data points near the SVM decision boundary, thereby focusing learning effort where classification uncertainty is highest. Experiments on multiple benchmark datasets demonstrate that SG-RVM consistently outperforms the traditional RVM in accuracy while maintaining or improving model sparsity.
DOI: http://doi.org/10.5281/zenodo.15797525
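The guided-training pipeline the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the boundary-proximity rule (keeping the points with the smallest absolute SVM decision values, with a 40% quantile cutoff) is an assumption, and since scikit-learn ships no RVM, a `LogisticRegression` stand-in is fit on the reduced subset where a real SG-RVM would fit an RVM.

```python
# Sketch of SVM-guided subset selection for a second-stage classifier.
# Stage-2 model is a stand-in for the RVM; the 40% cutoff is an assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: pre-train an SVM to locate the decision boundary.
svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

# Step 2: keep only training points near the boundary (small |decision
# value|), i.e. where classification uncertainty is highest.
margin = np.abs(svm.decision_function(X_tr))
near = margin <= np.quantile(margin, 0.40)
X_sub, y_sub = X_tr[near], y_tr[near]

# Step 3: train the second, probabilistic model on the reduced subset.
# A real SG-RVM would fit an RVM here (e.g. Tipping's EM algorithm).
model = LogisticRegression(max_iter=1000).fit(X_sub, y_sub)
acc = model.score(X_te, y_te)
```

The appeal of the scheme is visible even in this toy version: stage 2 trains on well under half of the original data, so an expensive Bayesian model only ever sees the ambiguous region that the cheap SVM flagged.

```python
```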