Abstract
In this paper we present a new method for joint feature selection and classifier learning using a sparse Bayesian approach. These tasks are performed by optimizing a global loss function that includes a term associated with the empirical loss and another representing a feature selection and regularization constraint on the parameters. To minimize this function we use a recently proposed technique, the Boosted Lasso algorithm, which follows the regularization path of the empirical risk associated with our loss function. We develop the algorithm for a well-known non-parametric classification method, the relevance vector machine, and perform experiments on a synthetic data set and three databases from the UCI Machine Learning Repository. The results show that our method is able to select the relevant features, in some cases increasing classification accuracy when feature selection is performed. © 2008 Springer-Verlag London Limited.
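As a rough formalization of the objective the abstract describes (the symbols $\beta$, $\ell$, $\lambda$, and $\Omega$ are illustrative notation of our own, not taken from the paper), the global loss combines an empirical term with a sparsity-inducing penalty:

```latex
% Hedged sketch of the global loss described in the abstract: an empirical
% loss term plus a feature-selection/regularization penalty on the
% parameters. All symbols are illustrative, not the paper's notation.
\min_{\beta}\;
  \underbrace{\sum_{i=1}^{n} \ell\bigl(y_i, f(x_i;\beta)\bigr)}_{\text{empirical loss}}
  \;+\;
  \lambda\,\underbrace{\Omega(\beta)}_{\text{feature selection / regularization}}
```

The Boosted Lasso algorithm traces approximate solutions of such $\ell_1$-type penalized objectives as $\lambda$ varies, which is what "following the regularization path" refers to.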
Original language | English |
---|---|
Pages (from-to) | 299-308 |
Journal | Pattern Analysis and Applications |
Volume | 11 |
Issue | 3-4 |
DOIs | |
Publication status | Published - 1 Sept 2008 |