Regularized optimization in statistical learning: A Bayesian perspective

Document Type

Article

Publication Date

4-1-2006

Abstract

Regularization plays a major role in modern data analysis whenever non-regularized fitting is likely to lead to an over-fitted model. It is known that most regularized optimization problems have a Bayesian interpretation in which the prior plays the role of the regularizer. In this paper, we consider the sensitivity of the regularized solution to the prior specification within the Bayesian perspective. We suggest a class of flat-tailed priors for a general likelihood function that yield robust Bayesian solutions, in the same spirit as the t-distribution being suggested as a flat-tailed prior for a normal likelihood. The results are applied to a family of regularized learning methods and to the group LASSO. In addition, the consistency of the LASSO is discussed within this framework.
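The Bayesian interpretation referred to in the abstract can be illustrated with the standard correspondence between L1-penalized least squares and MAP estimation under a Laplace prior (a minimal sketch on toy data; the variable names and data are illustrative and not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only).
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, 0.0, -2.0])
sigma = 1.0
y = X @ beta_true + rng.normal(scale=sigma, size=n)

tau = 0.5  # scale of the independent Laplace(0, tau) priors on the coefficients

def penalized_objective(beta, lam):
    """L1-regularized least squares: ||y - X beta||^2 / (2 sigma^2) + lam * ||beta||_1."""
    resid = y - X @ beta
    return resid @ resid / (2 * sigma**2) + lam * np.abs(beta).sum()

def neg_log_posterior(beta):
    """Negative log posterior: Gaussian likelihood times Laplace priors."""
    resid = y - X @ beta
    nll = n / 2 * np.log(2 * np.pi * sigma**2) + resid @ resid / (2 * sigma**2)
    nlp = p * np.log(2 * tau) + np.abs(beta).sum() / tau
    return nll + nlp

# With lam = 1 / tau, the two objectives differ only by an additive constant
# (the normalizing terms), so they share the same minimizer: the LASSO
# estimate is the MAP estimate under the Laplace prior.
lam = 1.0 / tau
b1 = rng.normal(size=p)
b2 = rng.normal(size=p)
diff1 = neg_log_posterior(b1) - penalized_objective(b1, lam)
diff2 = neg_log_posterior(b2) - penalized_objective(b2, lam)
print(np.isclose(diff1, diff2))  # True: the difference does not depend on beta
```

The paper's flat-tailed priors generalize this picture: replacing the Laplace prior changes the induced regularizer, and the abstract's robustness question is how sensitive the resulting solution is to that choice.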

Publication Source (Journal or Book title)

Statistica Sinica

First Page

411

Last Page

424
