Convergence results for a self-dual regularization of convex problems
Document Type
Article
Publication Date
6-1-2012
Abstract
We study a one-parameter regularization technique for convex optimization problems whose main feature is self-duality with respect to Legendre-Fenchel conjugation. The self-dual technique, introduced by Goebel, can be defined for both convex and saddle functions. When applied to the latter, we show that if a saddle function has at least one saddle point, then the sequence of saddle points of the regularized saddle functions converges to the minimal-norm saddle point of the original function. For convex problems with inequality and state constraints, we apply the regularization directly to the objective and constraint functions and show that, under suitable conditions, the Lagrangians of the regularized problems hypo/epi-converge to the original Lagrangian and the associated value functions epi-converge to the original value function. Finally, we give explicit conditions ensuring that the regularized sequence satisfies Slater's condition. © 2012 Taylor and Francis Group, LLC.
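The following is a minimal numerical sketch of the general phenomenon the abstract describes, namely that minimizers of regularized problems converge to the minimal-norm solution of the original problem as the parameter tends to zero. It uses a plain quadratic (Tikhonov) perturbation rather than the paper's self-dual regularization, and the function f, the starting point, and the parameter values are hypothetical choices made only for illustration.

```python
# Illustrative sketch only: a quadratic (Tikhonov) perturbation, NOT the
# self-dual regularization studied in the paper. It shows regularized
# minimizers approaching the minimal-norm minimizer as lambda -> 0.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Convex function whose minimizers form the line x1 + x2 = 2;
    # the minimizer of smallest Euclidean norm is (1, 1).
    return (x[0] + x[1] - 2.0) ** 2

for lam in [1.0, 0.1, 0.01, 0.001]:
    # Regularized objective: f plus a small strongly convex term.
    reg = lambda x, lam=lam: f(x) + 0.5 * lam * np.dot(x, x)
    sol = minimize(reg, x0=np.array([5.0, -3.0]))
    print(f"lambda = {lam:>6}: minimizer ~ {np.round(sol.x, 4)}")
# The printed minimizers approach (1, 1), the minimizer of f closest
# to the origin, as lambda decreases.
```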
Publication Source (Journal or Book title)
Optimization
First Page
699
Last Page
716
Recommended Citation
Guevara, A., & Wolenski, P. (2012). Convergence results for a self-dual regularization of convex problems. Optimization, 61 (6), 699-716. https://doi.org/10.1080/02331934.2011.644287