Inexact proximal stochastic gradient method for convex composite optimization
Document Type
Article
Publication Date
12-1-2017
Abstract
We study an inexact proximal stochastic gradient (IPSG) method for convex composite optimization, whose objective function is the sum of an average of a large number of smooth convex functions and a convex, but possibly nonsmooth, function. Variance reduction techniques are incorporated into the method to reduce the variance of the stochastic gradients. The main feature of the IPSG algorithm is that it allows the proximal subproblems to be solved inexactly while still retaining global convergence with desirable complexity bounds. Several subproblem stopping criteria are proposed. Global convergence and component gradient complexity bounds are derived for both cases, when the objective function is strongly convex and when it is only generally convex. Preliminary numerical experiments show the overall efficiency of the IPSG algorithm.
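To illustrate the kind of method the abstract describes, the following is a minimal sketch, not the authors' actual algorithm: an SVRG-style variance-reduced proximal stochastic gradient loop for a lasso problem, where the proximal subproblem is deliberately solved inexactly with a fixed subgradient-step budget (standing in for the paper's stopping criteria). All function names, step sizes, and iteration counts are illustrative assumptions.

```python
import numpy as np

def inexact_prox_l1(v, lam, eta, n_sub=10, step=0.3):
    """Approximately solve the proximal subproblem
        min_z  lam*||z||_1 + (1/(2*eta)) * ||z - v||^2.
    The exact solution is soft-thresholding; here a fixed budget of
    subgradient steps stands in for an inexact-solve stopping criterion."""
    z = v.copy()
    for _ in range(n_sub):
        subgrad = lam * np.sign(z) + (z - v) / eta
        z = z - step * eta * subgrad
    return z

def ipsg_lasso(A, b, lam=0.1, eta=0.05, epochs=20, seed=0):
    """SVRG-style inexact proximal stochastic gradient sketch for
        min_x (1/(2n)) * sum_i (a_i^T x - b_i)^2 + lam*||x||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n   # full gradient at the snapshot
        for _ in range(2 * n):                   # inner loop of length 2n
            i = rng.integers(n)
            # variance-reduced stochastic gradient estimate
            g = (A[i] * (A[i] @ x - b[i])
                 - A[i] * (A[i] @ x_snap - b[i])
                 + full_grad)
            x = inexact_prox_l1(x - eta * g, lam, eta)
    return x
```

The variance-reduction term drives the gradient estimator's variance to zero as the iterates approach a solution, which is what permits a constant step size; the inexact prox keeps the per-iteration cost controlled when no closed-form proximal map is available.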
Publication Source (Journal or Book title)
Computational Optimization and Applications
First Page
579
Last Page
618
Recommended Citation
Wang, X., Wang, S., & Zhang, H. (2017). Inexact proximal stochastic gradient method for convex composite optimization. Computational Optimization and Applications, 68(3), 579-618. https://doi.org/10.1007/s10589-017-9932-7