An Infeasible-Point Subgradient Method Using Adaptive Approximate Projections
We propose a new subgradient method for the minimization of nonsmooth convex functions over a convex set. To speed up computations, we use adaptive approximate projections that only require the iterates to move within a certain distance of the exact projections; this distance decreases over the course of the algorithm. In particular, the iterates in our method can be infeasible throughout the whole procedure. Nevertheless, we provide conditions that ensure convergence to an optimal feasible point under suitable assumptions. One convergence result deals with step size sequences that are fixed a priori. Two further results handle dynamic Polyak-type step sizes that depend on a lower or upper estimate of the optimal objective function value, respectively. Additionally, we briefly sketch two applications: optimization with convex chance constraints, and finding the minimum L1-norm solution to an underdetermined linear system, an important problem in compressed sensing.
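The following Python sketch illustrates the general idea under simplifying assumptions; it is not the paper's algorithm, tolerance schedule, or step size rule. It runs a subgradient step on the L1-norm objective followed by an approximate projection onto the affine set {x : Ax = b}, where the iterate is only required to come within a distance eps_k of the exact projection and eps_k shrinks with the iteration counter. All function names, the step sizes, and the accuracy schedule are illustrative.

import numpy as np

def approx_project(x, A, b, eps):
    # Approximate projection of x onto {z : A z = b} (A assumed to have full row rank).
    # For illustration we compute the exact projection and then stop "eps short" of it,
    # mimicking a projection that is only accurate up to distance eps.
    p = x - A.T @ np.linalg.solve(A @ A.T, A @ x - b)
    d = p - x
    dist = np.linalg.norm(d)
    if dist <= eps:
        return x                       # already within eps of the exact projection
    return p - (eps / dist) * d        # a point at distance eps from the exact projection

def infeasible_subgradient_l1(A, b, iters=2000):
    # Hypothetical sketch: minimize ||x||_1 subject to A x = b with subgradient steps
    # and adaptive approximate projections whose tolerance eps_k decreases with k.
    # The iterates are in general infeasible, but only up to the shrinking tolerance.
    x = np.zeros(A.shape[1])
    for k in range(1, iters + 1):
        g = np.sign(x)                 # a subgradient of the L1 norm at x
        alpha = 1.0 / k                # a priori step sizes: divergent sum, square-summable
        # (a Polyak-type alternative would use alpha = (f(x) - f_est) / ||g||^2
        #  with f_est a lower or upper estimate of the optimal value)
        eps = 1.0 / k**2               # projection accuracy, decreasing over the iterations
        x = approx_project(x - alpha * g, A, b, eps)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 80))
    x_true = np.zeros(80)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true
    x_hat = infeasible_subgradient_l1(A, b)
    print("objective ||x||_1:", np.linalg.norm(x_hat, 1))
    print("residual ||Ax - b||:", np.linalg.norm(A @ x_hat - b))

In this toy setup the final residual is small because eps_k decreases, reflecting the infeasible-but-converging behavior described above; an actual implementation would use a cheap inexact projection (e.g., a few iterations of an iterative solver) rather than computing the exact projection.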
@article{LorenzPfetschTillmann2014,
author = {D. A. Lorenz and M. E. Pfetsch and A. M. Tillmann},
title = {{An infeasible-point subgradient method using adaptive approximate projections}},
journal = {{Computational Optimization and Applications}},
volume = {57},
number = {2},
pages = {271--306},
year = {2014},
doi = {10.1007/s10589-013-9602-3}
}