# Projection algorithms with dynamic stepsize for constrained composite minimization

Volume 11, Issue 7, pp 927--936
Publication Date: May 24, 2018 · Submission Date: December 25, 2017 · Revision Date: April 13, 2018 · Acceptance Date: April 14, 2018

### Authors

• Yujing Wu - Tianjin Vocational Institute, Tianjin 300410, P. R. China.
• Luoyi Shi - Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, P. R. China.
• Rudong Chen - Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, P. R. China.

### Abstract

This paper studies the problem of minimizing the sum of a large number of component functions over the intersection of a finite family of closed convex subsets of a Hilbert space. When the number of component functions is huge, incremental projection methods are frequently used. Recently, we proposed a new incremental gradient projection algorithm for this optimization problem, parameterized by a single nonnegative constant $\mu$. The algorithm was proved to converge to an optimal solution provided that the Hilbert space is finite-dimensional and the stepsize is diminishing (for example, $\alpha_n=\mathcal{O}(1/n)$). In this paper, the algorithm is modified to employ constant and dynamic stepsizes, and the corresponding convergence properties are analyzed.
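The abstract does not give the algorithm's exact form (in particular the role of the parameter $\mu$), but the general incremental projected-gradient idea with a diminishing stepsize $\alpha_n=\mathcal{O}(1/n)$ can be sketched as follows. Everything here is an illustrative assumption: the function names, the cyclic-projection scheme, and the one-dimensional toy problem are not taken from the paper.

```python
import numpy as np

def incremental_projected_gradient(grads, projections, x0, n_iters=200):
    """Generic sketch: minimize sum_i f_i(x) over the intersection of
    closed convex sets C_j, given the gradients of the components and
    the projectors onto the individual sets.

    Uses a diminishing stepsize alpha_n = 1/(n+1), i.e. O(1/n).
    """
    x = np.asarray(x0, dtype=float)
    for n in range(n_iters):
        alpha = 1.0 / (n + 1)      # diminishing stepsize, O(1/n)
        for grad in grads:         # one incremental pass over components
            x = x - alpha * grad(x)
        for proj in projections:   # project onto each constraint set in turn
            x = proj(x)
    return x

# Toy example: minimize (x-3)^2 + (x+1)^2 over [0,2] ∩ [-1,1.5].
# The unconstrained minimizer x = 1 lies inside the intersection,
# so the iterates should approach x = 1.
grads = [lambda x: 2 * (x - 3), lambda x: 2 * (x + 1)]
projs = [lambda x: np.clip(x, 0.0, 2.0), lambda x: np.clip(x, -1.0, 1.5)]
x_star = incremental_projected_gradient(grads, projs, x0=np.array([5.0]))
```

With a constant stepsize the iterates of such a scheme typically settle only into a neighborhood of the solution whose size is proportional to the stepsize, which is why diminishing or dynamic stepsizes are used when exact convergence is required.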

### Keywords

• Composite minimization
• projection algorithm
• dynamic stepsize

### MSC

• 47H10
• 54H25
