Abstract:
A method for solving convex nondifferentiable optimization problems is presented; it relies on the basic philosophy of the conjugate gradient method and coincides with it in the case of quadratic functions. Its main distinction from earlier counterparts is an a priori fixed bound on the memory size, which is independent of the accuracy of the resulting solution. Numerical experiments suggest a practically linear rate of convergence of the algorithm.
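The abstract does not specify the method itself, only that it reduces to the classical conjugate gradient iteration on quadratic objectives. For context, the following is a minimal sketch of that classical iteration for minimizing 0.5*x^T A x - b^T x with A symmetric positive definite; it is not the paper's nonsmooth algorithm, and all names and parameters here are illustrative.

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Classical CG for minimizing 0.5*x.T@A@x - b.T@x with A symmetric
    positive definite, i.e. for solving the linear system A x = b."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x               # residual = negative gradient of the quadratic
    d = r.copy()                # first direction is the steepest-descent direction
    max_iter = n if max_iter is None else max_iter
    for _ in range(max_iter):
        rr = r @ r
        if np.sqrt(rr) < tol:
            break
        Ad = A @ d
        alpha = rr / (d @ Ad)   # exact line search along the current direction
        x += alpha * d
        r -= alpha * Ad
        beta = (r @ r) / rr     # conjugacy-preserving update of the direction
        d = r + beta * d
    return x

# Usage example on a small synthetic quadratic problem
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M.T @ M + 5 * np.eye(5)   # symmetric positive definite
    b = rng.standard_normal(5)
    x = conjugate_gradient(A, b)
    print(np.allclose(A @ x, b))  # True: the quadratic problem is solved
```

Only the current residual and direction are stored, which is the sense in which the classical iteration keeps a fixed, accuracy-independent amount of memory.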
Presented by the member of the Editorial Board: A. I. Kibzun