a Basic Research of Artificial Intelligence Laboratory (BRAIn Lab), Moscow, Russia
b Moscow Center for Advanced Studies, Moscow, Russia
c Federated Learning Problems Laboratory, Moscow, Russia
d Innopolis University
Abstract:
Variational inequalities (VIs) provide a powerful framework that encompasses a wide range of problems in optimization, machine learning (ML), and beyond. At the same time, high performance in ML tasks requires large volumes of data, which are typically handled with stochastic methods. However, the widely used SGD method suffers from the non-vanishing variance of its stochastic gradient estimates. Variance reduction techniques were developed to resolve this issue; this approach is well studied for minimization but far less so for VIs. In this paper, we modify the SAGA method, known for its effectiveness in stochastic minimization, by integrating the Extragradient scheme to address VI problems. We provide a theoretical analysis of the proposed method and support it with experiments on bilinear problems and image denoising tasks.
Keywords: variational inequalities, variance reduction, stochastic optimization, extragradient, SAGA.
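To fix ideas, the display below is a minimal sketch of one plausible way to combine a SAGA-style estimator with an Extragradient step for a VI with a finite-sum operator $F = \frac{1}{n}\sum_{i=1}^{n} F_i$. The notation here is illustrative rather than the paper's own: $\gamma$ denotes a step size, $i_k$ a uniformly sampled index, and $\phi_j^k$ the stored table points of SAGA; the precise update rule and its assumptions are those defined in the body of the paper.

```latex
% One illustrative Extragradient step with SAGA-type estimators.
% g^k and \tilde g^k combine a fresh sample F_{i_k} with the stored table {phi_j^k}.
\begin{align*}
  g^k        &= F_{i_k}(x^k) - F_{i_k}(\phi_{i_k}^k)
                + \tfrac{1}{n}\textstyle\sum_{j=1}^{n} F_j(\phi_j^k), \\
  x^{k+1/2}  &= x^k - \gamma\, g^k
                && \text{(extrapolation step)}, \\
  \tilde g^k &= F_{i_k}(x^{k+1/2}) - F_{i_k}(\phi_{i_k}^k)
                + \tfrac{1}{n}\textstyle\sum_{j=1}^{n} F_j(\phi_j^k), \\
  x^{k+1}    &= x^k - \gamma\, \tilde g^k
                && \text{(update step)}, \\
  \phi_{i_k}^{k+1} &= x^k, \qquad \phi_j^{k+1} = \phi_j^k \ \ (j \neq i_k)
                && \text{(table update)}.
\end{align*}
```

The key property of such estimators is that, unlike plain stochastic samples $F_{i_k}(x^k)$, their variance shrinks as the iterates and the stored table points approach the solution, which is what makes constant step sizes and linear rates attainable in the finite-sum setting.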