Abstract:
Conjugate gradient methods are a powerful class of optimization algorithms known for their efficiency and versatility. In this work, we study the tuning of the Generalized Descent Symmetrical Hestenes–Stiefel (GDSHS) algorithm through its parameter $c$, a critical factor in its performance. We combine analytical and numerical approaches to estimate the optimal range for $c$, and through comprehensive numerical experiments we examine how different values of $c$ affect the algorithm's convergence behavior and computational efficiency. Comparative analyses are conducted between GDSHS variants with varying values of $c$ and established conjugate gradient methods such as Fletcher–Reeves (FR) and Polak–Ribière–Polyak (PRP+). Our findings underscore the importance of the choice $c=1$, which markedly improves the GDSHS algorithm's convergence properties and computational performance, positioning it as a competitive choice among state-of-the-art optimization techniques.
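For orientation, the sketch below shows a generic three-term Hestenes–Stiefel-type nonlinear conjugate gradient loop in which a parameter $c$ scales the correction term. This is an illustrative sketch only: the direction formula, the Armijo backtracking line search, the safeguards, and the function names (`cg_three_term_hs`, `armijo_step`) are assumptions of this illustration, not the paper's GDSHS definitions.

```python
# Illustrative sketch only: a generic nonlinear CG loop with an HS-type beta and
# a c-scaled third term. The exact GDSHS update is defined in the paper; this
# placeholder merely shows where such a parameter would enter the iteration.
import numpy as np

def armijo_step(f, grad, x, d, alpha0=1.0, rho=0.5, sigma=1e-4):
    """Simple backtracking line search satisfying the Armijo condition."""
    alpha, fx, gxd = alpha0, f(x), grad(x) @ d
    while f(x + alpha * d) > fx + sigma * alpha * gxd:
        alpha *= rho
        if alpha < 1e-12:          # give up shrinking and accept a tiny step
            break
    return alpha

def cg_three_term_hs(f, grad, x0, c=1.0, tol=1e-6, max_iter=1000):
    """Nonlinear CG with an HS-type beta and a c-scaled third term (assumed form)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = armijo_step(f, grad, x, d)
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g                        # gradient difference
        denom = d @ y
        if abs(denom) < 1e-12:               # safeguard against tiny denominators
            d_new = -g_new                   # restart with steepest descent
        else:
            beta_hs = (g_new @ y) / denom    # Hestenes–Stiefel coefficient
            theta = (g_new @ d) / denom      # correction coefficient
            d_new = -g_new + beta_hs * d - c * theta * y
        x, g, d = x_new, g_new, d_new
    return x

# Example usage: minimize the Rosenbrock function with the illustrative c = 1 setting.
if __name__ == "__main__":
    f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                               200 * (x[1] - x[0]**2)])
    print(cg_three_term_hs(f, grad, np.array([-1.2, 1.0]), c=1.0))
```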