Abstract:
The convergence rate of Markov random search algorithms designed to find the extremizer of a function is investigated. It is shown that, for a wide class of random search methods possessing a natural symmetry property, the number of evaluations of the objective function needed to find the extremizer with accuracy $\varepsilon$ cannot grow slower than $|\ln\varepsilon|$.
Key words: random search, global optimization, stochastic optimization, estimate of convergence rate.
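To illustrate the class of methods in question, the following is a minimal sketch of a symmetric Markov random search: each candidate point is obtained by adding a perturbation drawn from a distribution symmetric about zero to the current iterate, and the next state depends only on the current one. The objective `f`, the Gaussian step scale `sigma`, and the greedy acceptance rule are illustrative assumptions and are not the specific algorithms analyzed in the paper.

```python
import numpy as np

def markov_random_search(f, x0, sigma=0.1, budget=1000, rng=None):
    """Minimal symmetric Markov random search (illustrative sketch).

    Each iteration proposes x + delta with delta drawn from a
    distribution symmetric about zero (here, Gaussian), so the process
    is Markov and symmetric in the informal sense used in the abstract.
    A proposal is accepted only if it improves the objective f.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1                                   # objective evaluations used
    while evals < budget:
        delta = rng.normal(scale=sigma, size=x.shape)  # symmetric perturbation
        y = x + delta
        fy = f(y)
        evals += 1
        if fy < fx:                             # greedy acceptance (minimization)
            x, fx = y, fy
    return x, fx, evals

# Usage example: minimize a simple quadratic
if __name__ == "__main__":
    x_best, f_best, n = markov_random_search(lambda x: np.sum(x**2),
                                             x0=np.ones(3))
    print(x_best, f_best, n)
```

In this sketch the evaluation counter `evals` plays the role of the quantity whose growth, as a function of the accuracy $\varepsilon$, is bounded from below by $|\ln\varepsilon|$ for methods of this symmetric type.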