J4 ›› 2009, Vol. 44 ›› Issue (7): 33-37.
TANG Jingyong, DONG Li, ZHANG Xiujun
Abstract:
A new class of memory gradient methods for unconstrained optimization problems was presented. Their global convergence and linear convergence rate were proved under mild conditions. By using the current and previous iterative information to generate descent directions, the methods avoid the computation and storage of matrices, which makes them suitable for large-scale optimization problems. Numerical experiments show that, under the Wolfe line search, the new methods are more efficient than the FR, PRP, and HS conjugate gradient methods and the steepest descent method.
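The abstract does not give the paper's direction formula, so the sketch below uses one common memory gradient update from the literature, d_k = -g_k + beta_k * d_{k-1} with beta_k = rho * ||g_k|| / ||d_{k-1}||, which guarantees a descent direction by the Cauchy-Schwarz inequality; it is illustrative only and not necessarily the authors' update. The function and parameter names (memory_gradient, rho) are assumptions, and SciPy's strong Wolfe line search stands in for the paper's line search.

import numpy as np
from scipy.optimize import line_search

def memory_gradient(f, grad, x0, rho=0.5, tol=1e-6, max_iter=1000):
    """Generic memory gradient method with a Wolfe line search.

    Direction: d_k = -g_k + beta_k * d_{k-1}, with
    beta_k = rho * ||g_k|| / ||d_{k-1}||, rho in (0, 1),
    so that g_k^T d_k <= -(1 - rho) * ||g_k||^2 < 0.
    This is a common scheme from the memory gradient
    literature, not necessarily the one proposed in the paper.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g  # first iteration: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # SciPy's line_search enforces the strong Wolfe conditions
        alpha, *_ = line_search(f, grad, x, d, gfk=g)
        if alpha is None:  # line search failed; restart with steepest descent
            d = -g
            alpha, *_ = line_search(f, grad, x, d, gfk=g)
            if alpha is None:
                break
        x = x + alpha * d
        g = grad(x)
        # Memory gradient direction: only the previous direction vector is
        # reused; no matrices are formed or stored.
        beta = rho * np.linalg.norm(g) / np.linalg.norm(d)
        d = -g + beta * d
    return x

# Example: minimize the Rosenbrock function
from scipy.optimize import rosen, rosen_der
x_star = memory_gradient(rosen, rosen_der, np.array([-1.2, 1.0]))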
Key words: unconstrained optimization; memory gradient method; global convergence; linear convergence rate
URL: http://lxbwk.njournal.sdu.edu.cn/EN/Y2009/V44/I7/33