
J4 ›› 2009, Vol. 44 ›› Issue (7): 33-37.

• Articles •

A new class of memory gradient methods with Wolfe line search

TANG Jingyong, DONG Li, ZHANG Xiujun

  1. College of Mathematics and Information Science, Xinyang Normal University, Xinyang 464000, Henan, China
  • Received: 2008-11-17  Online: 2009-07-16  Published: 2009-11-01
  • About the author: TANG Jingyong (1979- ), lecturer and master's degree candidate, mainly engaged in research on nonlinear programming. Email: tangjingyong@tom.com
  • Supported by:

    the Youth Foundation of Xinyang Normal University (20080208; 20070207)

Abstract:

A new class of memory gradient methods for unconstrained optimization problems was presented, and its global convergence and linear convergence rate were proved under mild conditions. At each iteration the methods use the information of the current and previous iterates to generate a descent direction, without computing or storing any matrices, which makes them suitable for large-scale optimization problems. Preliminary numerical experiments show that the new methods are more efficient than the FR, PRP and HS conjugate gradient methods and the steepest descent method under the Wolfe line search.
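To illustrate the kind of iteration the abstract describes, here is a minimal sketch of a memory gradient step under a Wolfe line search. The direction formula d_k = -g_k + beta_k * d_{k-1} and the choice of the memory parameter beta_k below are generic illustrations, not the specific formulas proposed in the paper; the function memory_gradient and its default parameters are hypothetical, and the Wolfe conditions are enforced by SciPy's standard line_search routine.

    import numpy as np
    from scipy.optimize import line_search

    def memory_gradient(f, grad, x0, beta=0.4, tol=1e-6, max_iter=5000):
        # Generic memory gradient iteration: d_k = -g_k + beta_k * d_{k-1}.
        # beta_k below is a simple damped choice for illustration only;
        # the paper's own formula for the memory parameter is not shown here.
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g                                  # first step: steepest descent
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            # Wolfe line search: sufficient decrease + curvature condition.
            alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.9)[0]
            if alpha is None:                   # search failed: restart with -g
                d = -g
                alpha = line_search(f, grad, x, d, gfk=g)[0] or 1e-8
            x = x + alpha * d
            g_new = grad(x)
            # Memory term scaled by the gradient-norm ratio and damped by beta,
            # so each step reuses information from the previous direction
            # while requiring only vector operations (no matrix storage).
            beta_k = beta * np.linalg.norm(g_new) / max(np.linalg.norm(g), 1e-12)
            d = -g_new + beta_k * d
            g = g_new
        return x

    # Example: minimize the Rosenbrock function from a standard start point.
    from scipy.optimize import rosen, rosen_der
    print(memory_gradient(rosen, rosen_der, [-1.2, 1.0]))

As in the abstract's comparison with the steepest descent method, the memory term is what distinguishes the iteration from a plain gradient step: the restart to -g when the line search fails is a common safeguard, not taken from the paper.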

Key words: unconstrained optimization; memory gradient method; global convergence; linear convergence rate

CLC number:

  • O221.2