A New Modified Conjugate Gradient Method and Its Global Convergence Theorem

Abstract: In this article we propose a new conjugate gradient method for solving nonlinear unconstrained optimization problems. The positive step size is obtained by a line search, and the new scalar defining the new direction of the conjugate gradient method is derived from the quadratic function and the Taylor series, using the quasi-Newton condition and the Newton direction. We also prove that the search direction of the new conjugate gradient method satisfies the sufficient descent condition, and all assumptions of the global convergence property are stated and proved. To complete the contribution of this research, we study numerical results, obtained with programs written in FORTRAN, in which our new algorithm is compared with the HS and PRP methods on the same set of unconstrained optimization test problems; the numerical results are efficient and encouraging.


1: Introduction:
Conjugate gradient (CG) methods form a class of unconstrained optimization algorithms known for their low memory requirements and strong global convergence properties.
This study focuses on conjugate gradient approaches for solving nonlinear unconstrained optimization problems [1].
Consider the unconstrained optimization problem

$$\min_{x \in \mathbb{R}^n} f(x), \qquad\qquad (1)$$

where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function that is bounded from below. A nonlinear conjugate gradient method constructs a sequence $\{x_k\}$, $k \ge 1$, starting from an initial point $x_0$, using the iteration

$$x_{k+1} = x_k + \alpha_k d_k, \qquad\qquad (2)$$

where $d_k$ is a search direction and the positive step size $\alpha_k$ is obtained by a line search such that

$$\alpha_k \approx \arg\min_{\alpha > 0} f(x_k + \alpha d_k). \qquad\qquad (3)$$

The search direction is updated by

$$d_{k+1} = -g_{k+1} + \beta_k d_k, \qquad\qquad (4)$$

where $g_k = \nabla f(x_k)$ and $\beta_k$ is a scalar. Often the steepest descent direction is used in the first iteration, namely $d_0 = -g_0$ [3]. Fletcher and Reeves (1964) [4] proposed the choice $\beta_k^{FR} = \|g_{k+1}\|^2 / \|g_k\|^2$ [9]; for more detail see [10][11][12].
The global convergence qualities of CG algorithms are the topic of this research. We use

$$\beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \qquad \beta_k^{PRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^2}, \qquad y_k = g_{k+1} - g_k,$$

and compare them with our new $\beta_k$, which will be derived later. The condition

$$\beta_k^{+} = \max\{\beta_k, 0\} \qquad\qquad (5)$$

is used to avoid non-convergence on nonlinear functions when an inexact line search is employed. It is called the Powell restart condition because, whenever the scalar $\beta_k$ becomes negative, this strategy restarts with the steepest descent direction at that iteration. As we prove below, the search direction obtained by the new method satisfies the sufficient descent condition $g_k^T d_k \le -c \|g_k\|^2$ for some $c > 0$ at each iteration. To guarantee the global convergence of nonlinear conjugate gradient methods, the CG line search is usually required to satisfy the Wolfe conditions; here we use the standard Wolfe line search, which means that the step length $\alpha_k$ in equation (3) is obtained such that [13][14][15]

$$f(x_k + \alpha_k d_k) \le f(x_k) + \delta \alpha_k g_k^T d_k, \qquad\qquad (6)$$

$$g(x_k + \alpha_k d_k)^T d_k \ge \sigma g_k^T d_k, \qquad\qquad (7)$$

where $d_k$ is a descent direction and $0 < \delta < \sigma < 1$. The strong Wolfe conditions consist of (6) and the following strengthening of (7), see [16]:

$$|g(x_k + \alpha_k d_k)^T d_k| \le -\sigma g_k^T d_k. \qquad\qquad (8)$$

Our attention is on whether there is a conjugate gradient method that converges under the standard Wolfe conditions. We develop a new formula for $\beta_k$ and show that the resulting conjugate gradient method is globally convergent when the classical Wolfe conditions (6) and (7) are met.
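For illustration only (the paper's experiments are written in FORTRAN), the following Python sketch performs a single CG iteration with the restart rule (5) and a Wolfe line search. It relies on scipy.optimize.line_search, which returns a step satisfying the strong Wolfe conditions, hence also (6) and (7); all function names and tolerances here are ours, not the paper's.

import numpy as np
from scipy.optimize import line_search

def beta_hs(g_new, g, d):
    # Hestenes-Stiefel scalar: g_{k+1}^T y_k / (d_k^T y_k).
    y = g_new - g
    return g_new @ y / (d @ y)

def beta_prp(g_new, g, d):
    # Polak-Ribiere-Polyak scalar: g_{k+1}^T y_k / ||g_k||^2.
    y = g_new - g
    return g_new @ y / (g @ g)

def cg_step(f, grad, x, d, g, beta_rule=beta_hs, delta=1e-4, sigma=0.1):
    # One iteration: line search satisfying (6)-(7), then updates (2) and (4).
    alpha, *_ = line_search(f, grad, x, d, gfk=g, c1=delta, c2=sigma)
    if alpha is None:                        # line search failed: restart along -g
        return x, -g, g
    x_new = x + alpha * d                    # iteration (2)
    g_new = grad(x_new)
    beta = max(beta_rule(g_new, g, d), 0.0)  # Powell restart (5)
    return x_new, -g_new + beta * d, g_new   # direction update (4)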

2: The Derivation of a New Scaled CG Method:
The derivation of most CG methods is based, in some way, on the quadratic function, and is then generalized to non-quadratic functions by restart procedures. Hence we may assume that our objective is a convex function, and the new method depends on the quadratic form

$$f(x) = \frac{1}{2} x^T G x + b^T x + a, \qquad\qquad (10)$$

where $G$ is the Hessian matrix, $b = g$ and $a$ is a constant, so that

$$g(x) = G x + b, \qquad\qquad (11)$$

and when we compare (11) at two successive iterates with the Taylor series and note that $b = g$, we get

$$y_k = G s_k, \qquad s_k = x_{k+1} - x_k, \quad y_k = g_{k+1} - g_k, \qquad\qquad (12)$$

which is the quasi-Newton condition. Since the direction is the Newton direction, $d_{k+1} = -G^{-1} g_{k+1}$, and when multiplying the direction by $y_k^T$ and using (12) we get

$$y_k^T d_{k+1} = -y_k^T G^{-1} g_{k+1} = -s_k^T g_{k+1}, \qquad\qquad (13)$$

and since the new direction satisfies (4), we also have

$$y_k^T d_{k+1} = -y_k^T g_{k+1} + \beta_k y_k^T d_k. \qquad\qquad (14)$$

Equating (13) and (14) and solving for $\beta_k$, we then have the new direction as

$$d_{k+1} = -g_{k+1} + \beta_k^{new} d_k, \qquad\qquad (15)$$

with $\beta_k^{new}$ as we explain in equation (16), which is derived from the quadratic function:

$$\beta_k^{new} = \frac{g_{k+1}^T (y_k - s_k)}{y_k^T d_k}. \qquad\qquad (16)$$
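The new scalar can be computed in a few lines; this Python sketch is ours and assumes the reconstruction of (15) and (16) given above.

import numpy as np

def beta_new(g_new, g, x_new, x, d):
    # New scalar (16): g_{k+1}^T (y_k - s_k) / (y_k^T d_k),
    # with s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k.
    s = x_new - x
    y = g_new - g
    return g_new @ (y - s) / (y @ d)

def new_direction(g_new, g, x_new, x, d):
    # Direction update (15): d_{k+1} = -g_{k+1} + beta_k^{new} d_k.
    return -g_new + beta_new(g_new, g, x_new, x, d) * d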

3: Outlines of the New Algorithm:
Step (1): Initialization: select $x_0 \in \mathbb{R}^n$ and $\varepsilon > 0$; set $d_0 = -g_0$ and $k = 0$.
Step (2): If $\|g_k\| \le \varepsilon$, stop; $x_k$ is the optimal solution. Else go to Step (3).
Step (3): Compute $\alpha_k$ satisfying the Wolfe conditions (6), (7) and update the variable $x_{k+1} = x_k + \alpha_k d_k$.
Step (4): Compute $\beta_k^{new}$ from (16), set $d_{k+1} = -g_{k+1} + \beta_k^{new} d_k$ and $k = k + 1$, and go to Step (2).
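The outline can be realized as follows. This Python sketch is illustrative (the paper's code is in FORTRAN); it reuses scipy.optimize.line_search for Step (3) and the reconstructed scalar (16) for Step (4), and the stopping tolerance and iteration cap are assumptions of ours.

import numpy as np
from scipy.optimize import line_search

def new_cg(f, grad, x0, eps=1e-6, max_iter=10000, delta=1e-4, sigma=0.1):
    # Steps (1)-(4) of the new algorithm with a Wolfe line search.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                   # Step (1): d_0 = -g_0
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:         # Step (2): stopping test
            break
        alpha, *_ = line_search(f, grad, x, d, gfk=g, c1=delta, c2=sigma)
        if alpha is None:                    # safeguard: restart along -g
            d = -g
            continue
        x_new = x + alpha * d                # Step (3): update the iterate
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        beta = max(g_new @ (y - s) / (y @ d), 0.0)  # (16) with restart (5)
        d = -g_new + beta * d                # Step (4): new direction
        x, g = x_new, g_new
    return x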
(4.1) Assumptions:
(i) The level set $S = \{x \in \mathbb{R}^n : f(x) \le f(x_0)\}$ is bounded; to be specific, there exists a constant $B > 0$ such that $\|x\| \le B$ for all $x \in S$.
(ii) In some neighborhood $N$ of $S$ we assume that $f$ is continuously differentiable and its gradient is globally Lipschitz continuous; this means there exists a constant $L > 0$ such that $\|g(x) - g(y)\| \le L \|x - y\|$ for all $x, y \in N$ [18].
(4.2) Descent Property Theorem:
Now we provide the following theorem, which guarantees the new algorithm's descent property: suppose that Assumptions (i) and (ii) hold and that the step length $\alpha_k$ satisfies the Wolfe conditions (6) and (7); then the direction $d_k$ generated by the new algorithm satisfies the sufficient descent condition $g_k^T d_k \le -c \|g_k\|^2$ for some $c > 0$, which is a descent direction for all $k$. Hence, the proof is completed by induction.
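For the induction step, the reconstruction of (4) and (16) gives the following identity (a sketch of the argument, not the paper's full proof):

$$g_{k+1}^T d_{k+1} = g_{k+1}^T \left( -g_{k+1} + \beta_k^{new} d_k \right) = -\|g_{k+1}\|^2 + \beta_k^{new}\, g_{k+1}^T d_k,$$

so whenever the line search conditions force $\beta_k^{new}\, g_{k+1}^T d_k \le (1 - c)\|g_{k+1}\|^2$ for some $0 < c < 1$, we obtain $g_{k+1}^T d_{k+1} \le -c \|g_{k+1}\|^2$, i.e. sufficient descent.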

(4.3) Global Convergence Theorem:
We have the following Lemmas (4.4) and (4.5) for any conjugate gradient method with the strong Wolfe line search, which were first established by Zoutendijk [20] and Wolfe: under Assumptions (i) and (ii),

$$\sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty \qquad\qquad (17)$$

(the Zoutendijk condition), and the directions generated by the new method remain bounded, i.e. there exists a constant $M > 0$ such that

$$\|d_k\| \le M \quad \text{for all } k. \qquad\qquad (19)$$

Suppose, to the contrary, that the method does not converge, so that $\|g_k\| \ge \varepsilon$ for some $\varepsilon > 0$ and all $k$; then, together with the sufficient descent condition $g_k^T d_k \le -c \|g_k\|^2$, (19) shows that

$$\sum_{k \ge 0} \frac{(g_k^T d_k)^2}{\|d_k\|^2} \ge \sum_{k \ge 0} \frac{c^2 \varepsilon^4}{M^2} = \infty,$$

which conflicts with Zoutendijk's theorem (17); hence $\liminf_{k \to \infty} \|g_k\| = 0$, and therefore the algorithm is globally convergent.

Numerical Experiments:
We now provide numerical tests comparing our new approach with the HS and PRP algorithms on the same set of unconstrained optimization test functions, with the goal of determining which method is the most reliable and efficient for solving unconstrained optimization problems.
We investigated numerical trials with the same set of dimensions for each test function [21], namely n = 100, 300, 1000, 5000, 6000, and 10000. All programs are written in FORTRAN.
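As an assumed example of such a test function (the paper's exact test set is given in [21]), the extended Rosenbrock function scales to all the dimensions listed above and, in Python, can be paired with the sketch of the algorithm given earlier:

import numpy as np

def ext_rosenbrock(x):
    # Extended Rosenbrock, a standard unconstrained test function.
    return np.sum(100.0 * (x[1::2] - x[::2] ** 2) ** 2 + (1.0 - x[::2]) ** 2)

def ext_rosenbrock_grad(x):
    g = np.zeros_like(x)
    t = x[1::2] - x[::2] ** 2
    g[::2] = -400.0 * x[::2] * t - 2.0 * (1.0 - x[::2])
    g[1::2] = 200.0 * t
    return g

x0 = np.tile([-1.2, 1.0], 5000)   # n = 10000, standard starting point
# x_star = new_cg(ext_rosenbrock, ext_rosenbrock_grad, x0)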

Conclusion:
In this research we proposed a new modified conjugate gradient method whose scalar is derived from the quadratic function. Under our experiments we conclude that the global convergence of the suggested method holds; moreover, the numerical experiments reported in Tables (6.1), (6.2) and (6.3) show the efficiency of the proposed algorithm with respect to the standard HS and PRP methods on average and according to the numbers of results.