In this paper, we construct a new algorithm that combines the conjugate gradient and Lanczos methods for solving nonlinear systems. The iterative direction is obtained by solving a quadratic model via the conjugate gradient and Lanczos methods. Using a backtracking line search, we find an acceptable trial step size along this direction that makes the objective function decrease nonmonotonically while the norm of the step increases monotonically. Global convergence and a local superlinear convergence rate of the proposed algorithm are established under reasonable conditions. Finally, we present numerical results to illustrate the effectiveness of the proposed algorithm.
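As a rough illustration of the nonmonotone backtracking rule mentioned above, the following Python sketch accepts the first trial step size that satisfies an Armijo-type test against the largest of the most recent objective values, which allows the objective to decrease nonmonotonically. All function names, parameter names, and default values here are illustrative assumptions, not the paper's.

```python
import numpy as np

def nonmonotone_backtracking(f, g, x, d, f_hist, beta=0.5, sigma=1e-4, max_iter=30):
    """Sketch of a nonmonotone backtracking line search (illustrative only).

    f      : objective function
    g      : gradient of f at x
    d      : search direction (assumed to be a descent direction)
    f_hist : a few most recent objective values, used as the nonmonotone reference
    """
    gTd = float(np.dot(g, d))      # directional derivative along d
    f_ref = max(f_hist)            # nonmonotone reference value
    t = 1.0
    for _ in range(max_iter):
        # Armijo-type sufficient-decrease test against the reference value
        if f(x + t * d) <= f_ref + sigma * t * gTd:
            return t
        t *= beta                  # shrink the trial step and retry
    return t
```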
The authors propose a dwindling filter algorithm with Zhou's modified subproblem for nonlinear inequality constrained optimization. The feasibility restoration phase, which is always used in traditional filter methods, is not needed. Under mild conditions, global convergence and a local superlinear convergence rate are obtained. Numerical results demonstrate that the new algorithm is effective.
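The acceptance test behind a dwindling filter can be sketched as follows: a trial point, measured by its constraint violation and objective value, is compared against every stored filter entry with a margin that shrinks ("dwindles") with the step size. The representation of the filter, the function psi, and the constant gamma below are assumptions made for illustration and are not taken from the paper.

```python
def filter_acceptable(h_trial, f_trial, filter_entries, psi, alpha, gamma=1e-4):
    # filter_entries: list of (h_j, f_j) pairs, h = constraint violation, f = objective.
    # A trial point is acceptable if, against every filter entry, it sufficiently
    # reduces either the violation or the objective.  The margin gamma * psi(alpha) * h_j
    # shrinks as the step size alpha shrinks, which relaxes the test for small steps.
    margin = gamma * psi(alpha)
    return all(h_trial <= h_j - margin * h_j or f_trial <= f_j - margin * h_j
               for (h_j, f_j) in filter_entries)
```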
A trust-region sequential quadratic programming (SQP) method is developed and analyzed for the solution of smooth equality constrained optimization problems. The trust-region SQP algorithm is based on a filter line-search technique and a composite-step approach, which decomposes the overall step into the sum of a vertical step and a horizontal step. The algorithm includes critical modifications to the horizontal step computation. An orthogonal projection matrix of the Jacobian of the constraint functions is employed in the trust-region subproblems; this orthogonal projection maps onto the null space of the transpose of the Jacobian of the constraint functions. Theoretical analysis shows that the new algorithm retains global convergence to first-order critical points under rather general conditions. Preliminary numerical results are reported.
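A minimal sketch of the composite-step idea is given below: the vertical step reduces the linearized constraint residual by least squares, and the horizontal step is computed in the tangent space of the linearized constraints via an orthogonal projector. The projected steepest-descent step used here stands in for the paper's trust-region subproblem solve, and A is assumed to have full row rank; these are simplifying assumptions for illustration only.

```python
import numpy as np

def composite_step(A, c, grad, H=None):
    # A    : constraint Jacobian (m x n), c : constraint values at the current point
    # grad : gradient of the objective, H : (optional) Hessian approximation
    # Vertical (normal) step: least-squares step toward feasibility, A v ~ -c.
    v = -np.linalg.lstsq(A, c, rcond=None)[0]
    # Orthogonal projector onto the tangent space of the linearized constraints
    # (assumes A has full row rank).
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)
    # Horizontal (tangential) step: here a projected steepest-descent step is used
    # as a stand-in for the trust-region subproblem in the tangent space.
    h = -P @ (grad + (H @ v if H is not None else 0.0))
    return v, h
```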
Chen and Zhang [Sci. China Ser. A, 45, 1390-1397 (2002)] introduced an affine scaling trust region algorithm for linearly constrained optimization and analyzed its global convergence. In this paper, we derive a new affine scaling trust region algorithm with a dwindling filter for linearly constrained optimization. Different from Chen and Zhang's work, the trial points generated by the new algorithm are accepted if they improve either the objective function or the first-order necessary optimality conditions. Under mild conditions, we discuss both the global and local convergence of the new algorithm. Preliminary numerical results are reported.
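The acceptance rule described above can be sketched as a simple either/or test on the objective value and a first-order optimality measure. The sufficient-decrease margins and the choice of optimality measure below are illustrative assumptions and do not reproduce the paper's exact test.

```python
def accept_trial(f_trial, f_current, kkt_trial, kkt_current, gamma=1e-4):
    # Accept the trial point if it sufficiently improves either the objective value
    # or a first-order optimality measure (e.g. the norm of a scaled or projected
    # gradient).  gamma is a small illustrative constant.
    improves_objective = f_trial <= f_current - gamma * max(1.0, abs(f_current))
    improves_optimality = kkt_trial <= kkt_current - gamma * kkt_current
    return improves_objective or improves_optimality
```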