## On the Convergence Properties of a Second-Order Augmented Lagrangian Method for Nonlinear Programming Problems with Inequality Constraints

### Journal of Optimization Theory and Applications (2015-11-18): 1-18

The objective of this paper is to conduct a theoretical study of the convergence properties of a second-order augmented Lagrangian method for solving nonlinear programming problems with both equality and inequality constraints. Specifically, we utilize a specially designed generalized Newton method to furnish the second-order iteration of the multipliers and show that, when the linear independence constraint qualification and the strong second-order sufficient condition hold, the method is locally convergent with a superlinear rate, even when the penalty parameter is fixed and/or strict complementarity fails.
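The paper's distinguishing feature is the second-order (generalized Newton) multiplier iteration; as a rough illustration of the augmented Lagrangian mechanics it builds on, the following sketch uses only the classical first-order multiplier update λ ← max(0, λ + ρg(x)) on a toy two-variable problem. The problem data, step sizes, and iteration counts are invented for illustration and are not from the paper.

```python
import numpy as np

def f(x):       # objective: squared distance to (1, 1)
    return (x[0] - 1.0)**2 + (x[1] - 1.0)**2

def grad_f(x):
    return np.array([2.0*(x[0] - 1.0), 2.0*(x[1] - 1.0)])

def g(x):       # inequality constraint g(x) <= 0
    return x[0] + x[1] - 1.0

grad_g = np.array([1.0, 1.0])

def augmented_lagrangian(x0, rho=10.0, outer=50, inner=200, lr=0.01):
    x, lam = np.array(x0, dtype=float), 0.0
    for _ in range(outer):
        # inner loop: gradient descent on the augmented Lagrangian in x
        for _ in range(inner):
            t = max(0.0, lam + rho * g(x))      # shifted multiplier estimate
            x = x - lr * (grad_f(x) + t * grad_g)
        # first-order multiplier update for an inequality constraint
        lam = max(0.0, lam + rho * g(x))
    return x, lam

x, lam = augmented_lagrangian([0.0, 0.0])
# x tends to the projection of (1, 1) onto {x : x0 + x1 <= 1}, and lam
# approaches the KKT multiplier of the active constraint
```

Note that the penalty parameter ρ stays fixed throughout; the paper's point is that, with a second-order multiplier update, superlinear convergence is possible in exactly this fixed-penalty regime.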

## Automatic Differentiation: Point and Interval Taylor Operators

### Encyclopedia of Optimization (2001-01-01): 113-118

## An SQP algorithm for mathematical programs with nonlinear complementarity constraints

### Applied Mathematics and Mechanics (2009-05-01) 30: 659-668

In this paper, we describe a successive approximation and smooth sequential quadratic programming (SQP) method for mathematical programs with nonlinear complementarity constraints (MPCC). We introduce a class of smooth programs to approximate the MPCC. A line search on an *l*_{1} penalty function ensures global convergence, while the superlinear convergence rate is established under the strict complementarity and second-order sufficient conditions. Moreover, we prove that the final iterate is an exact stationary point of the mathematical program with equilibrium constraints (MPEC) when the algorithm terminates finitely.
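The abstract does not specify the paper's class of smooth approximating programs. One standard way to smooth a complementarity constraint 0 ≤ a ⊥ b ≥ 0, shown here purely as an illustration and not claimed to be the paper's construction, is the perturbed Fischer–Burmeister function, whose zero set satisfies ab = t² and therefore recovers exact complementarity as t → 0:

```python
import math

def fb_smooth(a, b, t):
    # Perturbed Fischer-Burmeister function: fb_smooth(a, b, t) = 0 holds
    # exactly when a >= 0, b >= 0 and a*b = t**2, so the complementarity
    # condition a*b = 0 is recovered in the limit t -> 0.
    return a + b - math.sqrt(a*a + b*b + 2.0*t*t)

def root_in_a(b, t, lo=0.0, hi=10.0, iters=100):
    # fb_smooth is strictly increasing in a, so bisection finds its unique
    # root a(t) = t**2 / b on [lo, hi] for b > 0.
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if fb_smooth(mid, b, t) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

# with b fixed at 1, the root in a shrinks like t**2 as the smoothing
# parameter t goes to 0
roots = [root_in_a(1.0, t) for t in (1e-1, 1e-2, 1e-3)]
```

Replacing each complementarity constraint by such a smooth equation, and driving t → 0, is the shape of the "successive approximation by smooth programs" idea the abstract describes.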

## Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity

### Mathematical Programming (2011-12-01) 130: 295-319

An Adaptive Regularisation framework using Cubics (ARC) was proposed for unconstrained optimization and analysed in Cartis, Gould and Toint (Part I, Math Program, doi: 10.1007/s10107-009-0286-5 , 2009), generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser, Deuflhard and Erdmann (Optim Methods Softw 22(3):413–431, 2007). In this companion paper, we further the analysis by providing worst-case global iteration complexity bounds for ARC and for a second-order variant to achieve approximate first-order and, for the latter, second-order criticality of the iterates. In particular, the second-order ARC algorithm requires at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, or equivalently, function- and gradient-evaluations, to drive the norm of the gradient of the objective below the desired accuracy $\epsilon$, and $\mathcal{O}(\epsilon^{-3})$ iterations to reach approximate nonnegative curvature in a subspace. The orders of these bounds match those proved for Algorithm 3.3 of Nesterov and Polyak, which minimizes the cubic model globally on each iteration. Our approach is more general in that it allows the cubic model to be solved only approximately and may employ approximate Hessians.
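The step the complexity analysis concerns can be sketched in the scalar case, where the cubic model can be minimized globally in closed form and the regularisation weight σ is adapted from the ratio of achieved to predicted decrease. The test function, tolerances, and update constants below are illustrative choices, not the paper's:

```python
import math

def arc_minimize(f, df, d2f, x0, sigma=1.0, tol=1e-8, max_iter=200):
    # Adaptive cubic regularisation sketch (scalar case): at each step the
    # cubic model m(s) = f + g*s + 0.5*H*s^2 + (sigma/3)*|s|^3 is minimized
    # globally in closed form, and sigma is adapted from the achieved vs.
    # predicted decrease.
    x = x0
    for _ in range(max_iter):
        g, H = df(x), d2f(x)
        if abs(g) < tol:
            break
        # global minimizer of the scalar cubic model (assumes g != 0);
        # it satisfies (H + sigma*|s|)*s = -g with H + sigma*|s| >= 0
        u = (-H + math.sqrt(H*H + 4.0*sigma*abs(g))) / (2.0*sigma)
        s = -u if g > 0 else u
        model_decrease = -(g*s + 0.5*H*s*s + (sigma/3.0)*abs(s)**3)
        rho = (f(x) - f(x + s)) / model_decrease
        if rho > 0.1:          # successful step: accept, relax sigma
            x += s
            sigma = max(1e-8, 0.5*sigma)
        else:                  # unsuccessful: shrink the step by raising sigma
            sigma *= 2.0
    return x

f   = lambda x: (x*x - 1.0)**2          # nonconvex double well
df  = lambda x: 4.0*x*(x*x - 1.0)
d2f = lambda x: 12.0*x*x - 4.0
x_star = arc_minimize(f, df, d2f, x0=0.1)
```

Unsuccessful steps (ρ below the acceptance threshold) increase σ and so shorten the next trial step, playing the role a radius reduction plays in trust-region methods; the paper's bounds count precisely how many such iterations can occur in the worst case.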

## Semidefinite representation of convex sets

### Mathematical Programming (2010-03-01) 122: 21-64

Let $S = \{x \in \mathbb{R}^n : g_1(x) \geq 0, \ldots, g_m(x) \geq 0\}$ be a semialgebraic set defined by multivariate polynomials *g*_{i}(*x*). Assume *S* is convex, compact and has nonempty interior. Let $S_i = \{x \in \mathbb{R}^n : g_i(x) \geq 0\}$, and let ∂*S* (resp. ∂*S*_{i}) be the boundary of *S* (resp. *S*_{i}). This paper, as does the subject of semidefinite programming (SDP), concerns linear matrix inequalities (LMIs). The set *S* is said to have an LMI representation if it equals the set of solutions to some LMI; it is known that some convex *S* may not be LMI representable (Helton and Vinnikov in Commun Pure Appl Math 60(5):654–674, 2007). A question arising from Nesterov and Nemirovski (SIAM Studies in Applied Mathematics, SIAM, Philadelphia, 1994), see Helton and Vinnikov (Commun Pure Appl Math 60(5):654–674, 2007) and Nemirovski (Plenary lecture, International Congress of Mathematicians (ICM), Madrid, Spain, 2006), is: given a subset *S* of $\mathbb{R}^n$, does there exist an LMI representable set Ŝ in some higher-dimensional space $\mathbb{R}^{n+N}$ whose projection down onto $\mathbb{R}^n$ equals *S*? Such an *S* is called semidefinite representable, or SDP representable. This paper addresses the SDP representability problem. The following are the main contributions of this paper: (i) assume the *g*_{i}(*x*) are all concave on *S*; if the positive definite Lagrange Hessian condition holds, i.e., the Hessian of the Lagrange function for the problem of minimizing any nonzero linear function *ℓ*^{T}*x* on *S* is positive definite at the minimizer, then *S* is SDP representable. (ii) If each *g*_{i}(*x*) is either sos-concave (−∇^{2}*g*_{i}(*x*) = *W*(*x*)^{T}*W*(*x*) for some possibly nonsquare matrix polynomial *W*(*x*)) or strictly quasi-concave on *S*, then *S* is SDP representable. (iii) If each *S*_{i} is either sos-convex or poscurv-convex (*S*_{i} is compact convex, whose boundary has positive curvature and is nonsingular, i.e., ∇*g*_{i}(*x*) ≠ 0 on ∂*S*_{i} ∩ *S*), then *S* is SDP representable. This also holds for *S*_{i} for which ∂*S*_{i} ∩ *S* extends smoothly to the boundary of a poscurv-convex set containing *S*. (iv) We give the complexity of Schmüdgen's and Putinar's matrix Positivstellensätze, which are critical to the proofs of (i)–(iii).

## An Inertial Proximal-Gradient Penalization Scheme for Constrained Convex Optimization Problems

### Vietnam Journal of Mathematics (2017-09-01): 1-19

We propose a proximal-gradient algorithm with penalization terms and inertial and memory effects for minimizing the sum of a proper, convex, lower semicontinuous function and a convex differentiable function, subject to the set of minimizers of another convex differentiable function. We show that, under suitable choices of the step sizes and the penalization parameters, the generated iterates converge weakly to an optimal solution of the addressed bilevel optimization problem, while the objective function values converge to its optimal value.
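The inertial (memory) step at the heart of such schemes can be sketched as follows. This omits the paper's penalization term for the lower-level problem entirely and shows only a FISTA-style inertial proximal-gradient iteration, applied to an illustrative l1-regularized least-squares problem with invented data:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_prox_grad(A, b, lam, iters=500):
    # Inertial (FISTA-style) proximal-gradient iteration for
    #   min_x  0.5*||A x - b||^2 + lam*||x||_1
    L = np.linalg.norm(A, 2)**2            # Lipschitz constant of the gradient
    x = x_prev = np.zeros(A.shape[1])
    tk = 1.0
    for _ in range(iters):
        t_next = 0.5*(1.0 + np.sqrt(1.0 + 4.0*tk*tk))
        y = x + ((tk - 1.0)/t_next) * (x - x_prev)   # inertial extrapolation
        grad = A.T @ (A @ y - b)
        x_prev, x = x, soft_threshold(y - grad/L, lam/L)
        tk = t_next
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true
x_hat = inertial_prox_grad(A, b, lam=0.01)
```

The extrapolation point y combines the current and previous iterates, which is exactly the "inertial and memory effects" the abstract refers to; the paper's algorithm additionally adds a penalization term that steers the iterates toward the lower-level solution set.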

## Approximation in multiobjective optimization

### Journal of Global Optimization (1992-06-01) 2: 117-132

Some results of approximation type for multiobjective optimization problems with a finite number of objective functions are presented. Namely, for a sequence of multiobjective optimization problems *P*_{n} which converges in a suitable sense to a limit problem *P*, properties of the sequence of approximate Pareto efficient sets of the *P*_{n} are studied with respect to the Pareto efficient set of *P*. The exterior penalty method as well as the variational approximation method appear as particular cases of this framework.

## Entropy Function-Based Algorithms for Solving a Class of Nonconvex Minimization Problems

### Journal of the Operations Research Society of China (2015-12-01) 3: 441-458

Recently, the $l_p$ minimization problem ($p \in (0,\,1)$) for sparse signal recovery has been studied extensively because of its efficiency. In this paper, we propose a general smoothing algorithmic framework based on the entropy function for solving a class of $l_p$ minimization problems, which includes the well-known unconstrained $l_2$–$l_p$ problem as a special case. We show that any accumulation point of the sequence generated by the proposed algorithm is a stationary point of the $l_p$ minimization problem, and we derive a lower bound for the nonzero entries of the stationary points of the smoothing problem. We implement a specific version of the proposed algorithm; the results indicate that the entropy function-based algorithm is effective.
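The shape of such a smoothing framework can be illustrated with a simpler surrogate than the paper's entropy function: below, the nonsmooth term $|x_i|^p$ is replaced by $(x_i^2 + \varepsilon^2)^{p/2}$, the smoothed problem is minimized by gradient descent, and ε is decreased across stages. The problem data, schedule, and parameters are invented for illustration:

```python
import numpy as np

def smoothed_l2_lp(A, b, lam=0.01, p=0.5, inner=1000):
    # Smoothing scheme for  min_x 0.5*||A x - b||^2 + lam * sum_i |x_i|^p
    # (0 < p < 1): the nonsmooth, nonconvex term |x_i|^p is replaced by
    # (x_i^2 + eps^2)^(p/2), minimized by gradient descent, and eps is
    # decreased across stages.  (The paper's smoothing uses an entropy
    # function; this eps-perturbation is a simpler stand-in.)
    lr = 1.0 / np.linalg.norm(A, 2)**2
    x = np.zeros(A.shape[1])
    for eps in (1.0, 0.1, 0.01):
        for _ in range(inner):
            pen_grad = lam * p * x * (x*x + eps*eps)**(p/2.0 - 1.0)
            x = x - lr * (A.T @ (A @ x - b) + pen_grad)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true
x_hat = smoothed_l2_lp(A, b)
```

The abstract's lower bound on the nonzero entries of stationary points is what makes such schemes useful for support identification: entries either stay (numerically) near zero or settle above a computable threshold.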

## Optimality conditions and finite convergence of Lasserre’s hierarchy

### Mathematical Programming (2014-08-01) 146: 97-121

Lasserre’s hierarchy is a sequence of semidefinite relaxations for solving polynomial optimization problems globally. This paper studies the relationship between optimality conditions in nonlinear programming theory and finite convergence of Lasserre’s hierarchy. Our main results are: (i) Lasserre’s hierarchy has finite convergence when the constraint qualification, strict complementarity and second-order sufficiency conditions hold at every global minimizer, under the standard archimedean condition; the proof uses a result of Marshall on boundary Hessian conditions. (ii) These optimality conditions are all satisfied at every local minimizer if a finite set of polynomials, which are polynomials in the coefficients of the input polynomials, do not vanish at the input data (i.e., they hold in a Zariski open set). This implies that, under archimedeanness, Lasserre’s hierarchy has finite convergence generically.

## An alternating variable method for the maximal correlation problem

### Journal of Global Optimization (2012-09-01) 54: 199-218

The maximal correlation problem (MCP) of optimizing correlations between sets of variables plays an important role in many areas of statistical applications. To date, algorithms for the general MCP stop at solutions of the multivariate eigenvalue problem (MEP), which serves only as a necessary condition for the global maxima of the MCP. For statistical applications, the global maximizer is quite desirable. In searching for a global solution of the MCP, we propose in this paper an alternating variable method (AVM), which contains a core engine for seeking a global maximizer. We prove that (i) the algorithm converges globally and monotonically to a solution of the MEP, (ii) any convergent point satisfies a global optimality condition of the MCP, and (iii) whenever the involved matrix *A* is nonnegative irreducible, the method converges globally to the global maximizer. These properties imply that the AVM is an effective approach for obtaining a global maximizer of the MCP. Numerical tests are carried out and suggest superior performance over existing methods, especially in finding a global solution of the MCP.
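The basic alternating update underlying property (i) can be sketched as block-coordinate ascent on x^T A x; the paper's AVM adds a core engine for globality on top of such an iteration, which is not reproduced here. The sketch assumes identity diagonal blocks (variables within each set uncorrelated), in which case the normalized-sum update is the exact block maximizer and the objective increases monotonically; the 2-set example data are invented:

```python
import numpy as np

def alternating_mep(A, sizes, sweeps=200, seed=0):
    # Block-coordinate ascent for the problem behind the MCP:
    #   max  x^T A x   s.t.  each block x_i of x has unit norm,
    # whose stationarity conditions form the multivariate eigenvalue
    # problem (MEP).  With identity diagonal blocks A_ii = I, setting
    # x_i to the normalized sum of A_ij x_j (j != i) is the exact block
    # maximizer, so the objective increases monotonically.
    rng = np.random.default_rng(seed)
    offs = np.concatenate(([0], np.cumsum(sizes)))
    x = rng.standard_normal(offs[-1])
    for a, b in zip(offs[:-1], offs[1:]):
        x[a:b] /= np.linalg.norm(x[a:b])
    for _ in range(sweeps):
        for a, b in zip(offs[:-1], offs[1:]):
            c = A[a:b] @ x - A[a:b, a:b] @ x[a:b]   # sum over j != i of A_ij x_j
            x[a:b] = c / np.linalg.norm(c)
    return x, x @ A @ x

# two sets of two variables; cross-correlation block C couples them
C = np.diag([0.6, 0.3])
A = np.eye(4)
A[0:2, 2:4] = C
A[2:4, 0:2] = C.T
x, val = alternating_mep(A, [2, 2])
# at the global maximizer, val = 2 + 2 * (largest singular value of C)
```

On this two-set example the iteration reduces to a power method on C, so it reaches the global maximizer from generic starting points; in general the plain alternating update only guarantees an MEP solution, which is exactly the gap the paper's AVM is designed to close.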