## SEARCH

#### Institution

##### (see all 38)

- McMaster University (41%)
- Delft University of Technology (39%)
- University of Geneva (25%)
- Lehigh University (12%)
- Erasmus University (7%)

#### Author

##### (see all 49)

- Terlaky, Tamás [selected] (75%)
- Roos, Cornelis (31%)
- Vial, Jean-Philippe (24%)
- Roos, Kees (9%)
- Frenk, Hans (7%)

#### Subject

##### (see all 46)

- Mathematics [selected] (75%)
- Optimization (65%)
- Operations Research, Management Science (37%)
- Algorithms (33%)
- Computational Science and Engineering (25%)

## CURRENTLY DISPLAYING:

Showing 1 to 10 of 75 matching Articles
## Applications

### Interior Point Methods for Linear Optimization (2005-01-01): 247-258

## Partial Updating

### Interior Point Methods for Linear Optimization (2005-01-01): 317-328

## Invariance Conditions for Nonlinear Dynamical Systems

### Optimization and Its Applications in Control and Data Sciences (2016-01-01) 115: 265-280

Recently, Horváth et al. (Appl Math Comput, submitted) proposed a novel unified approach to the study of invariance conditions, i.e., sufficient and necessary conditions under which certain convex sets are invariant sets for linear dynamical systems. In this paper, by utilizing an analogous methodology, we generalize these results to nonlinear dynamical systems. First, the Theorems of Alternatives, i.e., the nonlinear Farkas lemma and the *S*-lemma, together with Nagumo's Theorem, are utilized to derive invariance conditions for discrete and continuous systems. Only standard assumptions are needed to establish the invariance of broadly used convex sets, including polyhedral and ellipsoidal sets. Second, we establish an optimization framework to computationally verify the derived invariance conditions. Finally, we derive analogous invariance conditions without imposing any additional conditions.
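For the linear discrete-time case that this abstract builds on, a classical invariance condition is easy to check numerically: the ellipsoid $\{x : x^\top P x \le 1\}$ is invariant for $x_{k+1} = Ax_k$ exactly when $A^\top P A - P$ is negative semidefinite. The sketch below is illustrative only (the example matrices are assumed, not from the paper, and the chapter's nonlinear conditions are more general):

```python
import numpy as np

# Classical invariance check for a discrete linear system x_{k+1} = A x_k:
# the ellipsoid E = {x : x^T P x <= 1} is invariant iff A^T P A - P
# is negative semidefinite (all eigenvalues <= 0).

def ellipsoid_invariant(A, P, tol=1e-9):
    """Return True if A^T P A <= P in the semidefinite sense."""
    M = A.T @ P @ A - P          # symmetric by construction
    return bool(np.all(np.linalg.eigvalsh(M) <= tol))

A = np.array([[0.5, 0.2],
              [0.0, 0.6]])       # a stable example system (assumed data)
P = np.eye(2)                    # unit ball as candidate invariant set
print(ellipsoid_invariant(A, P))  # True: the unit ball maps into itself
```

The same eigenvalue test applied to an unstable system (e.g. with a diagonal entry larger than 1) correctly reports that the unit ball is not invariant.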

## Superlinear Convergence

### High Performance Optimization (2000-01-01) 33: 143-155

The goal of this chapter is to establish the superlinear convergence of a path-following algorithm for semidefinite programming, without non-degeneracy assumptions. Specifically, we propose a predictor-corrector type algorithm with (*r* + 1)-step superlinear convergence of order 2/(1 + 2^{-r}), where any positive integer can be assigned to the parameter *r*. The parameter *r* is used in the algorithm as an upper bound on the number of successive corrector steps that are allowed between two predictor steps. The proof of superlinear convergence is based on the properties of the central path that were derived in Chapter 5.
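The quoted order $2/(1 + 2^{-r})$ can be tabulated directly; as the bound *r* on successive corrector steps grows, the order approaches the quadratic rate 2. A minimal illustration (the function name is ours, not the chapter's):

```python
# The (r+1)-step superlinear convergence order from the abstract:
# 2 / (1 + 2^{-r}), which increases toward 2 as r grows.

def convergence_order(r):
    return 2.0 / (1.0 + 2.0 ** (-r))

for r in range(1, 6):
    print(r, round(convergence_order(r), 4))
# r = 1 gives 4/3; r = 5 already gives about 1.94
```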

## Lexicographic Pivoting Rules

### Encyclopedia of Optimization (2001-01-01): 1263-1267

## Self-regular functions and new search directions for linear and semidefinite optimization

### Mathematical Programming (2002-06-01) 93: 129-171

In this paper, we introduce the notion of a *self-regular* function. Such a function is strongly convex and smooth coercive on its domain, the positive real axis. We show that any such function induces a so-called self-regular proximity function and a corresponding search direction for primal-dual path-following interior-point methods (IPMs) for solving linear optimization (LO) problems. It is proved that the new large-update IPMs enjoy a polynomial $\mathcal{O}(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon})$ iteration bound, where *q* ≥ 1 is the so-called barrier degree of the kernel function underlying the algorithm. The constant hidden in the $\mathcal{O}$-symbol depends on *q* and on the growth degree *p* ≥ 1 of the kernel function. When the kernel function is chosen appropriately, the new large-update IPMs have a polynomial $\mathcal{O}(\sqrt{n}\log n\log\frac{n}{\varepsilon})$ iteration bound, thus improving the currently best known bound for large-update methods by almost a factor of $\sqrt{n}$. Our unified analysis also yields the best known $\mathcal{O}(\sqrt{n}\log\frac{n}{\varepsilon})$ iteration bound for small-update IPMs. At each iteration, only one linear system needs to be solved. An extension of the above results to semidefinite optimization (SDO) is also presented.

## Solving the Canonical Problem

### Interior Point Methods for Linear Optimization (2005-01-01): 71-83

## Target-Following Methods for Linear Programming

### Interior Point Methods of Mathematical Programming (1996-01-01) 5: 83-124

We give a unifying approach to various primal-dual interior point methods by performing the analysis in ‘the space of complementary products’, or ν-space, which is closely related to the use of weighted logarithmic barrier functions. We analyze central and weighted path-following methods, Dikin-path-following methods, variants of a shifted barrier method and the cone-affine scaling method, efficient centering strategies, and efficient strategies for computing weighted centers.
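The ν-space idea mentioned in the abstract can be sketched in a few lines: for a strictly positive primal-dual pair $(x, s)$, collect the complementarity products into the vector $v$ with $v_i = \sqrt{x_i s_i}$. On the central path with parameter $\mu$ all entries of $v$ coincide at $\sqrt{\mu}$; weighted paths correspond to non-uniform targets for $v$. This is a generic illustration under our own naming, not the chapter's notation:

```python
import numpy as np

# Map a strictly positive primal-dual pair (x, s) to its vector of
# square-rooted complementarity products, v_i = sqrt(x_i * s_i).
# A perfectly centered point with parameter mu has v = sqrt(mu) * e.

def v_vector(x, s):
    x, s = np.asarray(x, float), np.asarray(s, float)
    assert np.all(x > 0) and np.all(s > 0), "interior point required"
    return np.sqrt(x * s)

mu = 0.25
x = np.array([1.0, 2.0, 4.0])
s = mu / x                        # enforce x_i * s_i = mu for all i
print(v_vector(x, s))             # every entry equals sqrt(mu) = 0.5
```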

## Back Matter - Interior Point Methods for Linear Optimization

### Interior Point Methods for Linear Optimization (2005-01-01)

## A Conic Representation of the Convex Hull of Disjunctive Sets and Conic Cuts for Integer Second Order Cone Optimization

### Numerical Analysis and Optimization (2015-01-01) 134: 1-35

We study the convex hull of the intersection of a convex set *E* and a disjunctive set. This intersection is at the core of solution techniques for *Mixed Integer Convex Optimization*. We prove that if there exists a cone *K* (resp., a cylinder *C*) that has the same intersection with the boundary of the disjunction as *E*, then the convex hull is the intersection of *E* with *K* (resp., *C*). The existence of such a cone (resp., cylinder) is difficult to prove for general conic optimization. We prove existence and uniqueness of a second order cone (resp., a cylinder) when *E* is the intersection of an affine space and a second order cone (resp., a cylinder). We also provide a method for finding that cone, and hence the convex hull, for the continuous relaxation of the feasible set of a Mixed Integer Second Order Cone Optimization (MISOCO) problem, assumed to be the intersection of an ellipsoid with a general linear disjunction. This cone provides a new conic cut for MISOCO that can be used in branch-and-cut algorithms for MISOCO problems.
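The setting of the abstract, an ellipsoid intersected with a linear disjunction, is easy to instantiate numerically: points of the ellipsoid lying strictly between the two half-spaces of the disjunction are exactly the ones a conic cut is designed to remove. A hedged toy instance with our own assumed data (a unit ball and the branching disjunction $x_1 \le 0$ or $x_1 \ge 1$):

```python
import numpy as np

# Toy MISOCO-style setting: E = {x : ||x - c|| <= 1} (an ellipsoid,
# here a unit ball) and the linear disjunction a^T x <= b1 OR a^T x >= b2
# with b1 < b2, as produced by branching on an integer variable.

def in_ellipsoid(x, c):
    return np.linalg.norm(x - c) <= 1.0

def satisfies_disjunction(x, a, b1, b2):
    return a @ x <= b1 or a @ x >= b2

c = np.array([0.5, 0.0])                      # ellipsoid center (assumed)
a, b1, b2 = np.array([1.0, 0.0]), 0.0, 1.0    # disjunction: x1 <= 0 or x1 >= 1

x = np.array([0.5, 0.3])   # inside E, but strictly between the half-spaces
print(in_ellipsoid(x, c), satisfies_disjunction(x, a, b1, b2))
# True False: this point survives the continuous relaxation,
# and a conic cut for the disjunction would separate it.
```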