Showing 1 to 10 of 34 matching Articles

## Applications

### Interior Point Methods for Linear Optimization (2005-01-01): 247–258

## Partial Updating

### Interior Point Methods for Linear Optimization (2005-01-01): 317–328

## An inequality for generalized hexagons

### Geometriae Dedicata (1981-01-01) 10: 219–222

We show that a generalized hexagon with *s* + 1 points on a line and *t* + 1 lines through a point satisfies *s* = 1 or *t* ≤ *s*^{3}.

## Self-regular functions and new search directions for linear and semidefinite optimization

### Mathematical Programming (2002-06-01) 93: 129–171

### Abstract

In this paper, we introduce the notion of a *self-regular* function. Such a function is strongly convex and smooth coercive on its domain, the positive real axis. We show that any such function induces a so-called self-regular proximity function and a corresponding search direction for primal-dual path-following interior-point methods (IPMs) for solving linear optimization (LO) problems. It is proved that the new large-update IPMs enjoy a polynomial $\mathcal{O}(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon})$ iteration bound, where *q* ≥ 1 is the so-called barrier degree of the kernel function underlying the algorithm. The constant hidden in the $\mathcal{O}$-symbol depends on *q* and the growth degree *p* ≥ 1 of the kernel function. When choosing the kernel function appropriately, the new large-update IPMs have a polynomial $\mathcal{O}(\sqrt{n}\log n\log\frac{n}{\varepsilon})$ iteration bound, thus improving the currently best known bound for large-update methods by almost a factor $\sqrt{n}$. Our unified analysis also provides the best known $\mathcal{O}(\sqrt{n}\log\frac{n}{\varepsilon})$ iteration bound of small-update IPMs. At each iteration, we need to solve only one linear system. An extension of the above results to semidefinite optimization (SDO) is also presented.

## Solving the Canonical Problem

### Interior Point Methods for Linear Optimization (2005-01-01): 71–83

## Target-Following Methods for Linear Programming

### Interior Point Methods of Mathematical Programming (1996-01-01) 5: 83–124

We give a unifying approach to various primal-dual interior point methods by performing the analysis in ‘the space of complementary products’, or ν-space, which is closely related to the use of weighted logarithmic barrier functions. We analyze central and weighted path-following methods, Dikin-path-following methods, variants of a shifted barrier method and the cone-affine scaling method, efficient centering strategies, and efficient strategies for computing weighted centers.
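The ν-space construction admits a one-line illustration: a strictly feasible primal-dual pair (x, s) maps to the vector v with v_i = √(x_i s_i), the duality gap xᵀs equals ‖v‖², and path-following methods steer v toward a target (√μ·e for the central path, a general positive weight vector for a weighted center). A minimal sketch with illustrative names:

```python
import numpy as np

def v_space(x, s):
    # Map a strictly feasible primal-dual pair to the space of
    # complementary products: v_i = sqrt(x_i * s_i).
    return np.sqrt(x * s)

# Identity used throughout the analysis: the duality gap x^T s
# equals the squared Euclidean norm of v.
x = np.array([0.5, 1.5, 2.0])
s = np.array([4.0, 1.0, 0.5])
v = v_space(x, s)
assert np.isclose(x @ s, np.sum(v**2))
```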

## Back Matter - Interior Point Methods for Linear Optimization

### Interior Point Methods for Linear Optimization (2005-01-01)

## On Copositive Programming and Standard Quadratic Optimization Problems

### Journal of Global Optimization (2000-12-01) 18: 301–320

A standard quadratic problem consists of finding global maximizers of a quadratic form over the standard simplex. In this paper, the usual semidefinite programming relaxation is strengthened by replacing the cone of positive semidefinite matrices by the cone of completely positive matrices (the positive semidefinite matrices which allow a factorization *FF*^{T} where *F* is some non-negative matrix). The dual of this cone is the cone of copositive matrices (i.e., those matrices which yield a non-negative quadratic form on the positive orthant). This conic formulation allows us to employ primal-dual affine-scaling directions. Furthermore, these approaches are combined with an evolutionary dynamics algorithm which generates primal-feasible paths along which the objective is monotonically improved until a local solution is reached. In particular, the primal-dual affine scaling directions are used to escape from local maxima encountered during the evolutionary dynamics phase.
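The evolutionary-dynamics phase the abstract refers to can be sketched, in the interior of the simplex, as the classical discrete-time replicator dynamics: for a symmetric matrix with positive entries this map stays in the standard simplex and never decreases the quadratic form (a classical Baum–Eagon-type result). The sketch below illustrates that local phase only; the paper's copositive reformulation and primal-dual affine-scaling escape directions are not reproduced here, and the function names are illustrative.

```python
import numpy as np

def replicator_step(x, Q):
    # Discrete-time replicator dynamics: x_i <- x_i * (Qx)_i / (x^T Q x).
    # For symmetric Q with positive entries this keeps x in the standard
    # simplex and monotonically improves the objective x^T Q x, tracing
    # a primal-feasible path toward a local solution.
    y = Q @ x
    return x * y / (x @ y)

def local_maximize(Q, x0, iters=200):
    # Run the evolutionary-dynamics phase from an interior start x0.
    x = x0
    for _ in range(iters):
        x = replicator_step(x, Q)
    return x
```

For example, with Q = [[1, 2], [2, 1]] the maximizer of xᵀQx over the simplex is (1/2, 1/2) with value 3/2, and the iteration converges to it from any interior starting point.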