## SEARCH

#### Institution

##### (see all 631)

- National University of Singapore (19)
- Nanjing University (16)
- University of British Columbia (16)
- Chemnitz University of Technology (14)
- Georgia Institute of Technology (14)

#### Author

##### (see all 1120)

- Toh, Kim-Chuan (17)
- Bauschke, Heinz H. (12)
- Boţ, Radu Ioan (11)
- Lan, Guanghui (11)
- Sun, Defeng (11)

#### Subject

##### (see all 79)

- Mathematics [x] (649)
- Calculus of Variations and Optimal Control; Optimization (295)
- Mathematics of Computing (251)
- Numerical Analysis (236)
- Mathematical Methods in Physics (234)

## CURRENTLY DISPLAYING:

Showing 1 to 10 of 649 matching Articles

## Bilevel Programming: Optimality Conditions and Duality

### Encyclopedia of Optimization (2001-01-01): 180-185

## Approximation algorithms for general packing problems and their application to the multicast congestion problem

### Mathematical Programming (2008-07-01) 114: 183-206

In this paper we present approximation algorithms based on a Lagrangian decomposition via a logarithmic potential reduction to solve a general packing or min–max resource sharing problem with *M* non-negative convex constraints on a convex set *B*. We generalize a method by Grigoriadis et al. to the case with weak approximate block solvers (i.e., with only constant, logarithmic, or even worse approximation ratios). Given an accuracy $$\varepsilon \in (0,1)$$, we show that our algorithm needs $$O(M(\ln M + \varepsilon^{-2} \ln \varepsilon^{-1}))$$ calls to the block solver, a bound independent of the data and of the approximation ratio of the block solver. For small approximation ratios the algorithm needs $$O(M(\ln M + \varepsilon^{-2}))$$ calls to the block solver. As an application we study the problem of minimizing the maximum edge congestion in a multicast communication network. Interestingly, the block problem here is the classical Steiner tree problem, which can be solved only approximately. We show how to use approximation algorithms for the Steiner tree problem to solve the multicast congestion problem approximately.
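The price-directed decomposition pattern the abstract describes — prices on the *M* constraints, a block solver for the priced subproblem over *B* — can be sketched with a minimal multiplicative-weights loop. This is a simplification of the paper's logarithmic-potential scheme, and the two-constraint instance, step count, and learning rate below are invented for illustration:

```python
import numpy as np

# Toy min-max resource sharing instance: minimize max(f_1(x), f_2(x))
# over the simplex B = {x >= 0, x_1 + x_2 = 1}, with f_m(x) = x_m.
# The optimum is x = (0.5, 0.5) with value 0.5.
vertices = np.array([[1.0, 0.0], [0.0, 1.0]])   # extreme points of B
f = lambda x: x                                  # constraint values f_m(x)

def block_solver(p):
    """Given prices p, return the vertex of B minimizing sum_m p_m * f_m(x)."""
    costs = vertices @ p
    return vertices[np.argmin(costs)]

T, eta = 200, 0.1
p = np.array([0.5, 0.5])           # prices (weights) on the M = 2 constraints
x_bar = np.zeros(2)
for _ in range(T):
    x = block_solver(p)            # cheap subproblem: no coupling constraint
    x_bar += x / T                 # average the block-solver answers
    p = p * np.exp(eta * f(x))     # raise prices of tight constraints
    p = p / p.sum()

print(x_bar, f(x_bar).max())       # x_bar near (0.5, 0.5)
```

The averaged iterate approaches the min–max optimum even though each block-solver call only sees a linear (priced) objective, which is the point of the decomposition.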

## Monotone Operators Without Enlargements

### Computational and Analytical Mathematics (2013-01-01) 50: 79-103

Enlargements have proven to be useful tools for studying maximally monotone mappings. It is therefore natural to ask in which cases the enlargement does not change the original mapping. Svaiter has recently characterized non-enlargeable operators in reflexive Banach spaces and has also given some partial results in the nonreflexive case. In the present paper, we provide another characterization of non-enlargeable operators in nonreflexive Banach spaces under a closedness assumption on the graph. Furthermore, and still for general Banach spaces, we present a new proof of the maximality of the sum of two maximally monotone linear relations. We also present a new proof of the maximality of the sum of a maximally monotone linear relation and a normal cone operator when the domain of the linear relation intersects the interior of the domain of the normal cone.

## Semidefinite representation of convex sets

### Mathematical Programming (2010-03-01) 122: 21-64

Let $${S =\{x\in \mathbb{R}^n: g_1(x)\geq 0, \ldots, g_m(x)\geq 0\}}$$ be a semialgebraic set defined by multivariate polynomials *g*_{i}(*x*). Assume *S* is convex, compact, and has nonempty interior. Let $${S_i =\{x\in \mathbb{R}^n:\, g_i(x)\geq 0\}}$$, and let ∂*S* (resp. ∂*S*_{i}) be the boundary of *S* (resp. *S*_{i}). This paper, as does the subject of semidefinite programming (SDP), concerns linear matrix inequalities (LMIs). The set *S* is said to have an LMI representation if it equals the set of solutions to some LMI, and it is known that some convex *S* may not be LMI representable (Helton and Vinnikov in Commun Pure Appl Math 60(5):654–674, 2007). A question arising from Nesterov and Nemirovski (SIAM Studies in Applied Mathematics, SIAM, Philadelphia, 1994; see also Helton and Vinnikov in Commun Pure Appl Math 60(5):654–674, 2007, and Nemirovski in Plenary lecture, International Congress of Mathematicians (ICM), Madrid, Spain, 2006) is: given a subset *S* of $${\mathbb{R}^n}$$, does there exist an LMI representable set Ŝ in some higher-dimensional space $${\mathbb{R}^{n+N}}$$ whose projection down onto $${\mathbb{R}^n}$$ equals *S*? Such an *S* is called semidefinite representable or SDP representable. This paper addresses the SDP representability problem. The main contributions are the following: (i) Assume the *g*_{i}(*x*) are all concave on *S*. If the positive definite Lagrange Hessian condition holds, i.e., the Hessian of the Lagrange function for the problem of minimizing any nonzero linear function *ℓ*^{T}*x* on *S* is positive definite at the minimizer, then *S* is SDP representable. (ii) If each *g*_{i}(*x*) is either sos-concave ( − ∇^{2}*g*_{i}(*x*) = *W*(*x*)^{T}*W*(*x*) for some possibly nonsquare matrix polynomial *W*(*x*)) or strictly quasi-concave on *S*, then *S* is SDP representable. (iii) If each *S*_{i} is either sos-convex or poscurv-convex (*S*_{i} is compact convex, and its boundary has positive curvature and is nonsingular, i.e., ∇*g*_{i}(*x*) ≠ 0 on ∂*S*_{i} ∩ *S*), then *S* is SDP representable. This also holds for *S*_{i} for which ∂*S*_{i} ∩ *S* extends smoothly to the boundary of a poscurv-convex set containing *S*. (iv) We give the complexity of the Schmüdgen and Putinar matrix Positivstellensätze, which are critical to the proofs of (i)–(iii).
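As a concrete illustration of an LMI representation — a standard textbook example, not taken from the paper — the unit disk is the feasible set of a 2×2 LMI:

```latex
% The unit disk S = {x in R^2 : 1 - x_1^2 - x_2^2 >= 0} is LMI representable:
\[
S = \left\{ x \in \mathbb{R}^2 :
\begin{pmatrix} 1 + x_1 & x_2 \\ x_2 & 1 - x_1 \end{pmatrix} \succeq 0 \right\},
\]
% since a symmetric 2x2 matrix is positive semidefinite iff its trace (here 2)
% and its determinant (here 1 - x_1^2 - x_2^2) are both nonnegative.
```

Sets that fail to admit such a direct LMI description may still be projections of higher-dimensional LMI sets, which is exactly the SDP representability question the paper studies.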

## First-Order Algorithms for Convex Optimization with Nonseparable Objective and Coupled Constraints

### Journal of the Operations Research Society of China (2017-06-01) 5: 131-159

In this paper, we consider a block-structured convex optimization model in which the block variables are nonseparable in the objective and are further linearly coupled in the constraint. For the 2-block case, we propose a number of first-order algorithms to solve this model. First, the alternating direction method of multipliers (ADMM) is extended, assuming that it is easy to optimize the augmented Lagrangian function with respect to one block of variables at a time while fixing the other block. We prove that an $$O(1/t)$$ iteration complexity bound holds under suitable conditions, where *t* is the number of iterations. If the subroutines of the ADMM cannot be implemented, then we propose new alternative algorithms, called the alternating proximal gradient method of multipliers, the alternating gradient projection method of multipliers, and hybrids thereof. Under suitable conditions, the $$O(1/t)$$ iteration complexity bound is shown to hold for all the newly proposed algorithms. Finally, we extend the analysis of the ADMM to the general multi-block case.
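The classical 2-block ADMM template that the paper extends can be sketched on a toy problem. The splitting below (minimize ½‖x−a‖² + ½‖z−b‖² subject to x = z) is invented for illustration; both subproblems have closed-form minimizers:

```python
import numpy as np

# Toy 2-block problem: minimize 0.5||x-a||^2 + 0.5||z-b||^2  s.t.  x - z = 0.
# The optimum is x = z = (a + b) / 2.
a, b, rho = np.array([0.0, 4.0]), np.array([2.0, 0.0]), 1.0
x = z = y = np.zeros(2)                 # y: multiplier for the coupling x - z = 0

for _ in range(100):
    # x-step: argmin_x 0.5||x-a||^2 + y.(x-z) + (rho/2)||x-z||^2
    x = (a - y + rho * z) / (1.0 + rho)
    # z-step: argmin_z 0.5||z-b||^2 - y.z + (rho/2)||x-z||^2
    z = (b + y + rho * x) / (1.0 + rho)
    # dual ascent on the coupling constraint
    y = y + rho * (x - z)

print(x, z)                             # both near (a + b) / 2 = (1.0, 2.0)
```

Each pass touches one block at a time with the other fixed, which is exactly the subroutine structure the abstract assumes for the extended ADMM.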

## Iterative methods for convex proximal split feasibility problems and fixed point problems

### Afrika Matematika (2016-06-01) 27: 501-517

In this paper we prove a strong convergence result for the problem of finding a point in a real Hilbert space that minimizes a proper convex lower-semicontinuous function *f*, is a fixed point of a total asymptotically strict pseudocontractive mapping, and whose image under a bounded linear operator *A* minimizes another proper convex lower-semicontinuous function *g*. The proposed iterative scheme selects its step-size in a way that requires no prior information about the operator norm ||*A*||, since computing, or even estimating, ||*A*|| can be very difficult. Our result complements many recent and important results in this direction.
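One common way to obtain such a norm-free step-size in split feasibility problems is a Polyak-type rule τ = f(x)/‖∇f(x)‖² applied to f(x) = ½ dist(Ax, Q)², which never evaluates ‖A‖. The sketch below illustrates that rule on an invented box instance (it is not the paper's full scheme, which also handles the fixed-point component):

```python
import numpy as np

# Split feasibility sketch: find x in C with A x in Q, where
# C = [0,1]^2, Q = [1,2]^2, and A is as below (all invented for illustration).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
proj_Q = lambda y: np.clip(y, 1.0, 2.0)

def f_and_grad(x):
    r = A @ x - proj_Q(A @ x)           # residual of Ax with respect to Q
    return 0.5 * r @ r, A.T @ r         # f(x) = 0.5*dist(Ax,Q)^2 and its gradient

x = np.zeros(2)
for _ in range(5000):
    fx, g = f_and_grad(x)
    gg = g @ g
    if gg < 1e-16:                      # Ax already (numerically) in Q
        break
    x = proj_C(x - (fx / gg) * g)       # Polyak-type step: no ||A|| needed

fx, _ = f_and_grad(x)
print(x, fx)                            # fx near 0: Ax is (almost) in Q
```

The step-size adapts to the local residual instead of a global Lipschitz constant 1/‖A‖², which is what makes the scheme implementable when ‖A‖ is unknown.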

## On Existence and Solution Methods for Strongly Pseudomonotone Equilibrium Problems

### Vietnam Journal of Mathematics (2015-06-01) 43: 229-238

We study the equilibrium problems with strongly pseudomonotone bifunctions in real Hilbert spaces. We show the existence of a unique solution. We then propose a strongly convergent generalized projection method for equilibrium problems with strongly pseudomonotone bifunctions. The proposed method uses only one projection without requiring Lipschitz continuity. Application to variational inequalities is discussed.
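The shape of such a one-projection method with diminishing step-sizes and no Lipschitz assumption can be sketched on a toy strongly monotone variational inequality (strong monotonicity implies strong pseudomonotonicity; the instance and step-size sequence are invented for illustration):

```python
import numpy as np

# Toy VI: find x* in C with F(x*).(y - x*) >= 0 for all y in C,
# where F(x) = x - b is strongly monotone and C = [0,10]^2.
# The solution is x* = P_C(b); here b = (3, -1), so x* = (3, 0).
b = np.array([3.0, -1.0])
F = lambda x: x - b
proj_C = lambda x: np.clip(x, 0.0, 10.0)

x = np.zeros(2)
for k in range(2000):
    lam = 1.0 / (k + 2)            # diminishing steps: no Lipschitz constant used
    x = proj_C(x - lam * F(x))     # exactly one projection per iteration

print(x)                           # near (3, 0)
```

Because the steps are square-summable-free but diminishing, convergence holds without knowing any modulus of continuity of F, mirroring the "one projection, no Lipschitz continuity" feature highlighted in the abstract.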

## Deriving robust counterparts of nonlinear uncertain inequalities

### Mathematical Programming (2015-02-01) 149: 265-299

In this paper we provide a systematic way to construct the robust counterpart of a nonlinear uncertain inequality that is concave in the uncertain parameters. We use convex analysis (support functions, conjugate functions, Fenchel duality) and conic duality in order to convert the robust counterpart into an explicit and computationally tractable set of constraints. It turns out that to do so one has to calculate the support function of the uncertainty set and the concave conjugate of the nonlinear constraint function. Conveniently, these two computations are completely independent. This approach has several advantages. First, it provides an easy structured way to construct the robust counterpart both for linear and nonlinear inequalities. Second, it shows that for new classes of uncertainty regions and for new classes of nonlinear optimization problems tractable counterparts can be derived. We also study some cases where the inequality is nonconcave in the uncertain parameters.
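A minimal worked instance of this recipe — a standard example with notation of my choosing, not taken from the paper — is a linear constraint with ellipsoidal uncertainty, where the support function of the uncertainty set gives the robust counterpart in closed form:

```latex
% Uncertain constraint: a(\zeta)^T x \le b for all \zeta with \|\zeta\|_2 \le 1,
% where a(\zeta) = \bar a + P\zeta. Then
\[
\sup_{\|\zeta\|_2 \le 1} (\bar a + P\zeta)^T x
  \;=\; \bar a^T x + \sup_{\|\zeta\|_2 \le 1} \zeta^T P^T x
  \;=\; \bar a^T x + \|P^T x\|_2 ,
\]
% since the support function of the Euclidean unit ball is the Euclidean norm.
% The tractable (second-order-cone) robust counterpart is therefore
% \bar a^T x + \|P^T x\|_2 \le b.
```

Here the two ingredients the abstract identifies are visible and independent: the support function depends only on the uncertainty set, and the (concave) conjugate computation is trivial because the constraint is linear in the uncertain parameter.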

## Semidefinite Programming and Determinant Maximization

### Encyclopedia of Optimization (2001-01-01): 2302-2306

## Dual subgradient algorithms for large-scale nonsmooth learning problems

### Mathematical Programming (2014-12-01) 148: 143-180

“Classical” First Order (FO) algorithms of convex optimization, such as the Mirror Descent algorithm or Nesterov’s optimal algorithm of smooth convex optimization, are well known to have optimal (theoretical) complexity estimates which do not depend on the problem dimension. However, to attain this optimality, the domain of the problem should admit a “good proximal setup”. The latter essentially means that (1) the problem domain should satisfy certain geometric conditions of “favorable geometry”, and (2) the practical use of these methods is conditioned by our ability to compute the *proximal transformation* at a moderate cost at each iteration. More often than not these two conditions are satisfied in optimization problems arising in computational learning, which explains why proximal-type FO methods have recently become the methods of choice for solving various learning problems. Yet, they meet their limits in several important problems, such as multi-task learning with a large number of tasks, where the problem domain does not exhibit favorable geometry, and learning and matrix completion problems with a nuclear norm constraint, where the numerical cost of computing the proximal transformation becomes prohibitive in large-scale problems. We propose a novel approach to solving nonsmooth optimization problems arising in learning applications where a Fenchel-type representation of the objective function is available. The approach is based on applying FO algorithms to the dual problem and using the *accuracy certificates* supplied by the method to recover the primal solution. While suboptimal in terms of accuracy guarantees, the proposed approach does not rely upon a “good proximal setup” for the primal problem but requires the problem domain to admit a *Linear Optimization oracle*—the ability to efficiently maximize a linear form on the domain of the primal problem.
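The Linear Optimization oracle primitive — as opposed to a proximal operator — can be illustrated with a conditional-gradient (Frank–Wolfe) step over the ℓ1 ball, where the oracle merely inspects the largest gradient coordinate. This is plain Frank–Wolfe, not the authors' dual certificate scheme, and the instance is invented for illustration:

```python
import numpy as np

# LO oracle for the l1 ball {x : ||x||_1 <= 1}: minimizing the linear form g.x
# over the ball is attained at a signed coordinate vertex -- no projection needed.
def lo_oracle(g):
    i = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[i] = -np.sign(g[i])
    return s

# Minimize f(x) = 0.5||x - c||^2 over the l1 ball via conditional gradient.
c = np.array([0.3, 0.1])               # ||c||_1 <= 1, so the optimum is x = c
x = np.zeros(2)
for t in range(20000):
    g = x - c                          # gradient of f at x
    s = lo_oracle(g)                   # one linear optimization per iteration
    gamma = 2.0 / (t + 2)              # standard step-size schedule
    x = (1 - gamma) * x + gamma * s    # stay in the ball by convex combination

print(x)                               # near c = (0.3, 0.1)
```

The iterate remains feasible as a convex combination of oracle answers, so no proximal transformation over the ball is ever computed — the cheap-oracle trade-off that the abstract exploits on domains like the nuclear-norm ball.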