## SEARCH

#### Institution

##### (see all 838)

- University of Alberta (14)
- Technische Universität Berlin (12)
- Vienna University of Technology (12)
- Rice University (10)
- University of Tennessee (10)

#### Author

##### (see all 1307)

- Christensen, G. S. (11)
- Feichtinger, G. (10)
- Lenhart, Suzanne (9)
- Tröltzsch, Fredi (8)
- Kort, P. M. (7)

#### Subject

##### (see all 120)

- Mathematics [x] (747)
- Optimization (405)
- Calculus of Variations and Optimal Control; Optimization (382)
- Operations Research/Decision Theory (337)
- Applications of Mathematics (330)

Showing 1 to 10 of 747 matching Articles

## Singular Arcs in the Generalized Goddard’s Problem

### Journal of Optimization Theory and Applications 139: 439-461 (November 1, 2008)

We investigate variants of Goddard’s problem for nonvertical trajectories. The control is the thrust force, and the objective is to maximize a certain final cost, typically the final mass. Performing an analysis based on the Pontryagin maximum principle, we prove that optimal trajectories may involve singular arcs (along which the norm of the thrust is neither zero nor maximal), which we compute and characterize. Numerical simulations are carried out with both direct and indirect methods, demonstrating the relevance of taking singular arcs into account in the control strategy. The indirect method we use is based on our previous theoretical analysis and combines a shooting method with a homotopy method. The homotopy approach leads to a quadratic regularization of the problem and is a way to tackle the nonsmoothness of the optimal control.
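The shooting-plus-homotopy strategy in this abstract can be illustrated on a toy problem. Below, the "shooting function" residual, its coefficients, and the continuation schedule are all invented for illustration; the point is only the mechanism: march the homotopy parameter from the easy regularized problem to the target one, warm-starting Newton's method at each step.

```python
def shoot(p, lam):
    # Hypothetical shooting residual: the terminal-condition mismatch as a
    # function of the unknown initial costate p. lam interpolates between an
    # easy problem (lam = 0) and the target problem (lam = 1).
    return p**3 + p - 1 - 2.0 * lam

def newton(f, p0, tol=1e-12):
    # Scalar Newton iteration with a central-difference derivative.
    p, h = p0, 1e-7
    for _ in range(50):
        dp = f(p) / ((f(p + h) - f(p - h)) / (2 * h))
        p -= dp
        if abs(dp) < tol:
            break
    return p

# Homotopy: march lam from 0 to 1, warm-starting Newton at each step
# with the solution of the previous continuation step.
p = 0.0
for k in range(11):
    lam = k / 10
    p = newton(lambda q: shoot(q, lam), p)

print(shoot(p, 1.0))  # residual at the target problem, near zero
```

Warm-starting is what makes the continuation robust: each regularized problem's solution lies in the Newton basin of the next one.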

## Optimal Advertising and Pricing in a New-Product Adoption Model

### Journal of Optimization Theory and Applications 139: 351-360 (October 4, 2008)

A model of new-product adoption is proposed that incorporates price and advertising effects. An optimal control problem that uses the model as its dynamics is solved explicitly to obtain the optimal price and advertising effort over time. The model has great potential for obtaining solutions and insights in a variety of differential game settings.

## Study of a One-Dimensional Optimal Control Problem with a Purely State-Dependent Cost

### Differential Equations and Dynamical Systems: 1-19 (June 25, 2016)

A one-dimensional optimal control problem with a purely state-dependent cost and a unimodular integrand is considered. It is shown that, under some standard assumptions, this problem can be solved without the Pontryagin maximum principle, by simple methods of classical analysis based on the Chaplygin comparison theorem. In some modifications of the problem, however, the use of Pontryagin’s maximum principle is preferable. The optimal synthesis is obtained for the problem and for its modifications.

## Use of Augmented Lagrangian Methods for the Optimal Control of Obstacle Problems

### Journal of Optimization Theory and Applications 95: 101-126 (October 1, 1997)

We investigate optimal control problems governed by variational inequalities involving constraints on the control, and more precisely the example of the obstacle problem. In this paper, we discuss some augmented Lagrangian algorithms to compute the solution.
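The augmented Lagrangian iteration this abstract refers to can be sketched in a scalar toy setting. The objective, constraint, and penalty parameter below are invented; in the paper's setting the inner minimization is a variational-inequality solve, whereas here it is available in closed form.

```python
def augmented_lagrangian():
    # Toy problem: minimize (x - 2)^2 subject to x = 1.
    # Classical multiplier iteration: minimize the augmented Lagrangian
    #   (x - 2)^2 + mu*(x - 1) + (c/2)*(x - 1)^2
    # in x, then update the multiplier estimate mu.
    mu, c = 0.0, 10.0            # multiplier estimate, penalty parameter
    for _ in range(20):
        # Inner problem solved in closed form: set the derivative
        # 2*(x - 2) + mu + c*(x - 1) to zero.
        x = (4.0 - mu + c) / (2.0 + c)
        mu += c * (x - 1.0)      # first-order multiplier update
    return x, mu

x, mu = augmented_lagrangian()
print(x, mu)  # converges to the constrained minimizer and its multiplier
```

For this problem the multiplier error contracts by a factor 2/(2 + c) per sweep, so a moderate penalty already gives fast linear convergence without the ill-conditioning of a pure penalty method.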

## On the existence of sporadically catching-up optimal solutions for infinite-horizon optimal control problems

### Journal of Optimization Theory and Applications 53: 219-235 (May 1, 1987)

In this paper, we extend the existence theory of Brock and Haurie concerning the existence of sporadically catching-up optimal solutions for autonomous, infinite-horizon optimal control problems. This notion of optimality is one of a hierarchy of types of optimality that have appeared in the literature to deal with optimal control problems whose cost functionals, described by an improper integral, either diverge or are unbounded below. Our results rely on the now classical convexity and seminormality hypotheses due to Cesari and are weaker than those assumed in the work of Brock and Haurie. An example is presented where our results are applicable but those of the above-mentioned authors are not.

## Stochastic Maximum Principle for Optimal Control of SPDEs

### Applied Mathematics & Optimization 68: 181-217 (October 1, 2013)

We prove a version of the maximum principle, in the sense of Pontryagin, for the optimal control of a stochastic partial differential equation driven by a finite dimensional Wiener process. The equation is formulated in a semi-abstract form that allows direct applications to a large class of controlled stochastic parabolic equations. We allow for a diffusion coefficient dependent on the control parameter, and the space of control actions is general, so that in particular we need to introduce two adjoint processes. The second adjoint process takes values in a suitable space of operators on *L*^{4}.

## Direct methods with maximal lower bound for mixed-integer optimal control problems

### Mathematical Programming 118: 109-149 (April 1, 2009)

Many practical optimal control problems include discrete decisions. These may be either time-independent parameters or time-dependent control functions, such as gears or valves, that can take only discrete values at any given time. While great progress has been achieved in the solution of optimization problems involving integer variables, in particular mixed-integer linear programs, as well as in continuous optimal control problems, the combination of the two is still an open field of research. We consider the question of lower bounds that can be obtained by a relaxation of the integer requirements. For general nonlinear mixed-integer programs such lower bounds typically suffer from a huge integer gap. We convexify (with respect to binary controls) and relax the original problem and prove that the optimal solution of this continuous control problem yields the best lower bound for the nonlinear integer problem. Building on this theoretical result, we present a novel algorithm to solve mixed-integer optimal control problems, with a focus on discrete-valued control functions. Our algorithm is based on the direct multiple shooting method, an adaptive refinement of the underlying control discretization grid, and tailored heuristic integer methods. Its applicability is shown by a challenging application, the energy-optimal control of a subway train with discrete gears and velocity limits.
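One simple heuristic in the spirit of the "tailored heuristic integer methods" mentioned above is sum-up rounding: after solving the convexified relaxation, round the relaxed binary control so that its running integral tracks the relaxed one. The control values and grid below are made up for illustration; this is a sketch of the idea, not the paper's algorithm.

```python
def sum_up_rounding(alpha, dt):
    # Round a relaxed control alpha(t) in [0, 1], given on a uniform grid
    # of step dt, to a 0/1 control w whose running integral stays within
    # dt of the relaxed integral at every grid point.
    w = []
    integ_a, integ_w = 0.0, 0.0
    for a in alpha:
        integ_a += a * dt
        # Switch on exactly when the accumulated deficit exceeds half a cell.
        w_i = 1 if integ_a - integ_w >= 0.5 * dt else 0
        w.append(w_i)
        integ_w += w_i * dt
    return w

alpha = [0.2, 0.7, 0.9, 0.4, 0.1]   # hypothetical relaxed binary control
w = sum_up_rounding(alpha, dt=1.0)
print(w)  # a 0/1 control tracking the relaxed integral
```

Because the integral gap is bounded by the grid step, refining the control discretization grid (as the abstract's algorithm does adaptively) drives the rounded control's objective toward the relaxed lower bound.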

## Augmented Lagrangian method for distributed optimal control problems with state constraints

### Journal of Optimization Theory and Applications 78: 493-521 (September 1, 1993)

We consider state-constrained optimal control problems governed by elliptic equations. Under Slater-like assumptions, Lagrange multipliers are known to exist for such problems, and we propose a decoupled augmented Lagrangian method. We present the algorithm with a simple example of a distributed control problem.

## An analytic approach to stochastic Volterra equations with completely monotone kernels

### Journal of Evolution Equations 9: 315-339 (June 1, 2009)

We apply the semigroup setting of Desch and Miller to a class of stochastic integral equations of Volterra type with completely monotone kernels with a multiplicative noise term; the corresponding equation is an infinite dimensional stochastic equation with unbounded diffusion operator that we solve with the semigroup approach of Da Prato and Zabczyk. As a motivation of our results, we study an optimal control problem when the control enters the system together with the noise.

## Dynamic programming using singular perturbations

### Journal of Optimization Theory and Applications 38: 221-230 (October 1, 1982)

The singular perturbation method is used in dynamic programming to reduce the order and the computational requirements of linear systems composed of slow and fast modes. After the fast modes are separated, a near-optimum solution is computed at two different iteration rates determined by the slow and fast subsystem dynamics. The result is a reduction of the computational requirements of the given system to those of the slow subsystem.
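The slow/fast separation described above can be sketched numerically: replace the fast mode by its quasi-steady-state value and integrate only the slow subsystem, at a coarser step than the full stiff system requires. The two-state linear system, rates, and input below are invented for illustration, not taken from the paper.

```python
def simulate_full(eps, dt=1e-4, T=2.0):
    # Full two-timescale system: slow state x, fast state z (eps small).
    #   x' = -x + z,   eps * z' = -z + u,   with constant input u.
    x, z, u = 1.0, 0.0, 0.5
    for _ in range(int(T / dt)):      # small step needed for the fast mode
        x, z = x + dt * (-x + z), z + dt * (-z + u) / eps
    return x

def simulate_reduced(dt=1e-3, T=2.0):
    # Reduced slow model: the fast mode is held at its quasi-steady state
    # z = u, so only x' = -x + u remains, integrable at a 10x coarser step.
    x, u = 1.0, 0.5
    for _ in range(int(T / dt)):
        x += dt * (-x + u)
    return x

full = simulate_full(eps=0.01)
red = simulate_reduced()
print(full, red)  # the reduced model tracks the full slow state closely
```

The reduced model incurs an O(eps) error from the neglected boundary layer, while cutting both the state dimension and the step-count needed for stability, which is the computational saving the abstract points to.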