## SEARCH

#### Institution

##### (top 5 of 2876)

- Rutgers University (58)
- Georgia Institute of Technology (43)
- University of Florida (41)
- Carnegie Mellon University (34)
- Université de Montréal (31)

#### Author

##### (top 5 of 8969)

- Glover, Fred (22)
- Gendreau, Michel (18)
- Drezner, Zvi (14)
- Berman, Oded (12)
- Burke, Edmund K. (12)

#### Subject

- Combinatorics (4167)
- Economics / Management Science (4167)
- Theory of Computation (4167)
- Operations Research/Decision Theory (3291)
- Operation Research/Decision Theory (876)

## CURRENTLY DISPLAYING

Showing 1 to 10 of 4167 matching articles

## Selecting a quality control attribute sample: An information-economics method

### Annals of Operations Research (1999-01-01) 91: 83-104

The information-economics approach to assessing the value of information differs from the statistical approach. The statistical approach focuses on determining the probabilities of type I and type II errors, while the information-economics approach focuses on maximizing the expected monetary value of the whole process. This attitude is the basis for models of sequential decision processes, especially Markov decision processes (MDPs) and partially observed Markov decision processes (POMDPs). However, as in traditional single-sampling models, the sample size and sampling costs are not treated as decision variables in a cost-effective manner. This paper uses a well-known information-economics model, the Information Structure Model, to determine the optimal sample size and decision rule in QC single-sampling problems. The method uses rough information about the costs of type I and type II errors and other parameters of the sampling problem. Decision makers can apply it to decide whether to use a QC sample and to determine the optimal QC plan so as to maximize the long-range expected monetary value of sampling gained by the firm. An algorithm for single-sampling plan determination is presented toward the end of the paper. Applications to double-sampling or sequential-sampling problems need further research.
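
The cost-effective choice of sample size described in this abstract can be illustrated with a toy single-sampling calculation. This is a minimal sketch, not the paper's Information Structure Model: every parameter name (`p_good`, `q_good`, `cost_accept_bad`, ...) is hypothetical, and the expected monetary value is reduced to a simple trade-off between sampling cost and the costs of type I and type II errors.

```python
from math import comb

def best_single_sampling_plan(p_good, q_good, q_bad, cost_sample,
                              cost_reject_good, cost_accept_bad, n_max=20):
    """Search all (sample size n, acceptance number c) plans with n <= n_max
    and return the one minimising expected cost.  All names are illustrative."""
    def accept_prob(c, n, q):
        # P(at most c defectives in a sample of n when the defect rate is q)
        return sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(c + 1))

    best = None
    for n in range(n_max + 1):
        for c in range(n + 1):
            exp_cost = (n * cost_sample
                        + p_good * (1 - accept_prob(c, n, q_good)) * cost_reject_good  # type I
                        + (1 - p_good) * accept_prob(c, n, q_bad) * cost_accept_bad)   # type II
            if best is None or exp_cost < best[0]:
                best = (exp_cost, n, c)
    return best  # (expected cost, sample size n, acceptance number c)
```

With cheap sampling and an expensive type II error, the search returns a nonzero sample size; the plan `n = 0, c = 0` (accept without sampling) is included in the search as a baseline.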

## Algorithms for solving nonlinear dynamic decision models

### Annals of Operations Research (1993-06-01) 44: 115-142

In this paper we discuss two Newton-type algorithms for solving economic models. The models are preprocessed by reordering the equations in order to minimize the dimension of the simultaneous block. The solution algorithms are then applied to this block. The algorithms evaluate numerically, as required, selected columns of the Jacobian of the simultaneous part. Provisions also exist for similar systems to be solved, if possible, without actually reinitialising the Jacobian. One of the algorithms also uses the Broyden update to improve the Jacobian. Global convergence is maintained by an Armijo-type stepsize strategy.

The global and local convergence of the quasi-Newton algorithm is discussed. A novel result is established on convergence under relaxed descent directions, relating the achievement of unit stepsizes to the accuracy of the Jacobian approximation. Furthermore, a simple derivation of the Dennis-Moré characterisation of the Q-superlinear convergence rate is given.

The model equation reordering algorithm is also described. The model is reordered to define heart and loop variables. This reordering is also applied recursively to the subgraph formed by the loop variables to reduce the total number of above-diagonal elements in the Jacobian of the complete system. The extension of the solution algorithms to consistent expectations is discussed. The algorithms are compared with Gauss-Seidel SOR algorithms using the USA and Spanish models of the OECD Interlink system.
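
The combination of a Broyden update with an Armijo-type stepsize mentioned in this abstract can be sketched as a generic quasi-Newton solver. This is a minimal illustration assuming a finite-difference initial Jacobian; it is not the paper's preprocessed, reordered algorithm.

```python
import numpy as np

def broyden_armijo(f, x0, tol=1e-10, max_iter=50):
    """Quasi-Newton solver for f(x) = 0: finite-difference initial Jacobian,
    Broyden rank-one updates, and an Armijo-type backtracking stepsize."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    B = np.empty((n, n))
    h = 1e-7
    for j in range(n):                      # one-off numerical Jacobian
        e = np.zeros(n)
        e[j] = h
        B[:, j] = (f(x + e) - fx) / h
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        d = np.linalg.solve(B, -fx)         # Newton-type direction
        t, m0 = 1.0, np.linalg.norm(fx)
        # Armijo-type strategy: halve t until the residual norm decreases enough.
        while np.linalg.norm(f(x + t * d)) > (1 - 1e-4 * t) * m0 and t > 1e-12:
            t *= 0.5
        s = t * d
        x_new = x + s
        f_new = f(x_new)
        # Broyden update: rank-one correction so that B @ s == f_new - fx.
        B += np.outer((f_new - fx) - B @ s, s) / (s @ s)
        x, fx = x_new, f_new
    return x
```

The Broyden update avoids re-evaluating the Jacobian at every iteration, which is the point of the paper's "without actually reinitialising the Jacobian" provision.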

## Subjective expected utility with nonincreasing risk aversion

### Annals of Operations Research (1989-12-01) 19: 219-228

It is shown that assumptions about risk aversion, usually studied under the pre-supposition of expected utility maximization, have a surprising extra merit at an earlier stage of the measurement work: together with the sure-thing principle, these assumptions imply subjective expected utility maximization for monotonic continuous weak orders.

## Testing successive regression approximations by large-scale two-stage problems

### Annals of Operations Research (2011-06-01) 186: 83-99

A heuristic procedure called successive regression approximations (*SRA*) has been developed for solving stochastic programming problems, ranging from equation solving to probabilistic constrained and two-stage models, through a combined model of Prékopa. We show here that, due to enhancements in the computer program, *SRA* can be used to solve large-scale two-stage problems with 100 first-stage decision variables and a 120-dimensional normally distributed random right-hand-side vector in the second-stage problem. A FORTRAN source program and computational results for 124 problems are presented at www.uni-corvinus.hu/~ideak1.

## Financial scenario generation for stochastic multi-stage decision processes as facility location problems

### Annals of Operations Research (2007-07-01) 152: 257-272

The quality of multi-stage stochastic optimization models as they appear in asset liability management, energy planning, transportation, supply chain management, and other applications depends heavily on the quality of the underlying scenario model, describing the uncertain processes influencing the profit/cost function, such as asset prices and liabilities, the energy demand process, demand for transportation, and the like. A common approach to generate scenarios is based on estimating an unknown distribution and matching its moments with moments of a discrete scenario model. This paper demonstrates that the problem of finding valuable scenario approximations can be viewed as the problem of optimally approximating a given distribution with some distance function. We show that for Lipschitz continuous cost/profit functions it is best to employ the Wasserstein distance. The resulting optimization problem can be viewed as a multi-dimensional facility location problem, for which good heuristic algorithms exist. For multi-stage problems, a scenario tree is constructed as a nested facility location problem. Numerical convergence results for financial mean-risk portfolio selection conclude the paper.
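
The facility-location view of scenario generation can be illustrated with a toy swap-based local search: the kept scenarios act as facilities, each sample is transported to its nearest one, and the objective is the empirical 1-Wasserstein (k-median) cost. This is a minimal one-dimensional sketch under those assumptions, not the paper's algorithm.

```python
import random

def scenario_reduction(samples, k, iters=200, seed=0):
    """Swap-based local search for k scenarios minimising the empirical
    1-Wasserstein distance to the sample distribution -- a k-median /
    facility-location cost: each sample is moved to its nearest scenario."""
    rng = random.Random(seed)

    def cost(centres):
        return sum(min(abs(s - c) for c in centres) for s in samples) / len(samples)

    current = rng.sample(samples, k)
    best = cost(current)
    for _ in range(iters):                  # exchange (swap) heuristic
        cand = list(current)
        cand[rng.randrange(k)] = rng.choice(samples)
        c = cost(cand)
        if c < best:
            current, best = cand, c
    # scenario probability = share of samples assigned to it
    probs = [0.0] * k
    for s in samples:
        probs[min(range(k), key=lambda i: abs(s - current[i]))] += 1 / len(samples)
    return current, probs, best
```

The returned probabilities make the reduced set a discrete distribution, which is what a scenario tree node needs.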

## Monotone Methods for Markovian Equilibrium in Dynamic Economies

### Annals of Operations Research (2002-08-01) 114: 117-144

In this paper, we provide an overview of an emerging class of “monotone map methods” in analyzing distorted equilibrium in dynamic economies. In particular, we focus on proving the existence and characterization of competitive equilibrium in non-optimal versions of the optimal growth models. We suggest two alternative methods: an Euler equation method for a smooth, strongly concave environment, and a value function method for a non-smooth supermodular environment. We are able to extend this analysis to study models that allow for unbounded growth or a labor–leisure choice.
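
The value-function method mentioned in this abstract can be illustrated on the standard (undistorted) optimal-growth benchmark, where the Bellman operator is a monotone map. A minimal sketch assuming log utility and Cobb-Douglas production, neither of which is taken from the paper:

```python
import math

def value_iteration(beta=0.95, alpha=0.3, n=51, iters=200):
    """Value-function iteration for a deterministic optimal-growth model with
    u(c) = log(c) and f(k) = k**alpha -- a textbook benchmark illustrating the
    monotone Bellman operator, not the distorted economies of the paper."""
    grid = [0.05 + 0.95 * i / (n - 1) for i in range(n)]  # capital grid
    V = [0.0] * n
    for _ in range(iters):
        # Bellman operator: V(k) = max over feasible k' of u(f(k) - k') + beta V(k')
        V = [max(math.log(k ** alpha - kp) + beta * V[j]
                 for j, kp in enumerate(grid) if kp < k ** alpha)
             for k in grid]
    return grid, V
```

Monotonicity is visible in the output: the operator maps increasing functions to increasing functions, so the computed value function is increasing in capital.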

## Optimal open loop cheating in dynamic reversed Linear-Quadratic Stackelberg games

### Annals of Operations Research (1999-01-01) 88: 217-232

The distinctive characteristic of a "Reversed Stackelberg Game" is that the leader plays twice: first by announcing his future action, second by implementing a possibly different action given the follower's reaction to his announcement. In such a game, if the leader uses the normal Stackelberg solution to find (and announce) his optimal strategy, there is a strong temptation for him to cheat, that is, to implement an action other than the one announced. In this paper, within the framework of a standard discrete-time Linear-Quadratic Dynamic Reversed Stackelberg game, we discuss and derive the best possible open-loop cheating strategy for an unscrupulous leader.
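
The gap between honest announcement and optimal cheating can be seen in a toy static quadratic game. The reaction function and cost below are illustrative inventions, not the paper's dynamic Linear-Quadratic model:

```python
def cheat_example():
    """Toy static 'reversed Stackelberg' game with illustrative quadratic costs."""
    follower = lambda a: a / 2                        # reaction to announcement a
    J = lambda x, y: (x - 1) ** 2 + (y - 1) ** 2      # leader's cost (to minimise)

    grid = [a / 100 for a in range(-300, 301)]        # announcement grid

    # Honest play: announce a, then actually implement a.
    honest_a = min(grid, key=lambda a: J(a, follower(a)))
    honest_cost = J(honest_a, follower(honest_a))

    # Cheating: announce a purely to steer the follower, then implement the
    # best action given the follower's committed reaction (here x = 1).
    cheat_a = min(grid, key=lambda a: J(1.0, follower(a)))
    cheat_cost = J(1.0, follower(cheat_a))
    return honest_cost, cheat_cost
```

The cheating leader announces a larger action than he intends to play, solely to move the follower's reaction, and always does at least as well as the honest Stackelberg leader.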

## Building the Additive Utility Functions for CAD-UFRJ Evaluation Staff Criteria

### Annals of Operations Research (2002-10-01) 116: 271-288

This paper presents an application of the UTA method for building utility functions for the evaluation criteria defined by the Staff Evaluation Commission (CAD) of the Rio de Janeiro Federal University (UFRJ). Every year, the CAD-UFRJ gives the staff evaluation results for each Postgraduate Engineering Programme. However, the method used to generate the staff evaluation is assumed unknown. To uncover the CAD-UFRJ preference structure, the UTA method is applied to the evaluation results supplied by CAD-UFRJ. Some additional information obtained from the CAD-UFRJ data is incorporated in the optimal solutions analysis.
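
The additive utility model underlying UTA sums piecewise-linear marginal utilities across criteria; that evaluation step can be sketched as below. In UTA proper the breakpoint values are decision variables of a linear program fitted to the decision maker's ranking; the numbers used here are illustrative only.

```python
import bisect

def make_marginal(breakpoints, values):
    """Piecewise-linear marginal utility u_i interpolating (breakpoint, value)
    pairs.  In UTA proper, `values` come out of an LP; here they are
    illustrative constants."""
    def u(g):
        if g <= breakpoints[0]:
            return values[0]
        if g >= breakpoints[-1]:
            return values[-1]
        j = bisect.bisect_right(breakpoints, g) - 1
        t = (g - breakpoints[j]) / (breakpoints[j + 1] - breakpoints[j])
        return values[j] + t * (values[j + 1] - values[j])
    return u

def additive_utility(marginals, scores):
    """Global utility U(a) = sum_i u_i(g_i(a)) of the additive model."""
    return sum(u(g) for u, g in zip(marginals, scores))
```

Ranking alternatives by this global utility is what lets the fitted model be compared against the observed CAD-UFRJ ordering.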

## Data, expert knowledge, and decisions: an introduction to the volume

### Annals of Operations Research (1995-02-01) 55: 1-7

## Interactive bicriterion decision support for a large scale industrial scheduling system

### Annals of Operations Research (2015-04-01) 227: 45-61

In this paper we develop an interactive decision analysis approach to treat a large-scale bicriterion integer programming problem, addressing a real-world assembly line scheduling problem of a manufacturing company. This company periodically receives a set of orders for the production of specific items (jobs) through a number of specialised production (assembly) lines. The paper presents a non-compensatory approach based on an interactive implementation of the *ε*-constraint method that enables the decision maker to achieve a satisfactory goal for each objective separately. In fact, the method generates and evaluates a large number of non-dominated solutions that constitute a representative sample of the criteria ranges. Experience with a specific numerical example shows the efficiency and usefulness of the proposed model in solving large-scale bicriterion industrial integer programming problems, highlighting at the same time the modelling limitations.
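
The *ε*-constraint idea described in this abstract can be illustrated on a tiny bicriterion 0/1 knapsack, enumerating feasible solutions by brute force. This is a sketch of the method's mechanics; the paper's large-scale implementation solves integer programs rather than enumerating.

```python
from itertools import product

def epsilon_constraint(values1, values2, weights, capacity):
    """Non-dominated points of a bicriterion 0/1 knapsack via the
    epsilon-constraint method: maximise f1 subject to f2 >= eps, sweeping
    eps over the attainable values of f2.  Brute-force enumeration."""
    items = range(len(weights))

    def f1(x):
        return sum(x[i] * values1[i] for i in items)

    def f2(x):
        return sum(x[i] * values2[i] for i in items)

    feasible = [x for x in product((0, 1), repeat=len(weights))
                if sum(x[i] * weights[i] for i in items) <= capacity]
    pareto = set()
    for eps in sorted({f2(x) for x in feasible}):
        best = max((x for x in feasible if f2(x) >= eps), key=f1)
        pareto.add((f1(best), f2(best)))
    # discard points left dominated by ties in the sweep
    return sorted(p for p in pareto
                  if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in pareto))
```

Sweeping ε over the attainable range of the constrained objective is what produces the "representative sample of the criteria ranges" the abstract mentions.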