## SEARCH

#### Institution

##### (see all 3533)

- Rutgers University (67)
- University of Florida (53)
- Georgia Institute of Technology (46)
- Carnegie Mellon University (37)
- University of Toronto (36)

#### Author

##### (see all 10798)

- Glover, Fred (23)
- Drezner, Zvi (18)
- Gendreau, Michel (18)
- Liang, Liang (16)
- Pardalos, Panos M. (15)

#### Subject

##### (see all 6)

- Combinatorics (5048)
- Theory of Computation (5048)
- Economics / Management Science (4167)
- Operations Research/Decision Theory (3298)
- Operation Research/Decision Theory (1750)

## CURRENTLY DISPLAYING

Showing 1 to 10 of 5048 matching articles

## Profit maximization and reduction of the cannibalization effect in chain expansion

### Annals of Operations Research (2016-11-01) 246: 57-75

We consider the facility location problem for an expanding chain that competes with other chains offering the same goods or services in a geographical area. Customers are assumed to patronize the facility offering them maximum utility, and facilities in the expanding chain may have different owners. We first use the weighted method to develop an integer linear programming model that obtains Pareto-optimal locations with respect to the inner competition between the owners of the old facilities and the owners of the new facilities. This model is then applied to maximizing the profit of the expanding chain, taking into account the loss in market share of its old facilities caused by the entry of the new facilities (the cannibalization effect). A study with data on Spanish municipalities shows that the cannibalization effect can be significantly reduced by sacrificing a small portion of profit.
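
The weighted-sum scalarization described in this abstract can be sketched on a toy instance. The site names, profits, and cannibalization losses below are invented, and brute-force enumeration stands in for the paper's integer linear program:

```python
from itertools import combinations

# Invented per-site data: profit earned by new facilities at each candidate
# site, and the market value the chain's old facilities lose if that site
# is opened (cannibalization).
new_profit = {"A": 10.0, "B": 7.0, "C": 5.0}
cannibalized = {"A": 4.0, "B": 1.0, "C": 0.5}

def weighted_best(sites, k, lam):
    """Weighted-sum scalarization of the bi-objective problem:
    maximize lam * (new profit) - (1 - lam) * (cannibalization),
    opening at most k sites.  Brute force stands in for the ILP."""
    best, best_val = (), float("-inf")
    for r in range(k + 1):
        for subset in combinations(sorted(sites), r):
            val = (lam * sum(new_profit[s] for s in subset)
                   - (1 - lam) * sum(cannibalized[s] for s in subset))
            if val > best_val:
                best, best_val = subset, val
    return best, best_val

# Sweeping lam traces out (weakly) Pareto-optimal site sets, trading
# new-facility profit against cannibalization of the old facilities.
for lam in (0.3, 0.5, 0.9):
    print(lam, weighted_best(new_profit, 2, lam))
```

A small cannibalization weight already changes which sites open, which is the trade-off the paper's Pareto analysis explores.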

## Selecting a quality control attribute sample: An information-economics method

### Annals of Operations Research (1999-01-01) 91: 83-104

The information-economics approach to assessing the value of information is different from the statistical approach. The statistical approach focuses on determining the probabilities of type I and II errors, while the information-economics approach focuses on maximizing the expected monetary value of the whole process. This attitude is the basis for the models of sequential decision processes, especially Markov decision processes (MDP) or partially observed Markov decision processes (POMDP). However, as in traditional single-sampling models, the sample size and sampling costs are not treated as decision variables in a cost-effective manner. This paper uses a well-known information-economics model, the Information Structure Model, to determine the optimal sample size and decision rule in QC single-sampling problems. The method uses rough information about the costs of type I and II errors and other parameters of the sampling problem. That method can be applied by decision makers to decide whether to use a QC sample and to determine the optimal QC plan in order to maximize the long-range expected monetary value of sampling gained by the firm. An algorithm for single-sampling plan determination is presented toward the end of the paper. Applications to double-sampling or sequential-sampling problems need further research.
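
The cost-based view of sampling that this abstract contrasts with the statistical view can be illustrated with a small sketch: pick the single-sampling plan (n, c) minimizing expected cost, combining the per-unit sampling cost with the costs of type I and type II errors under a two-point prior on lot quality. This is not the paper's Information Structure Model, and all parameter values are illustrative:

```python
from math import comb

def accept_prob(p, n, c):
    """P(accept the lot): at most c defectives in a binomial sample of n."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c + 1))

def expected_cost(n, c, p_good=0.01, p_bad=0.08, prior_good=0.9,
                  cost_type1=50.0, cost_type2=200.0, cost_per_unit=0.2):
    """Expected cost of plan (n, c): sampling cost plus the expected
    costs of a type I error (rejecting a good lot) and a type II error
    (accepting a bad lot).  All parameter values are invented."""
    type1 = 1 - accept_prob(p_good, n, c)   # reject a good lot
    type2 = accept_prob(p_bad, n, c)        # accept a bad lot
    return (n * cost_per_unit
            + prior_good * type1 * cost_type1
            + (1 - prior_good) * type2 * cost_type2)

# A grid search over plans stands in for the paper's algorithm; n = 0
# (no sampling) is the benchmark any plan must beat.
best = min(((n, c) for n in range(0, 101, 5) for c in range(4)),
           key=lambda nc: expected_cost(*nc))
```

Treating n and c as decision variables in the monetary objective, rather than fixing error probabilities in advance, is exactly the shift of perspective the abstract describes.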

## Algorithms for solving nonlinear dynamic decision models

### Annals of Operations Research (1993-06-01) 44: 115-142

In this paper we discuss two Newton-type algorithms for solving economic models. The models are preprocessed by reordering the equations in order to minimize the dimension of the simultaneous block. The solution algorithms are then applied to this block. The algorithms evaluate numerically, as required, selected columns of the Jacobian of the simultaneous part. Provisions also exist for similar systems to be solved, if possible, without actually reinitialising the Jacobian. One of the algorithms also uses the Broyden update to improve the Jacobian. Global convergence is maintained by an Armijo-type stepsize strategy.

The global and local convergence of the quasi-Newton algorithm is discussed. A novel result is established for convergence under relaxed descent directions, relating the achievement of unit stepsizes to the accuracy of the Jacobian approximation. Furthermore, a simple derivation of the Dennis-Moré characterisation of the Q-superlinear convergence rate is given.

The model equation reordering algorithm is also described. The model is reordered to define heart and loop variables. This reordering is also applied recursively to the subgraph formed by the loop variables to reduce the total number of above-diagonal elements in the Jacobian of the complete system. The extension of the solution algorithms to consistent expectations is discussed. The algorithms are compared with Gauss-Seidel SOR algorithms using the USA and Spanish models of the OECD Interlink system.
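
The quasi-Newton scheme in this abstract can be sketched generically for a two-variable system: a finite-difference initial Jacobian, Broyden rank-one updates, and an Armijo backtracking line search on the merit function ||F(x)||². The toy system is invented, and the code is a sketch of the algorithm family, not the paper's implementation:

```python
def solve2(B, b):
    """Solve the 2x2 linear system B x = b by Cramer's rule."""
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [(b[0] * B[1][1] - b[1] * B[0][1]) / det,
            (B[0][0] * b[1] - B[1][0] * b[0]) / det]

def broyden_armijo(F, x0, tol=1e-10, max_iter=50):
    """Quasi-Newton solution of F(x) = 0 in two variables."""
    x = list(x0)
    fx = F(x)
    h = 1e-7
    B = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):                       # numerical Jacobian columns
        xp = list(x)
        xp[j] += h
        fp = F(xp)
        for i in range(2):
            B[i][j] = (fp[i] - fx[i]) / h
    for _ in range(max_iter):
        m0 = fx[0] ** 2 + fx[1] ** 2
        if m0 < tol ** 2:
            break
        d = solve2(B, [-fx[0], -fx[1]])
        t = 1.0                              # Armijo backtracking on ||F||^2
        while t > 1e-10:
            xt = [x[0] + t * d[0], x[1] + t * d[1]]
            ft = F(xt)
            if ft[0] ** 2 + ft[1] ** 2 <= (1 - 1e-4 * t) * m0:
                break
            t *= 0.5
        s = [xt[0] - x[0], xt[1] - x[1]]     # accepted step
        y = [ft[0] - fx[0], ft[1] - fx[1]]
        ss = s[0] ** 2 + s[1] ** 2
        Bs = [B[0][0] * s[0] + B[0][1] * s[1],
              B[1][0] * s[0] + B[1][1] * s[1]]
        for i in range(2):                   # Broyden: B += (y - Bs) s^T / (s.s)
            for j in range(2):
                B[i][j] += (y[i] - Bs[i]) * s[j] / ss
        x, fx = xt, ft
    return x

# Invented toy system with a root at (1, 2):
#   x^2 + y - 3 = 0,  x + y^2 - 5 = 0
F = lambda v: [v[0] ** 2 + v[1] - 3.0, v[0] + v[1] ** 2 - 5.0]
root = broyden_armijo(F, [1.0, 1.0])
```

The Broyden update reuses information from each accepted step instead of re-evaluating the Jacobian, which is the economy the paper's provisions for "similar systems" exploit at model scale.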

## Subjective expected utility with nonincreasing risk aversion

### Annals of Operations Research (1989-12-01) 19: 219-228

It is shown that assumptions about risk aversion, usually studied under the pre-supposition of expected utility maximization, have a surprising extra merit at an earlier stage of the measurement work: together with the sure-thing principle, these assumptions imply subjective expected utility maximization for monotonic continuous weak orders.

## Testing successive regression approximations by large-scale two-stage problems

### Annals of Operations Research (2011-06-01) 186: 83-99

A heuristic procedure called successive regression approximations (*SRA*) has been developed for solving stochastic programming problems, ranging from equation solving to probabilistic constrained and two-stage models, through a combined model of Prékopa. We show here that, due to enhancements in the computer program, *SRA* can be used to solve large-scale two-stage problems with 100 first-stage decision variables and a 120-dimensional normally distributed random right-hand-side vector in the second-stage problem. A FORTRAN source program and computational results for 124 problems are presented at www.uni-corvinus.hu/~ideak1.

## Financial scenario generation for stochastic multi-stage decision processes as facility location problems

### Annals of Operations Research (2007-07-01) 152: 257-272

The quality of multi-stage stochastic optimization models as they appear in asset liability management, energy planning, transportation, supply chain management, and other applications depends heavily on the quality of the underlying scenario model, describing the uncertain processes influencing the profit/cost function, such as asset prices and liabilities, the energy demand process, demand for transportation, and the like. A common approach to generate scenarios is based on estimating an unknown distribution and matching its moments with moments of a discrete scenario model. This paper demonstrates that the problem of finding valuable scenario approximations can be viewed as the problem of optimally approximating a given distribution with some distance function. We show that for Lipschitz continuous cost/profit functions it is best to employ the Wasserstein distance. The resulting optimization problem can be viewed as a multi-dimensional facility location problem, for which at least good heuristic algorithms exist. For multi-stage problems, a scenario tree is constructed as a nested facility location problem. Numerical convergence results for financial mean-risk portfolio selection conclude the paper.
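
The facility-location view of scenario approximation described above can be sketched with a simple greedy k-median heuristic: choose k representative scenarios minimizing the total distance from each sample to its nearest representative, then weight each representative by the probability mass it serves. This is one generic heuristic of the kind the abstract alludes to, not the paper's algorithm, and the return scenarios below are invented:

```python
import random

def dist(a, b):
    """Euclidean distance between two scenario vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def scenario_reduce(samples, k, seed=0):
    """Greedy k-median heuristic: pick k representative scenarios that
    minimize total distance from each sample to its nearest representative
    (the facility-location form of the approximation problem), then weight
    each representative by the share of samples it serves."""
    rng = random.Random(seed)
    reps = [rng.choice(samples)]
    while len(reps) < k:
        best, best_cost = None, float("inf")
        for cand in samples:          # greedily open one more "facility"
            cost = sum(min(dist(s, r) for r in reps + [cand])
                       for s in samples)
            if cost < best_cost:
                best, best_cost = cand, cost
        reps.append(best)
    probs = [0.0] * k
    for s in samples:
        probs[min(range(k), key=lambda i: dist(s, reps[i]))] += 1 / len(samples)
    return reps, probs

# Invented daily return scenarios for two assets.
samples = [(0.010, 0.020), (0.012, 0.019), (-0.030, -0.010),
           (-0.028, -0.012), (0.000, 0.000), (0.001, -0.001)]
reps, probs = scenario_reduce(samples, 2)
```

The objective minimized here (total transport distance to the chosen support points) is precisely the quantity a Wasserstein-type distance measures between the empirical distribution and its discrete approximation.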

## Monotone Methods for Markovian Equilibrium in Dynamic Economies

### Annals of Operations Research (2002-08-01) 114: 117-144

In this paper, we provide an overview of an emerging class of “monotone map methods” in analyzing distorted equilibrium in dynamic economies. In particular, we focus on proving the existence and characterization of competitive equilibrium in non-optimal versions of the optimal growth models. We suggest two alternative methods: an Euler equation method for a smooth, strongly concave environment, and a value function method for a non-smooth supermodular environment. We are able to extend this analysis to study models that allow for unbounded growth or a labor–leisure choice.
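
For the benchmark (undistorted) optimal growth model that these methods extend, the value-function approach rests on the Bellman operator being a monotone contraction; a minimal value-iteration sketch with log utility and Cobb-Douglas production, all parameter values illustrative:

```python
import math

def bellman_update(V, grid, alpha=0.36, beta=0.95):
    """One application of the Bellman operator for the optimal growth
    model with log utility and production y = k**alpha.  The operator
    is a monotone map: V <= W pointwise implies T V <= T W."""
    newV = []
    for k in grid:
        y = k ** alpha
        # Choose next-period capital kp from the grid, consuming y - kp.
        newV.append(max(math.log(y - kp) + beta * V[j]
                        for j, kp in enumerate(grid) if kp < y))
    return newV

# Illustrative capital grid; iterate the monotone contraction toward
# its fixed point, the value function.
grid = [0.05 * i for i in range(1, 40)]
V = [0.0] * len(grid)
for _ in range(200):
    V = bellman_update(V, grid)
```

In the non-optimal economies the paper studies, the welfare theorems fail and this operator must be replaced by monotone maps on equilibrium objects, but the fixed-point logic iterated here is the common thread.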

## Optimal open loop cheating in dynamic reversed Linear-Quadratic Stackelberg games

### Annals of Operations Research (1999-01-01) 88: 217-232

The distinctive characteristic of a “Reversed Stackelberg Game” is that the leader plays twice, first by announcing his future action, second by implementing a possibly different action given the follower's reaction to his announcement. In such a game, if the leader uses the normal Stackelberg solution to find (and announce) his optimal strategy, there is a strong temptation for him to cheat, that is, to implement another action than the one announced. In this paper, within the framework of a standard discrete time Linear-Quadratic Dynamic Reversed Stackelberg game, we discuss and derive the best possible open-loop cheating strategy for an unscrupulous leader.

## Building the Additive Utility Functions for CAD-UFRJ Evaluation Staff Criteria

### Annals of Operations Research (2002-10-01) 116: 271-288

This paper presents an application of the UTA method for building utility functions for the evaluation criteria defined by the Staff Evaluation Commission (CAD) of the Rio de Janeiro Federal University (UFRJ). Every year, the CAD-UFRJ publishes staff evaluation results for each Postgraduate Engineering Programme, but the method used to generate these evaluations is unknown. To uncover the CAD-UFRJ preference structure, the evaluation results supplied by CAD-UFRJ are used as input to the UTA method. Some additional information obtained from the CAD-UFRJ data is incorporated into the analysis of the optimal solutions.

## Hybrid constructive heuristics for the critical node problem

### Annals of Operations Research (2016-03-01) 238: 637-649

We consider the *Critical Node Problem*: given an undirected graph and an integer *K*, at most *K* nodes have to be deleted from the graph in order to minimize a connectivity measure in the residual graph. We combine the basic steps used in common greedy algorithms with some flavour of local search to obtain simple hybrid heuristic algorithms. The resulting algorithms are shown to be effective, delivering improved performance, in both solution quality and speed, with respect to known greedy algorithms and other more sophisticated state-of-the-art methods.
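
A plain greedy heuristic of the kind such hybrids build on can be sketched as follows, using pairwise connectivity (the number of still-connected node pairs) as the measure to minimize; the example graph is invented:

```python
def components(nodes, adj, removed):
    """Connected components of the graph after deleting `removed`."""
    seen, comps = set(removed), []
    for v in nodes:
        if v in seen:
            continue
        stack, comp = [v], []
        seen.add(v)
        while stack:                    # depth-first search from v
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def pairwise_connectivity(nodes, adj, removed):
    """Number of still-connected node pairs -- the measure to minimize."""
    return sum(len(c) * (len(c) - 1) // 2
               for c in components(nodes, adj, removed))

def greedy_cnp(nodes, adj, K):
    """Delete, one at a time, the node whose removal most reduces
    pairwise connectivity -- the greedy baseline such hybrids refine."""
    removed = set()
    for _ in range(K):
        best = min((v for v in nodes if v not in removed),
                   key=lambda v: pairwise_connectivity(nodes, adj,
                                                       removed | {v}))
        removed.add(best)
    return removed

# Invented example: two triangles joined at the cut vertex "c".
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("c", "d"), ("d", "e"), ("e", "c")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)
nodes = sorted(adj)
print(greedy_cnp(nodes, adj, 1))  # → {'c'}
```

A local-search layer of the kind the paper adds would try swapping deleted nodes back in for undeleted ones whenever the exchange lowers the connectivity measure further.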