## The Utility of the Hui-Walter Paradigm for the Evaluation of Diagnostic Tests in the Analysis of Social Science Data

### Diagnosis and Prediction (1999-01-01) 114: 7-29 , January 01, 1999

Just as in medical research, social scientists are concerned with the correct classification of individuals into well defined categories. Many economic policy decisions rely on the unemployment rate and related labor statistics. As the unemployment rate is the ratio of the estimated number of unemployed persons to the total labor force, misclassification of survey respondents may lead to an under- or overestimate of it. Thus, estimating the accuracy of the original interview is quite important, and the Census Bureau conducts a special reinterview study of about 20,000 respondents per year to monitor their error rates. In law, a large body of research (Hans and Vidmar, 1991; Blank and Rosenthal, 1991) has raised questions about how well the jury functions. The basic problem can be placed in the classification framework. How well does the current system perform in correctly determining that a guilty party is found guilty and in not convicting an individual who should be acquitted? This article reports some exploratory work we have carried out on extending and modifying the Hui-Walter methodology for evaluating the accuracy of diagnostic tests (see Vianna, 1995, for related work) to enable us to estimate the accuracy of the labor force data and to reanalyze a classic study (Kalven and Zeisel, 1966) of judge-jury agreements to estimate the accuracy of jury verdicts.
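The bias mechanism the abstract alludes to is simple to make concrete. A minimal sketch, with purely hypothetical rates: if a survey classifier has a given sensitivity (probability of labeling a truly unemployed person as unemployed) and specificity (probability of labeling a truly employed person as employed), the expected observed rate can differ noticeably from the true one.

```python
# Hedged sketch: how misclassification shifts an estimated unemployment
# rate. All three rates below are hypothetical, chosen for illustration.

def observed_rate(true_rate, sensitivity, specificity):
    """Expected fraction classified 'unemployed' when the classifier
    has the given sensitivity and specificity."""
    return true_rate * sensitivity + (1.0 - true_rate) * (1.0 - specificity)

true_rate = 0.05  # 5% truly unemployed (hypothetical)
rate = observed_rate(true_rate, sensitivity=0.90, specificity=0.98)
print(round(rate, 4))  # 0.064: a 5% true rate is reported as 6.4%
```

Even a 98%-specific classifier overstates a rare category, because false positives among the large employed group outweigh the false negatives.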

## Statistics: Concept, Fields of Application, Historical Notes

### Repetitorium Statistik (1999-01-01): 2-3 , January 01, 1999

## Description of One-Dimensional Samples

### Statistische Datenanalyse (1999-01-01): 11-32 , January 01, 1999

### Summary

The introduction (1.1.b) mentioned the *example* of a study on *sleeping pills*. As a reminder, the data were

1.2 2.4 1.3 1.3 0.0 1.0 1.8 0.8 4.6 1.4 .
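A one-dimensional sample like this is typically summarized by location and spread. Using the ten values quoted above (standard library only):

```python
# Basic descriptive statistics for the ten sleeping-pill values above.
from statistics import mean, median, stdev

data = [1.2, 2.4, 1.3, 1.3, 0.0, 1.0, 1.8, 0.8, 4.6, 1.4]

print(round(mean(data), 2))   # 1.58
print(median(data))           # 1.3
print(round(stdev(data), 2))  # 1.23 (sample standard deviation)
```

Note that the mean (1.58) sits above the median (1.3): the single large value 4.6 pulls it up, which is exactly the kind of feature a one-dimensional description should surface.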

## Test Theory

### Repetitorium Statistik (1999-01-01): 285-324 , January 01, 1999

### Summary

Test theory is the branch of inductive statistics concerned with the theoretical foundations and the mathematical-statistical procedures for testing hypotheses about unknown distributions and/or their parameters on the basis of random samples. Statistical testing procedures are divided into parametric and nonparametric tests.
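The parametric/nonparametric distinction can be illustrated on the sleeping-pill data quoted in the preceding entry. A sketch, not tied to this book's notation: the one-sample t statistic (parametric, assumes approximate normality) uses the magnitudes, while a sign test (nonparametric) uses only the signs.

```python
# Parametric vs. nonparametric test statistics on the same sample.
from math import sqrt
from statistics import mean, stdev

data = [1.2, 2.4, 1.3, 1.3, 0.0, 1.0, 1.8, 0.8, 4.6, 1.4]
n = len(data)

# Parametric: t statistic for H0: mu = 0 (assumes normality)
t = mean(data) / (stdev(data) / sqrt(n))
print(round(t, 2))  # 4.06

# Nonparametric: the sign test counts only signs, not magnitudes
positives = sum(1 for x in data if x > 0)
print(positives, "of", n)  # 9 of 10 (the single zero is a tie)
```

Both statistics point the same way here, but the sign test would be unaffected if the outlier 4.6 were, say, 46.0, while the t statistic would change substantially.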

## Optimal design of air quality networks detecting warning and alert conditions

### Journal of the Italian Statistical Society (1999-04-01) 8: 61-73 , April 01, 1999

### Summary

A statistical method is presented to determine the optimal design of air quality networks detecting warning and alert levels. A simulation model is used to describe temporal and spatial variations of atmospheric pollutants; air quality patterns serve as the database of the procedure to design the network. Only the sites exceeding warning and alert levels, under different meteorological scenarios, are considered as potential monitoring stations. For the selection of the optimal set, spatial and temporal representativeness criteria are introduced; accordingly, the optimal set provides a complete representativeness of the space and time considered. The method is applied to the Mestre urban area, in the Venice district, for the carbon monoxide pollutant.
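The selection step described above has the flavor of a covering problem: choose a subset of candidate stations that together represent every simulated exceedance. A greedy set-cover pass is one plausible sketch (this is not the authors' algorithm, and the station names and event sets below are hypothetical):

```python
# Hedged sketch: greedy selection of monitoring stations until every
# simulated exceedance event (scenario, location) is represented.

def select_stations(coverage):
    """coverage: dict mapping station -> set of exceedance events it detects."""
    needed = set().union(*coverage.values())
    chosen = []
    while needed:
        # pick the station covering the most still-uncovered events
        best = max(coverage, key=lambda s: len(coverage[s] & needed))
        chosen.append(best)
        needed -= coverage[best]
    return chosen

coverage = {
    "A": {"e1", "e2"},
    "B": {"e2", "e3", "e4"},
    "C": {"e4"},
}
print(select_stations(coverage))  # ['B', 'A']
```

Greedy covering is only a heuristic; the paper's representativeness criteria in space and time would replace the simple event sets used here.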

## A new approach to stock price modelling and the efficiency of the Italian stock exchange

### Journal of the Italian Statistical Society (1999-04-01) 8: 25-47 , April 01, 1999

### Summary

In this paper, we propose a new model of asset prices which takes account of the investment strategies of three different kinds of agents: the market-makers, who operate rationally on the basis of the asset fundamentals; the smart buy-and-sell agents, who intervene when the prices reach particular levels; and the non-smart buy-and-sell agents, who trade infrequently, mainly following psychological motivations. The different behavior of these groups of agents can determine temporary inefficiencies on financial markets, and we show that, by considering these inefficiencies, it is possible to improve forecasting of asset prices.
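A toy sketch of the three agent types, with all coefficients, thresholds, and the shock sequence purely illustrative (the paper's actual model is not reproduced here):

```python
# Hypothetical one-step price update combining three agent types.

def step(price, fundamental, noise):
    # market-makers push the price toward the fundamental value
    mm = 0.5 * (fundamental - price)
    # smart agents buy below 90 and sell above 110 (hypothetical bands)
    smart = 1.0 if price < 90 else (-1.0 if price > 110 else 0.0)
    # non-smart agents contribute an exogenous shock
    return price + mm + smart + noise

price, fundamental = 80.0, 100.0
for noise in [0.5, -0.3, 0.2, 0.0, 0.1]:
    price = step(price, fundamental, noise)
print(round(price, 2))  # 99.58: the price is drawn toward the fundamental
```

The temporary gap between price and fundamental in early steps is the kind of transient inefficiency the abstract says can be exploited for forecasting.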

## Forecasting with unequally spaced data by a functional principal component approach

### Test (1999-06-01) 8: 233-253 , June 01, 1999

The Principal Component Regression model of multiple responses is extended to forecast a continuous-time stochastic process. Orthogonal projection on a subspace of trigonometric functions is applied in order to estimate the principal components using discrete-time observations from a sample of regular curves. The forecasts provided by this approach are compared with classical principal component regression on simulated data.
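The projection step can be sketched concretely: fit unequally spaced observations of a curve onto a small trigonometric basis by least squares. This is only the projection ingredient, under the assumption of the basis {1, cos 2πt, sin 2πt}; the paper's principal-component extraction over a sample of curves is omitted.

```python
# Hedged sketch: least-squares projection of unequally spaced data
# onto a three-function trigonometric basis (pure standard library).
from math import cos, pi, sin

def trig_fit(ts, ys):
    """Fit y ~ c0 + c1*cos(2*pi*t) + c2*sin(2*pi*t) by least squares."""
    basis = lambda t: [1.0, cos(2 * pi * t), sin(2 * pi * t)]
    X = [basis(t) for t in ts]
    # normal equations A c = b
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
    # tiny Gaussian elimination with partial pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for j in range(k, 3):
                A[r][j] -= f * A[k][j]
            b[r] -= f * b[k]
    c = [0.0] * 3
    for k in reversed(range(3)):
        c[k] = (b[k] - sum(A[k][j] * c[j] for j in range(k + 1, 3))) / A[k][k]
    return c

ts = [0.05, 0.2, 0.33, 0.6, 0.81]  # unequally spaced observation times
ys = [2 + 3 * cos(2 * pi * t) - sin(2 * pi * t) for t in ts]
print([round(x, 6) for x in trig_fit(ts, ys)])  # [2.0, 3.0, -1.0]
```

Because the synthetic curve lies exactly in the basis, the fit recovers the coefficients; with noisy sampled curves, one would fit each curve this way and then run PCA on the coefficient vectors.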

## Limit distributions for point processes of exceedances of random levels

### Test (1999-06-01) 8: 191-200 , June 01, 1999

We give a sufficient condition for the convergence, as $n \to \infty$, of the point process of exceedances of the random levels $U = \{u_n(T)\}_{n \ge 1}$ by the random variables of the sequence $X = \{X_n\}_{n \ge 1}$, defined by
$$S_n[X, U] = \sum_{i=1}^n 1_{\{X_i > u_n(T)\}}\, \delta_{i/n}.$$
The limiting point process is a doubly stochastic compound Poisson process, whose stochastic intensity measure is regulated by the random variable $T$. The main result is applied to the study of the asymptotic behaviour of some exceedance point processes of real levels, which can be rewritten as point processes of exceedances of random levels. We consider exceedances by some linear models $Y_n = \alpha X_n + \beta T$ and by chain-dependent sequences, which do not satisfy, in general, the long-range dependence condition Δ from Hsing *et al.* (1988).
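The point process $S_n[X, U]$ defined above simply places unit mass at $i/n$ for each index $i$ whose observation exceeds the level. A minimal illustration with a fixed (non-random) level and illustrative data:

```python
# Hedged illustration: the support of the exceedance point process for
# one concrete sequence and level. Data and level are illustrative.

def exceedance_points(xs, level):
    """Return the points i/n (1-indexed) at which X_i exceeds the level."""
    n = len(xs)
    return [i / n for i, x in enumerate(xs, start=1) if x > level]

xs = [0.2, 1.7, 0.9, 2.3, 1.1, 0.4, 1.9, 0.8]
print(exceedance_points(xs, level=1.5))  # [0.25, 0.5, 0.875]
```

In the paper the level $u_n(T)$ is itself random, which is what produces the doubly stochastic (rather than ordinary) compound Poisson limit.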

## Algorithms to Find Exact Inclusion Probabilities for Conditional Poisson Sampling and Pareto πps Sampling Designs

### Methodology And Computing In Applied Probability (1999-12-01) 1: 457-469 , December 01, 1999

Conditional Poisson Sampling Design as developed by Hájek may be defined as a Poisson sampling conditioned by the requirement that the sample has fixed size. In this paper, an algorithm is implemented to calculate the conditional inclusion probabilities given the inclusion probabilities under Poisson Sampling. A simple algorithm is also given for second order inclusion probabilities in Conditional Poisson Sampling. Furthermore a numerical method is introduced to compute the unconditional inclusion probabilities when the conditional inclusion probabilities are predetermined. Simultaneously, we study the Pareto πps sampling design. This method, introduced by Rosén, belongs to a class of sampling schemes called Order Sampling with Fixed Distribution Shape. Methods are provided to compute the first and second order inclusion probabilities numerically also in this case, as well as two procedures to adjust the parameters to get predetermined inclusion probabilities.
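First-order conditional inclusion probabilities can be computed exactly by a standard Poisson-binomial recursion (a common textbook approach; not necessarily the algorithm of this paper): condition Poisson sampling with probabilities $p_i$ on the realized sample size equaling $n$, so that $\pi_i = p_i \, P(\text{size of sample from the other units} = n-1)/P(\text{size} = n)$.

```python
# Hedged sketch: exact first-order inclusion probabilities under
# Conditional Poisson Sampling via the Poisson-binomial size distribution.

def size_dist(ps):
    """Distribution of the Poisson sample size (Poisson-binomial)."""
    dist = [1.0]
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for k, d in enumerate(dist):
            new[k] += d * (1 - p)      # unit not sampled
            new[k + 1] += d * p        # unit sampled
        dist = new
    return dist

def conditional_inclusion(ps, n):
    """pi_i = p_i * P(others' size = n-1) / P(size = n)."""
    total = size_dist(ps)[n]
    return [p * size_dist(ps[:i] + ps[i + 1:])[n - 1] / total
            for i, p in enumerate(ps)]

ps = [0.2, 0.5, 0.7, 0.6]              # illustrative Poisson probabilities
pi = conditional_inclusion(ps, n=2)
print([round(x, 4) for x in pi])
print(round(sum(pi), 6))               # the pi_i always sum to n = 2
```

This brute-force version recomputes the size distribution once per unit, which costs $O(N^2 n)$; the algorithms discussed in the paper are aimed at doing such computations efficiently and extending them to second-order probabilities and to Pareto πps designs.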