## CURRENTLY DISPLAYING:

Showing 1 to 10 of 1044 matching Articles
## On Optimal Designs for High Dimensional Binary Regression Models

### Optimum Design 2000 (2001-01-01) 51: 275-285 , January 01, 2001

We consider the problem of deriving optimal designs for generalised linear models depending on several design variables. Ford, Torsney and Wu (1992) consider a two-parameter, single-design-variable case. They derive a range of optimal designs, and make conjectures about *D*-optimal designs for all possible design intervals in the case of binary regression models. Motivated by these conjectures, we establish results concerning the number of support points in the multi-design-variable case, an area which, for non-linear models, remains largely uncharted.
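The single-variable logistic case behind these conjectures can be sketched numerically. For a two-parameter logistic model η = α + β*x* on an unrestricted interval, the *D*-optimal design is known to put weight 1/2 at two symmetric points ±*c* on the induced scale η; a crude grid search (illustrative only, not the paper's method) recovers the classical optimum *c* ≈ 1.5434:

```python
import numpy as np

# D-optimal two-point design for the logistic model eta = alpha + beta*x,
# searched on the induced scale: support at +-c, weight 1/2 each.
c = np.linspace(0.01, 5.0, 5000)
p = 1.0 / (1.0 + np.exp(-c))      # logistic response probability at eta = c
w = p * (1.0 - p)                 # binary-regression information weight
# With symmetric support, the 2x2 information matrix is diag(w, w*c**2),
# so its determinant is (w**2) * c**2; maximise it over c.
det_M = (w ** 2) * c ** 2
c_star = c[np.argmax(det_M)]      # classical value: about 1.5434
```

The optimum solves *c*·tanh(*c*/2) = 1, which the grid search approximates to the grid resolution.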

## Gröbner Basis Methods in Mixture Experiments and Generalisations

### Optimum Design 2000 (2001-01-01) 51: 33-44 , January 01, 2001

The theory of mixture designs has a considerable history. Here we address the analysis of such experiments, keeping in mind the algebraic interpretation of the structural restriction Σ*x*_{i} = 1. We present an approach for rewriting models for mixture experiments, based on constructing homogeneous orthogonal polynomials using Gröbner bases. Examples illustrating the approach are given.
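The basic algebraic ingredient — reduction of model terms modulo the ideal generated by the mixture constraint — can be shown with `sympy`; this is a minimal sketch of that one step, not the paper's construction of homogeneous orthogonal polynomials:

```python
from sympy import symbols, groebner, reduced, expand

x, y, z = symbols('x y z')

# Groebner basis of the ideal generated by the mixture constraint
# x + y + z = 1 (a single generator, so it is its own basis).
basis = groebner([x + y + z - 1], x, y, z, order='lex')

# Normal form of the model term x**2 modulo the ideal: the division
# algorithm eliminates x, rewriting x**2 as (1 - y - z)**2.
_, r = reduced(x**2, list(basis.exprs), x, y, z, order='lex')
```

In a mixture model this is exactly how redundant monomials are rewritten in terms of the remaining components.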

## Darstellung des statistischen Materials

### Grundlagen der Statistik (2001-01-01): 19-38 , January 01, 2001

### Summary

You should be able to present statistical data clearly in tables and graphical displays, and to interpret the most important graphical displays.

## Bootstrapping Error Component Models

### Computational Statistics (2001-07-01) 16: 221-231 , July 01, 2001

### Summary

This paper proposes several resampling algorithms suitable for error component models and evaluates them in the context of bootstrap testing. In short, all the algorithms work well and lead to tests with correct or close to correct size. There is thus little or no reason not to use the bootstrap with error component models.
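As a rough sketch of the kind of resampling involved — a toy one-way error component model *y*_{it} = β*x*_{it} + *u*_{i} + *e*_{it} with made-up parameter values, not the paper's specific algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a one-way error component panel (hypothetical values):
# y_it = beta * x_it + u_i + e_it, n groups, T periods each.
n, T, beta = 50, 5, 2.0
u = rng.normal(0.0, 1.0, size=n)                 # individual effects
x = rng.normal(size=(n, T))
y = beta * x + u[:, None] + rng.normal(0.0, 1.0, size=(n, T))

# Pooled OLS slope and its residuals, split into estimated
# individual effects (group means) and idiosyncratic remainders.
b_hat = (x * y).sum() / (x * x).sum()
resid = y - b_hat * x
u_hat = resid.mean(axis=1)
e_hat = resid - u_hat[:, None]

# One simple resampling scheme: redraw individual effects and whole
# rows of idiosyncratic errors independently, with replacement.
B = 200
boot = np.empty(B)
for b in range(B):
    u_star = rng.choice(u_hat, size=n, replace=True)
    e_star = e_hat[rng.integers(0, n, size=n)]
    y_star = b_hat * x + u_star[:, None] + e_star
    boot[b] = (x * y_star).sum() / (x * x).sum()

se_boot = boot.std(ddof=1)   # bootstrap standard error of the slope
```

A bootstrap test would compare a studentised statistic against the quantiles of its bootstrap distribution rather than the asymptotic ones.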

## Minimum Divergence Estimators Based on Grouped Data

### Annals of the Institute of Statistical Mathematics (2001-06-01) 53: 277-288 , June 01, 2001

The paper considers statistical models with real-valued observations i.i.d. by *F*(*x*, θ_{0}) from a family of distribution functions (*F*(*x*, θ); θ ∈ Θ), Θ ⊂ *R*^{s}, *s* ≥ 1. For random quantizations defined by sample quantiles (*F*_{n}^{−1}(λ_{1}), …, *F*_{n}^{−1}(λ_{m−1})) of arbitrary fixed orders 0 < λ_{1} < … < λ_{m−1} < 1, estimators θ_{φ,n} of θ_{0} are studied which minimize φ-divergences of the theoretical and empirical probabilities. Under appropriate regularity conditions, all these estimators are shown to be as efficient (first order, in the sense of Rao) as the MLE in the model quantized nonrandomly by (*F*^{−1}(λ_{1}, θ_{0}), …, *F*^{−1}(λ_{m−1}, θ_{0})). Moreover, the Fisher information matrix *I*_{m}(θ_{0}, λ) of the latter model with the equidistant orders λ = (λ_{j} = *j*/*m* : 1 ≤ *j* ≤ *m* − 1) arbitrarily closely approximates the Fisher information *J*(θ_{0}) of the original model when *m* is sufficiently large. Thus random binning by a large number of quantiles of equidistant orders leads to appropriate estimates of the type considered above.
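A toy numerical version of this scheme — a normal location model, Pearson's chi-square as the φ-divergence, cells cut at sample quantiles of equidistant orders λ_{j} = *j*/*m*; all concrete values here are made up for illustration:

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(1)
theta0 = 3.0                            # true location, unit variance
sample = rng.normal(theta0, 1.0, size=2000)

# Random quantization: cell boundaries at sample quantiles of
# equidistant orders j/m, so each empirical cell probability is 1/m.
m = 10
lams = np.arange(1, m) / m
edges = np.quantile(sample, lams)
p_emp = np.diff(np.concatenate(([0.0], lams, [1.0])))

def pearson_div(theta):
    # Pearson chi-square divergence between empirical and theoretical
    # cell probabilities under N(theta, 1).
    cdf = np.array([norm_cdf(e - theta) for e in edges])
    p_th = np.clip(np.diff(np.concatenate(([0.0], cdf, [1.0]))), 1e-12, None)
    return ((p_emp - p_th) ** 2 / p_th).sum()

# A crude grid minimisation stands in for a proper optimiser.
grid = np.linspace(2.0, 4.0, 401)
theta_hat = grid[np.argmin([pearson_div(t) for t in grid])]
```

With *m* = 10 equidistant orders the estimator is already close to the MLE in efficiency, in line with the approximation result above.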

## An Introduction to Sequential Monte Carlo Methods

### Sequential Monte Carlo Methods in Practice (2001-01-01): 3-14 , January 01, 2001

Many real-world data analysis tasks involve estimating unknown quantities from some given observations. In most of these applications, prior knowledge about the phenomenon being modelled is available. This knowledge allows us to formulate Bayesian models, that is prior distributions for the unknown quantities and likelihood functions relating these quantities to the observations. Within this setting, all inference on the unknown quantities is based on the posterior distribution obtained from Bayes’ theorem. Often, the observations arrive sequentially in time and one is interested in performing inference on-line. It is therefore necessary to update the posterior distribution as data become available. Examples include tracking an aircraft using radar measurements, estimating a digital communications signal using noisy measurements, or estimating the volatility of financial instruments using stock market data. Computational simplicity in the form of not having to store all the data might also be an additional motivating factor for sequential methods.
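For concreteness, a minimal bootstrap particle filter for a toy linear-Gaussian state-space model — the model, noise levels, and particle count are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy state-space model (assumed for illustration):
#   x_t = 0.9 x_{t-1} + v_t,  v_t ~ N(0, 1)      (prior / dynamics)
#   y_t = x_t + w_t,          w_t ~ N(0, 0.5^2)  (likelihood)
T, N = 100, 500
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal()
y = x + rng.normal(0.0, 0.5, size=T)

# Bootstrap particle filter: propagate particles through the dynamics,
# weight by the likelihood, then resample -- a sequential update of
# the posterior as each observation arrives.
particles = rng.normal(0.0, 1.0, size=N)
means = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + rng.normal(size=N)   # propagate
    logw = -0.5 * ((y[t] - particles) / 0.5) ** 2      # Gaussian log-lik.
    w = np.exp(logw - logw.max())
    w /= w.sum()
    means[t] = (w * particles).sum()                   # filtering mean
    particles = particles[rng.choice(N, size=N, p=w)]  # resample
```

The filtering means track the hidden state on-line, using only the current particle set rather than the full data history.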

## Ronald Aylmer Fisher

### Statisticians of the Centuries (2001-01-01): 389-397 , January 01, 2001

R. A. Fisher transformed the statistics of his day from a modest collection of useful ad hoc techniques into a powerful and systematic body of theoretical concepts and practical methods. This achievement was all the more impressive because at the same time he pursued a dual career as a biologist, laying down, together with Sewall Wright and J. B. S. Haldane, the foundations of modern theoretical population genetics.

## Einführung

### Wahrscheinlichkeitsrechnung und schließende Statistik (2001-01-01): 1-3 , January 01, 2001

### Summary

Companies depend to a high degree on data that informs them about conditions and developments inside and outside the company. Without such data, the planning, steering, and control of the entire business would not be possible. Some of the required data are used in their original form; others must first be processed and analysed in a purpose-oriented way before use.

## Interacting Particle Filtering with Discrete-Time Observations: Asymptotic Behaviour in the Gaussian Case

### Stochastics in Finite and Infinite Dimensions (2001-01-01): 101-122 , January 01, 2001

In this paper we are concerned with a very particular case of the following general filtering problem. The state process *X* is the solution of a stochastic differential equation of the form
$$d{X_t} = \alpha ({X_t})\,dt + \beta ({X_t})\,d{W_t}, \qquad \mathcal{L}({X_0}) = {\pi _0},$$
where π_{0} is a known distribution on ℝ^{d}, α and β are known functions, and *W* is a *d*-dimensional Wiener process. We have noisy observations *Y*_{1},...,*Y*_{N} at *N* regularly spaced times, and without loss of generality we will assume that these times are 1, …, *N*. That is, at each time *i* ∈ {1, …, *N*}
we have an ℝ^{d}-valued observation *Y*_{i} given by
$${Y_i} = h({X_i},{\varepsilon _i}),$$
where the ε_{i} are i.i.d. *q*′-dimensional variables, independent of *X* and with a law having a known density *g*, and *h* is a known function from ℝ^{d} × ℝ^{q′} into ℝ^{q}. We denote by π_{Y,N} the filter for *X*_{N}, that is, a regular version of the conditional distribution of the random variable *X*_{N} given *Y*_{1},…,*Y*_{N}.