Showing 1 to 10 of 159 matching Articles
## Nonlinear programming without differentiability

### Trabajos de estadistica y de investigacion operativa (1979-10-01) 30: 37-53

## Discretized likelihood methods—asymptotic properties of discretized likelihood estimators (DLE's)

### Annals of the Institute of Statistical Mathematics (1979-12-01) 31: 39-56

Suppose that $X_1, X_2, \ldots, X_n, \ldots$ is a sequence of i.i.d. random variables with a density $f(x, \theta)$. Let $c_n$ be a maximum order of consistency. We consider a solution $\hat\theta_n$ of the discretized likelihood equation
$$\sum_{i=1}^{n} \log f(X_i, \hat\theta_n + r c_n^{-1}) - \sum_{i=1}^{n} \log f(X_i, \hat\theta_n) = a_n(\hat\theta_n, r),$$
where $a_n(\theta, r)$ is chosen so that $\hat\theta_n$ is asymptotically median unbiased (AMU). The solution $\hat\theta_n$ is then called a discretized likelihood estimator (DLE). In this paper it is shown, by comparison with the DLE, that a maximum likelihood estimator (MLE) is second-order asymptotically efficient but not third-order asymptotically efficient in the regular case. Further, asymptotic efficiency (including the higher-order cases) may be discussed systematically by the discretized likelihood methods.
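As a concrete illustration, the discretized likelihood equation can be solved numerically. The sketch below is not from the paper: it assumes an $N(\theta, 1)$ model, takes $c_n = \sqrt{n}$ (the order of consistency in the regular case), and sets $a_n \equiv 0$ purely for simplicity, whereas the paper chooses $a_n$ so that the solution is AMU.

```python
import math
import random

def loglik(data, theta):
    # Log-likelihood of N(theta, 1), dropping the additive constant.
    return sum(-0.5 * (x - theta) ** 2 for x in data)

def dle(data, r=1.0, a_n=0.0, lo=-10.0, hi=10.0, tol=1e-8):
    """Solve sum log f(X_i, th + r/c_n) - sum log f(X_i, th) = a_n by bisection.
    Illustrative only: a_n = 0 here, not the AMU-calibrated choice of the paper."""
    n = len(data)
    c_n = math.sqrt(n)  # assumed order of consistency (regular case)
    def g(theta):
        return loglik(data, theta + r / c_n) - loglik(data, theta) - a_n
    # For the normal model g is strictly decreasing in theta, so bisection works.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
sample = [random.gauss(2.0, 1.0) for _ in range(500)]
print(dle(sample))
```

For this model with $a_n = 0$ the equation has the closed-form root $\bar X - r/(2\sqrt{n})$, which the bisection recovers; choosing $a_n$ nonzero shifts the root to make the estimator median unbiased.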

## An interpretation of the factorial indexes in the light of Divisia Integral Indexes

### Statistische Hefte (1979-12-01) 20: 261-269

## Back Matter - Erhebungstechniken

### Erhebungstechniken (1979-01-01)

## Estimation of a regression coefficient after two preliminary tests of significance

### Metrika (1979-12-01) 26: 183-193

### Summary

The estimation of the regression coefficient of a population defined by $E(y) = \gamma + \beta x$, incorporating two preliminary tests of significance, is discussed. The experimenter has two random samples of different sizes from two such populations, with regression coefficients $\beta_1$ and $\beta_2$ respectively, where $\beta_2$ may possibly be equal to $\beta_1$. It is further conjectured that the common conditional variance $\sigma^2$ of the two populations has a specified value $\sigma_0^2$. The two preliminary tests are used to resolve these two uncertainties.
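The two-stage scheme described in the summary can be sketched as follows. This is an illustrative reading, not the paper's estimator: the pooling rule, the z-test for slope equality, and the Wilson-Hilferty normal approximation to the chi-square variance test are all assumptions made here.

```python
import math
import random

def slope_stats(xs, ys):
    """Least-squares slope, Sxx, and residual sum of squares for y = g + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    rss = sum((y - my - b * (x - mx)) ** 2 for x, y in zip(xs, ys))
    return b, sxx, rss

def chi2_accepts(stat, df, z_crit=1.96):
    # Two-sided chi-square test via the Wilson-Hilferty normal approximation.
    z = ((stat / df) ** (1 / 3) - (1 - 2 / (9 * df))) / math.sqrt(2 / (9 * df))
    return abs(z) < z_crit

def pretest_slope(x1, y1, x2, y2, sigma0_sq, z_crit=1.96):
    b1, sxx1, rss1 = slope_stats(x1, y1)
    b2, sxx2, rss2 = slope_stats(x2, y2)
    df = len(x1) + len(x2) - 4
    # Preliminary test 1: does the common variance equal sigma0^2?
    var = sigma0_sq if chi2_accepts((rss1 + rss2) / sigma0_sq, df) else (rss1 + rss2) / df
    # Preliminary test 2: are the two slopes equal?
    z = (b1 - b2) / math.sqrt(var * (1 / sxx1 + 1 / sxx2))
    if abs(z) < z_crit:
        # Accepted: pool both samples' information about the common slope.
        return (b1 * sxx1 + b2 * sxx2) / (sxx1 + sxx2)
    return b1  # rejected: estimate beta_1 from the first sample alone

random.seed(1)
x1 = [random.uniform(0, 10) for _ in range(40)]
y1 = [1.0 + 2.0 * x + random.gauss(0, 1) for x in x1]
x2 = [random.uniform(0, 10) for _ in range(60)]
y2 = [0.5 + 2.0 * x + random.gauss(0, 1) for x in x2]
print(pretest_slope(x1, y1, x2, y2, sigma0_sq=1.0))
```

With both true slopes equal to 2 and the conjectured variance correct, both pretests typically accept and the pooled estimate is returned.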

## An optimal test for the parameter of the exponential distribution and other lifetime distributions

### Statistische Hefte (1979-09-01) 20: 183-190

## A random packing model for elections

### Annals of the Institute of Statistical Mathematics (1979-12-01) 31: 157-167

## Measures of Association for Cross Classifications III: Approximate Sampling Theory

### Measures of Association for Cross Classifications (1979-01-01): 76-130

The population measures of association for cross classifications, discussed in the authors' prior publications, have sample analogues that are approximately normally distributed for large samples. (Some qualifications and restrictions are necessary.) These large-sample normal distributions, with their associated standard errors, are derived for various measures of association and various methods of sampling. It is explained how the large-sample normality may be used to test hypotheses about the measures and about differences between them, and to construct corresponding confidence intervals. Numerical results are given on the adequacy of the large-sample normal approximations. To facilitate extension of the large-sample results to other measures of association, and to other modes of sampling than those treated here, the basic manipulative tools of large-sample theory are explained and illustrated.
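As an example of the kind of large-sample calculation this chapter derives, the sketch below computes the sample gamma for a cross classification together with the standard delta-method estimate of its asymptotic standard error and a normal-approximation confidence interval. The table is hypothetical, and the ASE formula is the textbook delta-method one rather than anything quoted from the chapter.

```python
import math

def gamma_with_ci(table, z=1.96):
    """Sample Goodman-Kruskal gamma with its large-sample (delta-method)
    standard error and a normal-approximation confidence interval."""
    R, K = len(table), len(table[0])
    def conc(i, j):  # count of observations in cells concordant with (i, j)
        return sum(table[r][c] for r in range(R) for c in range(K)
                   if (r - i) * (c - j) > 0)
    def disc(i, j):  # count of observations in cells discordant with (i, j)
        return sum(table[r][c] for r in range(R) for c in range(K)
                   if (r - i) * (c - j) < 0)
    # Each concordant/discordant pair is counted from both ends, hence / 2.
    C = sum(table[i][j] * conc(i, j) for i in range(R) for j in range(K)) / 2
    D = sum(table[i][j] * disc(i, j) for i in range(R) for j in range(K)) / 2
    g = (C - D) / (C + D)
    # Delta-method asymptotic standard error of the sample gamma.
    s = sum(table[i][j] * (C * disc(i, j) - D * conc(i, j)) ** 2
            for i in range(R) for j in range(K))
    ase = 4 * math.sqrt(s) / (C + D) ** 2
    return g, ase, (g - z * ase, g + z * ase)

# Hypothetical 3x3 cross classification with a positive association.
tbl = [[20, 10, 5],
       [10, 15, 10],
       [5, 10, 20]]
g, ase, ci = gamma_with_ci(tbl)
print(round(g, 3), round(ase, 3), ci)
```

The confidence interval $\hat\gamma \pm z \cdot \mathrm{ASE}$ is exactly the use of large-sample normality the abstract describes; a test of independence would instead compare $\hat\gamma / \mathrm{ASE}_0$ (with the null-hypothesis standard error) to a normal critical value.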