## SEARCH

#### Institution

##### (top 5 of 169)

- Universidade de São Paulo (4)
- Universidad Complutense de Madrid (3)
- University of Connecticut (3)
- University of Granada (3)
- University of Toronto (3)

#### Author

##### (top 5 of 232)

- Castro, Mário (3)
- Moreno, Elías (3)
- Consonni, Guido (2)
- Gómez-Villegas, Miguel A. (2)
- Guglielmi, Alessandra (2)

#### Publication

##### (top 5 of 18)

- Test (30)
- TEST (24)
- Statistical Papers (14)
- Methodology and Computing in Applied Probability (8)
- Trabajos de Estadistica (8)

Showing 1 to 10 of 111 matching articles
## Bayesian tolerance intervals for the balanced two-factor nested random effects model

### TEST 16: 598-612, December 01, 2007

Statistical intervals, properly calculated from sample data, are usually far more informative to decision makers than a point estimate alone, and often a great deal more meaningful than statistical significance or hypothesis tests; they are therefore of paramount interest to practitioners and management. Wolfinger (1998, J Qual Technol 36:162–170) presented a simulation-based approach for determining Bayesian tolerance intervals in a balanced one-way random effects model. In this note the theory and results of Wolfinger are extended to the balanced two-factor nested random effects model. An example illustrates the flexibility and unique features of the Bayesian simulation method for the construction of tolerance intervals.
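
The simulation idea behind this approach can be sketched as follows: given posterior draws of the mean and variance components, a one-sided (p, 1 − α) tolerance limit is the (1 − α) posterior quantile of the draw-wise p-quantiles of a future observation. The parameter draws below are illustrative placeholders, not output of an MCMC fit of the paper's model:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical posterior draws for the two-factor nested model
# y_ijk = mu + a_i + b_j(i) + e_ijk (in practice, from an MCMC fit):
n_draws = 4000
mu    = rng.normal(10.0, 0.1, size=n_draws)   # grand mean
var_a = rng.gamma(2.0, 0.5, size=n_draws)     # factor-A variance component
var_b = rng.gamma(2.0, 0.3, size=n_draws)     # nested factor-B variance component
var_e = rng.gamma(2.0, 0.2, size=n_draws)     # error variance

p, alpha = 0.95, 0.05                         # content p, confidence 1 - alpha
total_sd = np.sqrt(var_a + var_b + var_e)

# For each posterior draw, the p-quantile of a future observation:
upper_p_quantile = mu + norm.ppf(p) * total_sd

# One-sided (p, 1 - alpha) Bayesian tolerance limit: the (1 - alpha)
# posterior quantile of the draw-wise p-quantiles.
tol_limit = np.quantile(upper_p_quantile, 1 - alpha)
print(tol_limit)
```

The same recipe applies to any model for which posterior draws of the variance components are available, which is what makes the simulation method flexible.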

## Statistical analysis for Kumaraswamy’s distribution based on record data

### Statistical Papers 54: 355-369, May 01, 2013

In this paper we review some results on record values for well-known probability density functions and, based on *m* records from Kumaraswamy’s distribution, obtain estimators of the two parameters and of the future *s*th record value. These estimates are derived using maximum likelihood and Bayesian approaches. In the Bayesian approach, the two parameters are treated as random variables, and estimators of the parameters and of the future *s*th record value, given *m* observed past record values, are obtained under the well-known squared error loss (SEL) function and a linear exponential (LINEX) loss function. The findings are illustrated with real and computer-generated data.
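
As a rough sketch of the maximum likelihood side (with simulated rather than real record data, and parameter values a = 2, b = 3 chosen only for illustration): the joint likelihood of the first *m* upper records r_1 < … < r_m from a distribution with density f and cdf F is [∏_{i<m} f(r_i)/(1 − F(r_i))] f(r_m), which for Kumaraswamy's f(x) = a b x^{a−1}(1 − x^a)^{b−1} can be maximized numerically:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def kumaraswamy_sample(a, b, n, rng):
    # Inverse-CDF sampling: F(x) = 1 - (1 - x**a)**b on (0, 1).
    u = rng.uniform(size=n)
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def upper_records(x):
    # Keep each observation that exceeds all previous ones.
    recs, cur = [], -np.inf
    for v in x:
        if v > cur:
            recs.append(v)
            cur = v
    return np.array(recs)

def neg_log_lik(theta, r):
    # Likelihood of the first m upper records:
    # L = [prod_{i<m} f(r_i)/(1 - F(r_i))] * f(r_m)
    a, b = np.exp(theta)          # optimize on the log scale for positivity
    m = len(r)
    ll = (m * (np.log(a) + np.log(b))
          + (a - 1.0) * np.log(r).sum()
          - np.log1p(-r[:-1] ** a).sum()
          + (b - 1.0) * np.log1p(-r[-1] ** a))
    return -ll

r = upper_records(kumaraswamy_sample(2.0, 3.0, 20000, rng))
fit = minimize(neg_log_lik, x0=np.zeros(2), args=(r,), method="Nelder-Mead")
a_hat, b_hat = np.exp(fit.x)
print(a_hat, b_hat)
```

With only the handful of records a long sequence produces, the estimates are necessarily rough, which is part of why the Bayesian alternatives with SEL and LINEX losses are of interest.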

## Marginal and simultaneous predictive classification using stratified graphical models

### Advances in Data Analysis and Classification 10: 305-326, September 01, 2016

An inductive probabilistic classification rule must generally obey the principles of Bayesian predictive inference, such that all observed and unobserved stochastic quantities are jointly modeled and the parameter uncertainty is fully acknowledged through the posterior predictive distribution. Several such rules have recently been considered and their asymptotic behavior has been characterized under the assumption that the observed features or variables used for building a classifier are conditionally independent given a simultaneous labeling of both the training samples and those of unknown origin. Here we extend the theoretical results to predictive classifiers acknowledging feature dependencies either through graphical models or sparser alternatives defined as stratified graphical models. We show through experimentation with both synthetic and real data that the predictive classifiers encoding dependencies have the potential to substantially improve classification accuracy compared with both standard discriminative classifiers and the predictive classifiers based on solely conditionally independent features. In most of our experiments stratified graphical models show an advantage over ordinary graphical models.
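
The conditionally-independent baseline mentioned above has a simple closed form: with Dirichlet priors, each categorical feature contributes a posterior predictive factor (n_match + α)/(n_c + αK). The sketch below illustrates only this marginal baseline (not the graphical-model classifiers of the paper), with a toy dataset of our own:

```python
import numpy as np

def predictive_classify(X_train, y_train, x_new, alpha=1.0):
    """Marginal posterior-predictive classification with conditionally
    independent categorical features and Dirichlet(alpha) priors.
    Returns the class with the largest posterior predictive probability."""
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        # Class prior predictive (Dirichlet-multinomial on the labels):
        log_p = np.log(len(Xc) + alpha) - np.log(len(X_train) + alpha * len(classes))
        for j, v in enumerate(x_new):
            k = len(np.unique(X_train[:, j]))   # number of levels of feature j
            n_match = np.sum(Xc[:, j] == v)
            log_p += np.log(n_match + alpha) - np.log(len(Xc) + alpha * k)
        scores.append(log_p)
    return classes[int(np.argmax(scores))]

# Toy data: feature 0 perfectly separates the classes.
X = np.array([[0, 1], [0, 0], [1, 1], [1, 0]])
y = np.array([0, 0, 1, 1])
print(predictive_classify(X, y, np.array([0, 1])))  # -> 0
```

A simultaneous rule would instead label all test points jointly, and the paper's classifiers replace the independence assumption with (stratified) graphical-model dependencies.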

## Compatible priors for Bayesian model comparison with an application to the Hardy–Weinberg equilibrium model

### TEST 17: 585-605, November 01, 2008

Suppose we entertain Bayesian inference under a collection of models. This requires assigning a corresponding collection of prior distributions, one for each model’s parameter space. In this paper we address the issue of relating priors across models, and provide both a conceptual and a pragmatic justification for this task. Specifically, we consider the notion of “compatible” priors across models, and discuss and compare several strategies to construct such distributions. To explicate the issues involved, we refer to a specific problem, namely, testing the Hardy–Weinberg Equilibrium model, for which we provide a detailed analysis using Bayes factors.
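
A Bayes factor for the Hardy–Weinberg problem has a closed form under simple conjugate choices. The sketch below assumes a Beta(1,1) prior on the allele frequency under HWE and a Dirichlet(1,1,1) prior under the saturated multinomial model; these are illustrative defaults, not the compatible priors constructed in the paper:

```python
from math import log
from scipy.special import gammaln

def log_bf_hwe(n_AA, n_Aa, n_aa):
    """Log Bayes factor of HWE (genotype probabilities p^2, 2pq, q^2 with a
    Beta(1,1) prior on p) against the saturated multinomial model with a
    Dirichlet(1,1,1) prior. The multinomial coefficient cancels."""
    n = n_AA + n_Aa + n_aa
    x, y = 2 * n_AA + n_Aa, 2 * n_aa + n_Aa   # allele counts
    # Marginal likelihood under HWE: 2^{n_Aa} * B(x + 1, y + 1)
    log_m0 = n_Aa * log(2.0) + gammaln(x + 1) + gammaln(y + 1) - gammaln(x + y + 2)
    # Marginal likelihood under the saturated Dirichlet-multinomial model:
    log_m1 = (gammaln(3) - gammaln(n + 3)
              + gammaln(n_AA + 1) + gammaln(n_Aa + 1) + gammaln(n_aa + 1))
    return log_m0 - log_m1

# Counts near HWE proportions (p = 0.5) vs. a total heterozygote deficit:
print(log_bf_hwe(25, 50, 25))   # positive: favours HWE
print(log_bf_hwe(50, 0, 50))    # negative: favours the saturated model
```

The point of the paper is precisely that the answer depends on how the two priors are related, which is what "compatibility" across models is meant to control.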

## A unified approach to estimation of noncentrality parameters, the multiple correlation coefficient, and mixture models

### Mathematical Methods of Statistics 26: 134-148, April 01, 2017

We consider a class of mixture models for positive continuous data and the estimation of an underlying parameter *θ* of the mixing distribution. With a unified approach, we obtain classes of estimators that dominate an unbiased estimator under squared error loss, including smooth estimators. Applications include estimating the noncentrality parameters of chi-square and *F*-distributions, as well as *ρ*^{2}/(1 − *ρ*^{2}), where *ρ* is a multiple correlation coefficient in a multivariate normal set-up. Finally, the findings are extended to situations where there is a lower-bound constraint on *θ*.
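
The flavour of dominating an unbiased estimator under squared error loss can be seen in a minimal simulation: for the noncentrality λ of a chi-square with k degrees of freedom, the unbiased estimator X − k can be negative, and truncating it at zero moves it pointwise closer to λ ≥ 0, hence lowers the risk. (This truncated estimator is a standard textbook example, not the smooth dominating class constructed in the paper.)

```python
import numpy as np

rng = np.random.default_rng(2)
k, lam, n = 5, 1.0, 200_000

# X ~ noncentral chi-square with k degrees of freedom, noncentrality lam.
x = rng.noncentral_chisquare(k, lam, size=n)

unbiased  = x - k                  # unbiased for lam, but can go negative
truncated = np.maximum(x - k, 0)   # simple dominating alternative under SEL

mse_unbiased  = np.mean((unbiased - lam) ** 2)
mse_truncated = np.mean((truncated - lam) ** 2)
print(mse_unbiased, mse_truncated)
```

Since truncation at zero never moves the estimate away from a nonnegative λ, the empirical risk of the truncated estimator is smaller for any sample.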

## On the concepts of admissibility and coherence

### Test 8: 319-338, December 01, 1999

In the present paper it will be argued that if a parameter value assigns zero probability to an open set containing the actual response, then that parameter value should be excluded from the parameter space. When this is done, the model becomes a restricted model, and sampling-theory inferences should focus on the sampling distribution of a hypothetical independent response from the restricted model.

The above situation may arise when probability densities vanish outside a compact set. This phenomenon arises frequently in the real world, but it is usually ignored for reasons of mathematical convenience. Although many statistical procedures remain substantially the same under this restricted model, admissibility properties may change drastically, and inferences known to be inadmissible may turn out to be admissible. Thus, Stein's phenomenon concerning the inadmissibility of the sample mean as an estimator of the population mean of a *p*-variate normal distribution when *p* ≥ 3 may be explained by the fact that the distribution has compact support, a fact usually ignored for reasons of mathematical convenience.

The introduction of a restricted model is also important in the study of coherence. It will be shown that Brunk's theory of countably additive coherence, which admits the use of countably additive improper priors, can be improved by introducing a restricted model. The theory is thereby unified, because it will be proved that a posterior is coherent if and only if it is a Bayes posterior, and simplified, because the prior is no longer required to be minimally compatible with the model.

## Bayesian benchmarking with applications to small area estimation

### TEST 20: 574-588, November 01, 2011

It is well-known that small area estimation needs explicit or at least implicit use of models (cf. Rao in Small Area Estimation, Wiley, New York, 2003). These model-based estimates can differ widely from the direct estimates, especially for areas with very low sample sizes. While model-based small area estimates are very useful, one potential difficulty with such estimates is that when aggregated, the overall estimate for a larger geographical area may be quite different from the corresponding direct estimate, the latter being usually believed to be quite reliable. This is because the original survey was designed to achieve specified inferential accuracy at this higher level of aggregation. The problem can be more severe in the event of model failure as often there is no real check for validity of the assumed model. Moreover, an overall agreement with the direct estimates at an aggregate level may sometimes be politically necessary to convince the legislators of the utility of small area estimates.

One way to avoid this problem is the so-called “benchmarking approach”, which amounts to modifying the model-based estimates so that the aggregate estimate for the larger geographical area is preserved. Currently, the most popular approach is the so-called “raking” or ratio adjustment method, which multiplies all the small area estimates by a constant data-dependent factor so that the weighted total agrees with the direct estimate. There are alternative proposals, mostly from frequentist considerations, which also meet the aforementioned benchmarking criterion.

We propose in this paper a general class of constrained Bayes estimators that also achieve the necessary benchmarking. Many of the frequentist estimators, including some of the raked estimators, follow as special cases of our general result. Explicit Bayes estimators are derived which benchmark the weighted mean, or both the weighted mean and the weighted variability. We illustrate our methodology by estimating poverty rates for school-aged children at the state level and then benchmarking these estimates to match the direct estimate at the national level. Unlike the existing frequentist benchmarking literature, which is primarily based on linear models, the proposed Bayesian approach can accommodate any arbitrary model, and the benchmarked Bayes estimators are based only on the posterior mean and the posterior variance-covariance matrix.
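
The raking adjustment described above is a one-line computation: scale every model-based estimate by a common factor so the weighted total equals the direct estimate. A minimal sketch with made-up numbers (the constrained Bayes estimators of the paper generalize this):

```python
import numpy as np

def rake(model_estimates, weights, direct_total):
    """Ratio ("raking") benchmark adjustment: scale all model-based small
    area estimates by one data-dependent factor so that the weighted total
    matches the direct estimate for the larger area."""
    factor = direct_total / np.dot(weights, model_estimates)
    return factor * np.asarray(model_estimates)

est    = np.array([0.12, 0.20, 0.08, 0.15])  # model-based area estimates
w      = np.array([0.4, 0.3, 0.2, 0.1])      # area weights (sum to 1)
direct = 0.16                                # reliable direct overall estimate

benchmarked = rake(est, w, direct)
print(np.dot(w, benchmarked))                # matches direct, up to rounding
```

Note that every area gets the same multiplicative correction, regardless of how uncertain its own estimate is; that rigidity is one motivation for the constrained Bayes alternatives.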

## Coconut Plant Growth, Mahalanobis Distance, and Jeffreys’ Prior

### Growth Curve Models and Applications 204: 115-125, January 01, 2017

We study coconut plant growth in the saline soil of Sunderban, West Bengal. Two growth environments are compared by Mahalanobis distance. Jeffreys’ noninformative prior and related matching priors are investigated, in cases including a bi-exponential distribution for the first principal component of the analyzed data. Fisher’s information *I*(*θ*) is seen to be a measure of distribution sensitivity in terms of chi-square distance, extending a result given in Rao (1974).

## Variational Bayes model averaging for graphon functions and motif frequencies inference in W-graph models

### Statistics and Computing 26: 1173-1185, November 01, 2016

The *W*-graph refers to a general class of random graph models that can be seen as a random graph limit. It is characterized by both its graphon function and its motif frequencies. In this paper, relying on an existing variational Bayes algorithm for stochastic block models (SBMs), along with the corresponding weights for model averaging, we derive an estimate of the graphon function as an average of SBMs with an increasing number of blocks. In the same framework, we derive the variational posterior frequency of any motif. A simulation study and an illustration on a social network complete our work.
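
The averaging step can be sketched as follows: each fitted SBM with K blocks induces a block-constant graphon, and the model-averaged estimate is the weighted sum of these surfaces. The block proportions, connection probabilities, and model weights below are hypothetical stand-ins for variational posterior output:

```python
import numpy as np

def blockwise_graphon(pi, theta, grid):
    """Block-constant graphon of one SBM: pi holds the block proportions,
    theta[k, l] the connection probabilities; evaluated on a grid over [0, 1)."""
    edges = np.cumsum(pi)                      # block boundaries on [0, 1]
    blk = np.searchsorted(edges, grid, side="left").clip(max=len(pi) - 1)
    return theta[np.ix_(blk, blk)]

grid = np.linspace(0.0, 1.0, 200, endpoint=False)

# Two hypothetical SBM fits (K = 1 and K = 2) with model-averaging weights:
fits = [
    (np.array([1.0]), np.array([[0.3]]), 0.2),
    (np.array([0.5, 0.5]), np.array([[0.6, 0.1], [0.1, 0.6]]), 0.8),
]

# Model-averaged graphon estimate: weighted sum of block-constant surfaces.
w_hat = sum(w * blockwise_graphon(pi, theta, grid) for pi, theta, w in fits)
print(w_hat.shape)
```

Averaging over K in this way produces a smoother estimate than any single block-constant fit, which is the appeal of the variational Bayes model-averaging scheme.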

## Multiple Priors and Asset Pricing

### Methodology and Computing in Applied Probability 11: 211-229, June 01, 2009

The asset pricing implications of a statistical model consistent with multiple priors, or beliefs about return distributions, are developed. It is shown that quite generally equilibrium differences in mean returns across priors are to be explained in terms of perceived risk differences between these priors. Advances in filtering theory are employed on time series data to filter all the multiple state conditional components of risks and rewards. It is then observed that excess return differentials across priors are broadly consistent with required risk compensations under these priors, though the sharp hypothesis of zero intercept and unit slope is rejected. The filtered results also deliver numerous other interesting statistics. Here we focus on the construction of long horizon return distributions from data on daily returns using a Markov chain approach to incorporate stochasticity in elementary risk characterizations like volatility, skewness and kurtosis.
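
The Markov chain construction mentioned at the end can be sketched generically: discretize daily returns into quantile states, estimate the one-day transition matrix by counting transitions, and obtain the distribution of the state T days ahead from a matrix power. Everything below (the simulated "returns", the five states, the 20-day horizon) is an illustrative assumption, not the paper's filtered specification:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily returns; in practice these come from market data.
returns = rng.normal(0.0, 0.01, size=2500)

# Discretize daily returns into states via quantile bins.
n_states = 5
edges = np.quantile(returns, np.linspace(0, 1, n_states + 1)[1:-1])
states = np.searchsorted(edges, returns)

# Estimate the one-day transition matrix by counting state transitions.
P = np.zeros((n_states, n_states))
for s, t in zip(states[:-1], states[1:]):
    P[s, t] += 1
P /= P.sum(axis=1, keepdims=True)

# Distribution over states T days ahead, starting from today's state.
T = 20
dist_T = np.linalg.matrix_power(P, T)[states[-1]]
print(dist_T)
```

Letting the states carry elementary risk characteristics such as volatility, skewness and kurtosis makes these quantities stochastic over the horizon, which is the role the Markov chain plays in the paper's long-horizon construction.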