## SEARCH

#### Country

##### (see all 262)

- United States 3840
- Germany 1573
- Canada 1078
- United Kingdom 1072

#### Institution

##### (see all 9944)

- The Institute of Statistical Mathematics 213
- University of California 206
- Indian Statistical Institute 182
- McMaster University 119
- University of Washington 109

#### Author

##### (see all 31989)

- Balakrishnan, N. 82
- Nadarajah, Saralees 82
- Akaike, Hirotugu 46
- Grams, Ralph R. 46
- Heyer, H. 44

#### Publication

##### (see all 51)

- Annals of the Institute of Statistical Mathematics 2989
- Journal of Medical Systems 2297
- Metrika 2046
- Statistical Papers 1716
- Statistics and Computing 1253

#### Subject

##### (see all 53)

- Statistics 21447
- Statistics, general 14237
- Statistics for Business/Economics/Mathematical Finance/Insurance 10978
- Probability Theory and Stochastic Processes 5806
- Economic Theory 5098

## CURRENTLY DISPLAYING

Showing 1 to 10 of 21447 matching articles.

## Bayesian model comparison with un-normalised likelihoods

### Statistics and Computing (2017-03-01) 27: 403-422 , March 01, 2017

Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of *biased* weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented; it lends some support to the use of biased estimates, but we advocate caution in applying them.
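
As a toy illustration of the underlying idea (not the paper's algorithm), the sketch below estimates two marginal likelihoods by importance sampling in a conjugate Gaussian model, where the exact evidence is available for comparison. The `noise` argument mimics a random-weight sampler by multiplying each likelihood weight by a mean-one log-normal perturbation; all names here are invented for the sketch.

```python
import math
import random

def norm_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def evidence_is(y, prior_mu, n=100_000, noise=0.0, seed=1):
    """Importance-sampling estimate of the evidence p(y) for the toy model
    y | theta ~ N(theta, 1), theta ~ N(prior_mu, 1), so exactly
    p(y) = N(y; prior_mu, 2).  With noise > 0, each likelihood weight is
    replaced by an unbiased random estimate of itself, mimicking a
    random-weight importance sampler."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        theta = rng.gauss(prior_mu, 1.0)        # draw from the prior
        w = norm_pdf(y, theta, 1.0)             # likelihood weight
        if noise:
            # log-normal with mean 1, so the weight stays unbiased
            w *= rng.lognormvariate(-0.5 * noise ** 2, noise)
        total += w
    return total / n

# Bayes factor of "prior mean 0" vs "prior mean 3" for the observation y = 1.
y = 1.0
bf_exact = norm_pdf(y, 0.0, 2.0) / norm_pdf(y, 3.0, 2.0)
bf_est = evidence_is(y, 0.0) / evidence_is(y, 3.0)
```

Because both marginal likelihoods admit unbiased estimates, the ratio `bf_est` is consistent but not unbiased, which is the kind of difficulty the paper's importance sampling and SMC constructions address.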

## Estimating conception statistics using gestational age information from NHS Numbers for Babies data

### Health Statistics Quarterly (2009-05-01) 41: 21-27 , May 01, 2009

Conception statistics routinely published for England and Wales include pregnancies that result in one or more live- or stillbirths (a maternity) or an abortion. All live births are assumed to be of 38 weeks' gestation, as information on gestation is not collected at birth registration. For the first time, gestational age information from the National Health Service (NHS) Numbers for Babies (NN4B) data has been used to re-estimate conception statistics for 2005. This shows that 72 per cent of conceptions leading to a maternity in fact have a gestation period that differs from 38 weeks, and most of these fall at either 37 or 39 weeks. The age-specific conception rates using this revised method are not significantly different from those produced using the current method.
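
The revision amounts to dating each conception from the recorded gestational age rather than a flat 38 weeks. A minimal sketch of that adjustment, using birth date minus gestational age as a crude proxy for the conception date (the official methodology is more refined):

```python
from datetime import date, timedelta

def conception_date(birth: date, gestation_weeks: int) -> date:
    """Crude proxy: date the conception gestation_weeks before the birth."""
    return birth - timedelta(weeks=gestation_weeks)

birth = date(2005, 1, 15)
assumed = conception_date(birth, 38)   # old method: flat 38-week assumption
revised = conception_date(birth, 41)   # revised method: recorded gestation
# A 41-week gestation moves the estimated conception three weeks earlier,
# which can shift it into a different quarter or reference year.
```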

## Sequential Monte Carlo for counting vertex covers in general graphs

### Statistics and Computing (2016-05-01) 26: 591-607 , May 01, 2016

In this paper we describe a sequential importance sampling (SIS) procedure for counting the number of vertex covers in general graphs. The optimal SIS proposal distribution is the uniform distribution over a suitably restricted set, but it is not implementable. We consider two proposal distributions as approximations to the optimal. Both proposals are based on randomization techniques. The first randomization is the classic probability model of random graphs, and in fact the resulting SIS algorithm shows polynomial complexity for random graphs. The second randomization introduces a probabilistic relaxation technique that uses dynamic programming. The numerical experiments show that the resulting SIS algorithm enjoys excellent practical performance in comparison with existing methods. In particular, the method is compared with *cachet*—an exact model counter—and the state-of-the-art *SampleSearch*, which is based on belief networks and importance sampling.
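
A minimal version of the SIS idea (with a naive uniform proposal over the locally feasible choices, not the paper's randomised proposals) can be sketched as follows. Processing vertices in a fixed order and forcing a vertex into the cover whenever an earlier neighbour was left out makes every completed sample a valid cover, and the accumulated weight is then an unbiased estimate of the count.

```python
import random

def sis_vertex_covers(n, edges, samples=20_000, seed=0):
    """Unbiased SIS estimate of the number of vertex covers of a graph on
    vertices 0..n-1.  Vertices are decided in order; a vertex is forced
    into the cover when an already-decided neighbour was left out."""
    rng = random.Random(seed)
    earlier = [[] for _ in range(n)]        # neighbours decided before v
    for u, v in edges:
        earlier[max(u, v)].append(min(u, v))
    total = 0.0
    for _ in range(samples):
        in_cover = [False] * n
        weight = 1.0
        for v in range(n):
            if any(not in_cover[u] for u in earlier[v]):
                in_cover[v] = True          # forced: an uncovered edge ends here
            else:
                weight *= 2.0               # two feasible choices: in or out
                in_cover[v] = rng.random() < 0.5
        total += weight                     # weight = 2^(number of free choices)
    return total / samples

# Triangle: exactly 4 vertex covers ({0,1}, {0,2}, {1,2}, {0,1,2}).
est = sis_vertex_covers(3, [(0, 1), (0, 2), (1, 2)])
```

Each sampled assignment is reached with probability equal to the inverse of its weight, so the average weight is exactly the number of vertex covers in expectation; better proposals, like the paper's, reduce the variance of that average.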

## Editorial

### Statistical Methods and Applications (2007-06-01) 16: 5 , June 01, 2007

## The MAP test for multimodality

### Journal of Classification (1994-03-01) 11: 5-36 , March 01, 1994

We introduce a test for detecting multimodality in distributions based on minimal constrained spanning trees. We define a Minimal Ascending Path Spanning Tree (MAPST) on a set of points as a spanning tree that has the minimal possible sum of lengths of links with the constraint that, starting from any link, the lengths of the links are non-increasing towards a root node. We similarly define MAPSTs with more than one root. We present some algorithms for finding such trees. Based on these trees, we devise a test for multimodality, called the MAP Test (for Minimal Ascending Path). Using simulations, we estimate percentage points of the MAP statistic and assess the power of the test. Finally, we illustrate the use of MAPSTs for determining the number of modes in a distribution of positions of galaxies on photographic plates from a rich galaxy cluster.
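
The paper's MAPST construction is more involved, but the constraint itself is easy to state in code. The sketch below (assuming Euclidean points, and using a plain Prim minimum spanning tree rather than the constrained tree) checks whether link lengths are non-increasing towards the root, i.e. whether an unconstrained MST already happens to satisfy the ascending-path property.

```python
import math

def mst_prim(points):
    """Prim's algorithm on Euclidean points, grown from point 0;
    returns the tree as a list of (parent, child) links."""
    n = len(points)
    in_tree = {0}
    links = []
    while len(in_tree) < n:
        p, c = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: math.dist(points[e[0]], points[e[1]]))
        links.append((p, c))
        in_tree.add(c)
    return links

def is_ascending(points, links):
    """MAPST property: link lengths are non-increasing along every path
    towards the root (equivalently, non-decreasing away from it)."""
    parent = {c: p for p, c in links}
    length = {c: math.dist(points[c], points[p]) for p, c in links}
    # For every node whose parent is not the root, the link above it
    # (closer to the root) must not be longer than its own link.
    return all(length[v] >= length[parent[v]] for v in parent if parent[v] in parent)

pts = [(0, 0), (1, 0), (3, 0), (6, 0)]          # links lengthen away from the root
pts2 = [(0, 0), (5, 0), (6, 0), (7, 0)]         # long first link violates the property
```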

## A Cautionary Note on Likelihood Ratio Tests in Mixture Models

### Annals of the Institute of Statistical Mathematics (2000-09-01) 52: 481-487 , September 01, 2000

We show that iterative methods for maximizing the likelihood in a mixture of exponentials model depend strongly on their particular implementation. Different starting strategies and stopping rules yield completely different estimators of the parameters. This is demonstrated for the likelihood ratio test of homogeneity against two-component exponential mixtures, when the test statistic is calculated by the EM algorithm.
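
The sensitivity is easy to reproduce. Below is a bare-bones EM for a two-component exponential mixture (a generic textbook implementation, not the paper's code) in which the starting rates `lam1`, `lam2` are free parameters of the fit; different starts, or a different iteration budget in place of a principled stopping rule, can leave the fit at different stationary points.

```python
import math
import random

def em_exp_mixture(data, lam1, lam2, pi=0.5, iters=500):
    """EM for the mixture pi*Exp(lam1) + (1-pi)*Exp(lam2).
    Returns (pi, lam1, lam2, log-likelihood).  The answer can depend on
    the starting values and on the iteration budget."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation.
        r = [pi * lam1 * math.exp(-lam1 * x)
             / (pi * lam1 * math.exp(-lam1 * x)
                + (1 - pi) * lam2 * math.exp(-lam2 * x))
             for x in data]
        # M-step: mixing proportion and weighted rate updates.
        pi = sum(r) / len(data)
        lam1 = sum(r) / sum(ri * x for ri, x in zip(r, data))
        lam2 = sum(1 - ri for ri in r) / sum((1 - ri) * x for ri, x in zip(r, data))
    ll = sum(math.log(pi * lam1 * math.exp(-lam1 * x)
                      + (1 - pi) * lam2 * math.exp(-lam2 * x)) for x in data)
    return pi, lam1, lam2, ll

random.seed(0)
data = [random.expovariate(1.0) for _ in range(200)]      # homogeneous sample
fit_a = em_exp_mixture(data, lam1=0.5, lam2=2.0)
fit_b = em_exp_mixture(data, lam1=0.1, lam2=10.0)
# A likelihood-ratio statistic against homogeneity computed from such a
# fit inherits whatever stationary point the chosen start leads to.
```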

## Local expectations of the population spectral distribution of a high-dimensional covariance matrix

### Statistical Papers (2014-05-01) 55: 563-573 , May 01, 2014

This paper discusses the relationship between the population spectral distribution and the limit of the empirical spectral distribution in high-dimensional situations. When the support of the limiting spectral distribution is split into several intervals, the population distribution gains a meaningful division, and general functional expectations of each part of the division, referred to as local expectations, can be formulated as contour integrals around these intervals. Based on this, we present consistent estimators of the local expectations and prove a central limit theorem for them. The results are then used to analyze an estimator of the population spectral distribution proposed in recent literature.
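
For orientation, in the best-known special case (identity population covariance, with dimension-to-sample-size ratio p/n tending to y in (0, 1]) the limiting empirical spectral distribution is the Marchenko–Pastur law, whose density is supported on a single interval:

```latex
f_y(x) = \frac{1}{2\pi x y \sigma^2}\,\sqrt{(b_y - x)(x - a_y)},
\qquad a_y = \sigma^2\bigl(1-\sqrt{y}\bigr)^2,
\quad  b_y = \sigma^2\bigl(1+\sqrt{y}\bigr)^2,
```

for x in [a_y, b_y]. When the population spectrum instead places mass at several well-separated values, the limiting support can split into several such intervals, which is the setting the paper's contour integrals exploit.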

## Statistics of random processes I: General theory

### Metrika (1983-12-01) 30: 100 , December 01, 1983

## Goodness-of-fit tests for semiparametric and parametric hypotheses based on the probability weighted empirical characteristic function

### Statistical Papers (2016-12-01) 57: 957-976 , December 01, 2016

We investigate the finite-sample properties of certain procedures which employ the novel notion of the probability weighted empirical characteristic function. The procedures considered are: (1) testing for symmetry in regression, (2) testing for multivariate normality with independent observations, and (3) testing for multivariate normality of random effects in mixed models. Along with the new tests, alternative methods based on the ordinary empirical characteristic function as well as other well-known procedures are implemented for the purpose of comparison.
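
The ordinary empirical characteristic function that the comparison methods build on is essentially one line of code; a small sketch follows (the probability-weighted variant is the paper's contribution and is not reproduced here).

```python
import cmath
import math
import random

def ecf(data, t):
    """Ordinary empirical characteristic function: the sample mean of exp(i*t*x)."""
    return sum(cmath.exp(1j * t * x) for x in data) / len(data)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]
val = ecf(data, 1.0)
# For N(0, 1) the true characteristic function is exp(-t**2 / 2), and any
# distribution symmetric about zero has a real characteristic function --
# a vanishing imaginary part is what ECF-based symmetry tests exploit.
```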

## Web-based Multi-center Data Management System for Clinical Neuroscience Research

### Journal of Medical Systems (2010-02-01) 34: 25-33 , February 01, 2010

Modern clinical research often involves multi-center studies, large and heterogeneous data flux, and intensive demands of collaboration, security and quality assurance. In the absence of suitable commercial or academic management systems, we designed an open-source system to meet these requirements. Based on the Apache-PHP-MySQL platform on a Linux server, the system allows multiple users to access the database from any location on the internet using a web browser, and requires no specialized computer skills. A multi-level security system is implemented to safeguard the protected health information and allow partial or full access to the data by individual or class privilege. The system stores and manipulates various types of data including images, scanned documents, laboratory data and clinical ratings. Built-in functionality supports search, quality control, analytic data operations, visit scheduling and visit reminders. This approach offers a solution to a growing need for management of large multi-center clinical studies.
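
The class-privilege idea can be sketched in a few lines. This is a hypothetical illustration only: the role names and levels below are invented, and the actual system implements its access control in PHP against MySQL.

```python
# Hypothetical privilege levels; the real system defines its own roles.
PRIVILEGES = {"admin": 3, "investigator": 2, "coordinator": 1, "guest": 0}

def can_access(role: str, required_level: int) -> bool:
    """Grant access when the role's privilege level meets the requirement;
    unknown roles are denied.  Partial vs. full access by class privilege
    reduces to choosing the required level per data item."""
    return PRIVILEGES.get(role, -1) >= required_level
```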