Showing 1 to 10 of 1259 matching Articles
## Bayesian model comparison with un-normalised likelihoods

### Statistics and Computing 27: 403-422, March 01, 2017

Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of *biased* weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented, with some support for the use of biased estimates; we nonetheless advocate caution in their use.
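The random-weight idea can be sketched on a toy conjugate model where the evidence is known in closed form; the model, the log-normal noise construction, and all names below are our illustrative assumptions, not the paper's Markov random field setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Toy model: y | theta ~ N(theta, 1), prior theta ~ N(0, tau2).
# The evidence Z = N(y; 0, 1 + tau2) is known, so the estimator can be checked.
y = 1.0

def evidence_hat(tau2, n=100_000, noise_sd=0.5):
    theta = rng.normal(0.0, np.sqrt(tau2), size=n)   # sample from the prior
    like = normal_pdf(y, theta, 1.0)                 # exact likelihood
    # Random-weight trick: replace each weight by a strictly positive,
    # unbiased noisy estimate (log-normal multiplier with mean 1).
    v = np.exp(rng.normal(-0.5 * noise_sd**2, noise_sd, size=n))
    return np.mean(like * v)

z1, z0 = evidence_hat(1.0), evidence_hat(4.0)
bf_hat = z1 / z0                                     # estimated Bayes factor
bf_true = normal_pdf(y, 0.0, 2.0) / normal_pdf(y, 0.0, 5.0)
```

Because each noisy weight is an unbiased estimate of the true importance weight, the evidence estimate stays unbiased, and the Bayes factor estimate stays consistent, even though no weight is ever evaluated exactly.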

## Sequential Monte Carlo for counting vertex covers in general graphs

### Statistics and Computing 26: 591-607, May 01, 2016

In this paper we describe a sequential importance sampling (SIS) procedure for counting the number of vertex covers in general graphs. The optimal SIS proposal distribution is the uniform distribution over a suitably restricted set, but it is not implementable. We consider two proposal distributions as approximations to the optimal, both based on randomization techniques. The first randomization is the classic probability model of random graphs, and the resulting SIS algorithm in fact shows polynomial complexity for random graphs. The second randomization introduces a probabilistic relaxation technique that uses dynamic programming. Numerical experiments show that the resulting SIS algorithm enjoys excellent practical performance in comparison with existing methods. In particular, the method is compared with *cachet*, an exact model counter, and the state-of-the-art *SampleSearch*, which is based on belief networks and importance sampling.
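The forced-choice structure of SIS for vertex covers can be sketched as follows; this minimal version uses a naive uniform proposal over valid one-step extensions (not the paper's randomized proposals) and checks the unbiased estimate against brute-force enumeration on a small path graph:

```python
import itertools
import random

# Small example graph: the path 0-1-2-3.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
# For each vertex, its neighbours with a smaller label (already decided).
earlier = {v: [u for u, w in [(min(e), max(e)) for e in edges] if w == v]
           for v in range(n)}

def is_cover(s):
    return all(u in s or v in s for u, v in edges)

# Brute-force count, for checking the estimator.
exact = sum(is_cover(set(c)) for k in range(n + 1)
            for c in itertools.combinations(range(n), k))

def sis_estimate(samples=20_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        cover, weight = set(), 1.0
        for v in range(n):
            if any(u not in cover for u in earlier[v]):
                cover.add(v)        # forced: an incident edge is still uncovered
            else:
                weight *= 2.0       # free choice, proposal probability 1/2
                if rng.random() < 0.5:
                    cover.add(v)
        total += weight             # each weight is an unbiased count estimate
    return total / samples
```

Every sample path produces a valid cover, and every cover is reachable, so the average weight is an unbiased estimator of the number of vertex covers.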

## Estimating non-simplified vine copulas using penalized splines

### Statistics and Computing: 1-23, February 27, 2017

Vine copulas (or pair-copula constructions) have become an important tool for high-dimensional dependence modeling. Typically, so-called simplified vine copula models are estimated where bivariate conditional copulas are approximated by bivariate unconditional copulas. We present the first nonparametric estimator of a non-simplified vine copula that allows for varying conditional copulas using penalized hierarchical B-splines. Throughout the vine copula, we test for the simplifying assumption in each edge, establishing a data-driven non-simplified vine copula estimator. To overcome the curse of dimensionality, we approximate conditional copulas with more than one conditioning argument by a conditional copula with the first principal component as conditioning argument. An extensive simulation study is conducted, showing a substantial improvement in the out-of-sample Kullback–Leibler divergence if the null hypothesis of a simplified vine copula can be rejected. We apply our method to the famous uranium data and present a classification of an eye state data set, demonstrating the potential benefit that can be achieved when conditional copulas are modeled.
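A minimal sketch of what "non-simplified" means: a conditional Gaussian copula (our stand-in for the paper's spline estimator) whose correlation varies with the conditioning variable, so the simplifying assumption fails:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
m = 50_000

# Conditioning variable u2, and a conditional correlation that depends on it;
# a simplified vine would force rho to be constant in u2.
u2 = rng.uniform(size=m)
rho = np.tanh(2.0 * u2 - 1.0)

# Draw (z1, z3) | u2 from a Gaussian copula with correlation rho(u2).
e1, e2 = rng.standard_normal(m), rng.standard_normal(m)
z1 = e1
z3 = rho * e1 + np.sqrt(1.0 - rho**2) * e2

# Map to the copula (uniform) scale via the standard normal CDF.
phi = np.vectorize(lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))
u1, u3 = phi(z1), phi(z3)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# The dependence between u1 and u3 changes sign with u2.
low, high = u2 < 0.2, u2 > 0.8
c_low, c_high = corr(u1[low], u3[low]), corr(u1[high], u3[high])
```

A test of the simplifying assumption on such data should reject, which is exactly the situation in which the paper's non-simplified estimator pays off.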

## Implied distributions in multiple change point problems

### Statistics and Computing 22: 981-993, July 01, 2012

A method for efficiently calculating exact marginal, conditional and joint distributions for change points defined by general finite state Hidden Markov Models is proposed. The distributions are not subject to any approximation or sampling error once parameters of the model have been estimated. It is shown that, in contrast to sampling methods, very little computation is needed. The method provides probabilities associated with change points within an interval, as well as at specific points.
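The exact change-point probabilities can be sketched for a small two-state Gaussian HMM: forward-backward recursions give the pairwise smoothing distribution, from which P(change between t and t+1 | y) follows with no sampling error; all parameter values below are illustrative assumptions:

```python
import itertools
import numpy as np

# Two-state Gaussian HMM with different state means.
A = np.array([[0.95, 0.05],
              [0.10, 0.90]])                 # transition matrix
pi0 = np.array([0.5, 0.5])
means, sd = np.array([0.0, 3.0]), 1.0
y = np.array([0.1, -0.2, 0.3, 2.8, 3.1, 2.9])
T, K = len(y), 2

def emis(t):
    return np.exp(-0.5 * ((y[t] - means) / sd) ** 2)   # shared constant cancels

# Forward-backward recursions (unnormalised).
alpha = np.zeros((T, K)); beta = np.ones((T, K))
alpha[0] = pi0 * emis(0)
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * emis(t)
for t in range(T - 2, -1, -1):
    beta[t] = A @ (emis(t + 1) * beta[t + 1])
Z = alpha[-1].sum()

# P(change point between t and t+1 | y): mass on X_t != X_{t+1}.
p_change = np.array([
    sum(alpha[t, i] * A[i, j] * emis(t + 1)[j] * beta[t + 1][j]
        for i in range(K) for j in range(K) if i != j) / Z
    for t in range(T - 1)])

# Brute-force check: enumerate all K**T state paths.
p_brute = np.zeros(T - 1); Zb = 0.0
for path in itertools.product(range(K), repeat=T):
    w = pi0[path[0]] * emis(0)[path[0]]
    for t in range(1, T):
        w *= A[path[t - 1], path[t]] * emis(t)[path[t]]
    Zb += w
    for t in range(T - 1):
        if path[t] != path[t + 1]:
            p_brute[t] += w
p_brute /= Zb
```

The recursions cost O(TK^2), versus O(K^T) for enumeration, which is the "very little computation" the abstract refers to.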

## Multilevel particle filters: normalizing constant estimation

### Statistics and Computing: 1-14, November 07, 2016

In this article, we introduce two new estimates of the normalizing constant (or marginal likelihood) for partially observed diffusion (POD) processes with discrete observations. One estimate is biased but non-negative, and the other is unbiased but not almost surely non-negative. Our method uses the multilevel particle filter of Jasra et al. (Multilevel particle filter, arXiv:1510.04977, 2015). We show that, under assumptions, for Euler-discretized PODs and a given $$\varepsilon >0$$, obtaining a mean square error (MSE) of $${\mathcal {O}}(\varepsilon ^2)$$ requires work of $${\mathcal {O}}(\varepsilon ^{-2.5})$$ for our new estimates, versus $${\mathcal {O}}(\varepsilon ^{-3})$$ for a standard particle filter. Our theoretical results are supported by numerical simulations.
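A sketch of normalizing-constant estimation with a single-level bootstrap particle filter (not the multilevel construction of the paper), on a linear-Gaussian model where the Kalman filter gives the exact marginal likelihood for comparison:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-Gaussian state space model: x_t = a*x_{t-1} + N(0, q), y_t = x_t + N(0, r).
a, q, r, T = 0.8, 1.0, 1.0, 25
x = rng.normal(0.0, 1.0)                    # initial state ~ N(0, 1)
ys = []
for _ in range(T):
    x = a * x + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r)))

def kalman_loglik():
    m, p, ll = 0.0, 1.0, 0.0
    for y in ys:
        m, p = a * m, a * a * p + q         # predict
        s = p + r                           # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (y - m) ** 2 / s)
        k = p / s                           # update
        m, p = m + k * (y - m), (1 - k) * p
    return ll

def pf_loglik(n=5000):
    parts = rng.standard_normal(n)          # draw from the N(0, 1) initial law
    ll = 0.0
    for y in ys:
        parts = a * parts + rng.normal(0, np.sqrt(q), n)   # propagate
        w = np.exp(-0.5 * (y - parts) ** 2 / r) / np.sqrt(2 * np.pi * r)
        ll += np.log(w.mean())              # unbiased increment of Z
        parts = parts[rng.choice(n, n, p=w / w.sum())]     # multinomial resample
    return ll
```

The product of the per-step mean weights is an unbiased estimate of the marginal likelihood; its logarithm, as returned here, is biased but consistent.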

## Tractable Bayesian learning of tree belief networks

### Statistics and Computing 16: 77-92, January 01, 2006

In this paper we present *decomposable priors*, a family of priors over structure and parameters of tree belief nets for which Bayesian learning with complete observations is tractable, in the sense that the posterior is also decomposable and can be completely determined analytically in polynomial time. Our result is the first where computing the normalization constant and averaging over a super-exponential number of graph structures can be performed in polynomial time. This follows from two main results: First, we show that factored distributions over spanning trees in a graph can be integrated in closed form. Second, we examine priors over tree parameters and show that a set of assumptions similar to Heckerman, Geiger and Chickering (1995) constrain the tree parameter priors to be a compactly parametrized product of Dirichlet distributions. Besides allowing for exact Bayesian learning, these results permit us to formulate a new class of tractable latent variable models in which the likelihood of a data point is computed through an ensemble average over tree structures.
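The closed-form integration over the super-exponentially many spanning trees rests on the weighted matrix-tree theorem, which can be sketched and verified by brute force on a small weighted graph:

```python
import itertools
import numpy as np

# Weighted complete graph on 4 vertices; w[i, j] is the weight of edge {i, j}.
n = 4
w = np.array([[0.0, 1.0, 2.0, 0.5],
              [1.0, 0.0, 3.0, 1.5],
              [2.0, 3.0, 0.0, 2.5],
              [0.5, 1.5, 2.5, 0.0]])

# Weighted matrix-tree theorem: the sum over spanning trees of the product of
# edge weights equals any cofactor of the weighted graph Laplacian.
L = np.diag(w.sum(axis=1)) - w
mtt = np.linalg.det(L[1:, 1:])              # delete row/column 0

# Brute force: enumerate all (n-1)-edge subsets and keep the spanning trees.
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]

def spans(tree):
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for u, v in tree:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # cycle, so not a tree
        parent[ru] = rv
    return True                             # n-1 acyclic edges span n vertices

brute = sum(np.prod([w[u, v] for u, v in t])
            for t in itertools.combinations(edges, n - 1) if spans(t))
```

With factored edge weights, this determinant is exactly the polynomial-time normalization over tree structures that makes the posterior decomposable.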

## Data skeletons: simultaneous estimation of multiple quantiles for massive streaming datasets with applications to density estimation

### Statistics and Computing 17: 311-321, December 01, 2007

We consider the problem of density estimation when the data arrive as a continuous stream with no fixed length. In this setting, implementations of the usual methods of density estimation, such as kernel density estimation, are problematic. We propose a method of density estimation for massive datasets based upon taking the derivative of a smooth curve fit through a set of quantile estimates. To achieve this, we propose a low-storage, single-pass, sequential method for simultaneous estimation of multiple quantiles; these quantile estimates form the basis of the density estimator. For comparison, we also consider a sequential kernel density estimator. The proposed methods are shown through simulation study to perform well and to have several distinct advantages over existing methods.
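A single-pass, O(1)-storage tracker for several quantiles at once can be sketched with a Robbins-Monro update (a simpler stand-in for the paper's data-skeleton method); the step-size constant is an illustrative tuning assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

probs = np.array([0.25, 0.50, 0.75])   # target quantile levels
q = np.zeros(len(probs))               # running quantile estimates
c = 2.0                                # step-size constant (tuning assumption)

for t in range(1, 200_001):
    x = rng.standard_normal()          # one element of the stream
    # Move each estimate up when x lands above it, down when below, so that
    # q_i settles where P(X <= q_i) = probs[i].
    q += (c / t) * (probs - (x <= q))

q.sort()                               # enforce monotonicity across quantiles
```

For a N(0, 1) stream the estimates converge to the quartiles (-0.674, 0, 0.674); differentiating a smooth curve fit through many such quantile estimates then yields the density estimate described in the abstract.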

## Parametric bootstrap goodness-of-fit testing for Wehrly–Johnson bivariate circular distributions

### Statistics and Computing 26: 1307-1317, November 01, 2016

The Wehrly–Johnson family of bivariate circular distributions is by far the most general one currently available for modelling data on the torus. It allows complete freedom in the specification of the marginal circular densities as well as the binding circular density which regulates any dependence that might exist between them. We propose a parametric bootstrap approach for testing the goodness-of-fit of Wehrly–Johnson distributions when the forms of their marginal and binding densities are assumed known. The approach admits the use of any test for toroidal uniformity, and we consider versions of it incorporating three such tests. Simulation is used to illustrate the operating characteristics of the approach when the underlying distribution is assumed to be bivariate wrapped Cauchy. An analysis of wind direction data recorded at a Texan weather station illustrates the use of the proposed goodness-of-fit testing procedure.
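The parametric bootstrap loop can be sketched for a single wrapped Cauchy marginal, using moment-based fitting and a second-trigonometric-moment discrepancy as the test statistic; these choices are our illustrative assumptions, not the toroidal uniformity tests considered in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Wrapped Cauchy on the circle: theta = (mu + gamma * Cauchy) mod 2*pi,
# with mean resultant length rho = exp(-gamma).
def rwc(mu, rho, size):
    c = mu + (-np.log(rho)) * np.tan(np.pi * (rng.uniform(size=size) - 0.5))
    return np.mod(c, 2 * np.pi)

def fit_wc(theta):
    m1 = np.mean(np.exp(1j * theta))   # first trigonometric moment
    return np.angle(m1), np.abs(m1)    # moment estimates of mu, rho

def stat(theta, mu, rho):
    # Under a wrapped Cauchy, E[exp(2i*theta)] = rho**2 * exp(2i*mu); use the
    # discrepancy in the second trigonometric moment as the test statistic.
    return np.abs(np.mean(np.exp(2j * theta)) - rho**2 * np.exp(2j * mu))

def boot_pvalue(theta, B=200):
    mu, rho = fit_wc(theta)
    t_obs = stat(theta, mu, rho)
    t_boot = []
    for _ in range(B):
        tb = rwc(mu, rho, len(theta))  # simulate from the fitted model
        mub, rhob = fit_wc(tb)         # re-fit on each bootstrap sample
        t_boot.append(stat(tb, mub, rhob))
    return (1 + sum(t >= t_obs for t in t_boot)) / (B + 1)

data = rwc(mu=1.0, rho=0.6, size=400)  # data drawn from the null model
p = boot_pvalue(data)
```

Re-estimating the parameters on every bootstrap replicate, as here, is what makes the resulting p-value account for estimation uncertainty in the fitted null model.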

## An alternative to model selection in ordinary regression

### Statistics and Computing 13: 67-80, February 01, 2003

The weaknesses of established model selection procedures based on hypothesis testing and similar criteria are discussed, and an alternative based on synthetic (composite) estimation is proposed. It is developed for the problem of prediction in ordinary regression, and its properties are explored by simulation for simple regression. Extensions to a general setting are described and an example with multiple regression is analysed. Arguments are presented against using a selected model for any inference.
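The spirit of composite estimation, combining rather than selecting models, can be sketched for prediction; the cross-validated convex combination below is our illustration, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simple regression with a weak slope, where hard selection is risky.
n = 80
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.25 * x + rng.normal(0, 1, n)

X_full = np.column_stack([np.ones(n), x])     # intercept + slope
X_sub = np.ones((n, 1))                       # intercept-only submodel

def cv_mse(weight, folds=5):
    # Cross-validated prediction error of the composite predictor
    # weight * full-model fit + (1 - weight) * submodel fit.
    idx = np.arange(n); err = 0.0
    for f in range(folds):
        test = idx % folds == f; train = ~test
        bf, *_ = np.linalg.lstsq(X_full[train], y[train], rcond=None)
        bs, *_ = np.linalg.lstsq(X_sub[train], y[train], rcond=None)
        pred = weight * X_full[test] @ bf + (1 - weight) * X_sub[test] @ bs
        err += np.sum((y[test] - pred) ** 2)
    return err / n

grid = np.linspace(0.0, 1.0, 21)              # endpoints are the pure models
scores = np.array([cv_mse(w) for w in grid])
w_star = grid[scores.argmin()]
```

Because the grid contains both pure models as endpoints, the composite can never do worse than the better of "select the full model" and "select the submodel" in cross-validated error, while typically landing strictly between them.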

## Adaptive thresholding of sequences with locally variable strength

### Statistics and Computing 19: 57-71, March 01, 2009

This paper addresses, via thresholding, the estimation of a possibly sparse signal observed subject to Gaussian noise. Conceptually, the optimal threshold for such problems depends upon the strength of the underlying signal. We propose two new methods that aim to adapt to potential local variation in this signal strength and select a variable threshold accordingly. Our methods are based upon an empirical Bayes approach with a smoothly variable mixing weight chosen via either spline- or kernel-based marginal maximum likelihood regression. We demonstrate the excellent performance of our methods in both one- and two-dimensional estimation when compared with various alternative techniques. In addition, we consider the application to wavelet denoising, where reconstruction quality is significantly improved by local adaptivity.
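The benefit of a locally variable threshold can be sketched with known regions of differing signal strength; the regions, thresholds, and signal values below are illustrative assumptions (the paper instead estimates a smoothly varying mixing weight by marginal maximum likelihood):

```python
import numpy as np

rng = np.random.default_rng(0)

def soft(y, t):
    # Soft thresholding: shrink towards zero by t, kill anything below t.
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# A signal whose sparsity varies along the sequence: a dense, strong first
# half and a pure-noise second half; observations are signal + N(0, 1) noise.
n = 2000
theta = np.zeros(2 * n)
theta[:n][rng.uniform(size=n) < 0.5] = 5.0
y = theta + rng.standard_normal(2 * n)

# One global threshold versus thresholds adapted to local signal strength.
global_est = soft(y, 2.0)
local_est = np.concatenate([soft(y[:n], 1.0), soft(y[n:], 3.0)])

mse_global = np.mean((global_est - theta) ** 2)
mse_local = np.mean((local_est - theta) ** 2)
```

A low threshold where signal is plentiful avoids over-shrinking true coefficients, and a high threshold in the noise-only region suppresses spurious spikes, so the locally adaptive rule beats any single compromise threshold.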