## Bayesian model comparison with un-normalised likelihoods

### Statistics and Computing (2017-03-01) 27: 403–422

Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of *biased* weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented. Some support for the use of biased estimates is given, but we advocate caution in their use.
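
The basic difficulty can be seen in miniature by estimating a normalising constant by importance sampling. The sketch below is a toy illustration of that building block (the target, proposal, and sample size are invented for the example), not the paper's random-weight SMC method.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_unnorm(x):
    # Unnormalised density exp(-x**2 / 2); its normalising constant
    # is sqrt(2 * pi), which we pretend is unknown.
    return -0.5 * x**2

# Evidence-style estimate Z_hat = (1/n) * sum f(x_i) / q(x_i) under a
# wider Gaussian proposal q = N(0, s**2).
s, n = 2.0, 100_000
x = rng.normal(0.0, s, size=n)
log_q = -0.5 * (x / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
z_hat = np.exp(log_unnorm(x) - log_q).mean()
print(z_hat)   # close to sqrt(2 * pi) ~ 2.5066
```

In the setting of the paper even these weights cannot be computed exactly; the random-weight idea replaces each weight with a simulation-based estimate, unbiased or deliberately biased.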

## Hybrid k-Means: Combining Regression-Wise and Centroid-Based Criteria for QSAR

### Selected Contributions in Data Analysis and Classification (2007-01-01): 225–233

This paper further extends the ‘kernel’-based approach to clustering proposed by E. Diday in the early 1970s. According to this approach, a cluster’s centroid can be represented by the parameters of any analytical model, such as a linear regression equation, built over the cluster. We address the problem of producing regression-wise clusters that are also separated in the input variable space by building a hybrid clustering criterion that combines the regression-wise clustering criterion with the conventional centroid-based one.
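
A minimal sketch of such a hybrid criterion (a hypothetical implementation, with `alpha` weighting the centroid and regression terms; not the chapter's exact algorithm):

```python
import numpy as np

def hybrid_kmeans(X, y, k=2, alpha=0.5, iters=50, seed=0):
    """Alternate between (a) fitting a centroid and a linear regression
    of y on X within each cluster and (b) reassigning each point to the
    cluster minimising alpha * squared distance to the centroid plus
    (1 - alpha) * squared regression residual."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    labels = rng.integers(k, size=n)
    Xb = np.column_stack([np.ones(n), X])          # add an intercept
    for _ in range(iters):
        centroids = np.zeros((k, d))
        betas = np.zeros((k, d + 1))
        for j in range(k):
            idx = labels == j
            if idx.sum() < d + 1:                  # guard degenerate clusters
                idx = np.ones(n, dtype=bool)
            centroids[j] = X[idx].mean(axis=0)
            betas[j], *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
        dist = ((X[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
        resid = (y[:, None] - Xb @ betas.T) ** 2
        new = np.argmin(alpha * dist + (1 - alpha) * resid, axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels, centroids, betas

# Two groups with the same slope but different intercepts: the hybrid
# criterion recovers clusters coherent in both X-space and fit.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 0.5, (50, 1)), rng.normal(3, 0.5, (50, 1))])
y = np.concatenate([2 * X[:50, 0] + 10, 2 * X[50:, 0] - 10])
labels, _, _ = hybrid_kmeans(X, y)
```

With `alpha = 1` this reduces to ordinary k-means; with `alpha = 0` it is purely regression-wise clustering, which the chapter notes need not be separated in the input space.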

## Gröbner Basis Methods in Mixture Experiments and Generalisations

### Optimum Design 2000 (2001-01-01) 51: 33–44

The theory of mixture designs has a considerable history. We address the important issue of analysing such experiments, bearing in mind the algebraic interpretation of the structural restriction Σ*x*_{i} = 1. We present an approach for rewriting models for mixture experiments, based on constructing homogeneous orthogonal polynomials using Gröbner bases. Examples illustrating the approach are given.
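
As a small illustration of this algebraic viewpoint (using sympy's Gröbner-basis routines; this is not the chapter's construction of homogeneous orthogonal polynomials), a monomial can be rewritten to its normal form modulo the ideal generated by the mixture constraint:

```python
from sympy import symbols, reduced, expand

x1, x2, x3 = symbols('x1 x2 x3')
g = x1 + x2 + x3 - 1      # the structural restriction sum x_i = 1

# Normal form of x1**2 modulo the ideal <g> under lex order x1 > x2 > x3:
# the division effectively rewrites x1 as 1 - x2 - x3.
_, r = reduced(x1**2, [g], x1, x2, x3, order='lex')
assert expand(r - (1 - x2 - x3) ** 2) == 0
print(r)
```

Distinct-looking model terms can coincide modulo the constraint; this aliasing is what motivates rewriting mixture models in a canonical form.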

## Completing the Ecological Jigsaw

### Modeling Demographic Processes In Marked Populations (2009-01-01) 3: 513–539

A challenge for integrated population methods is to examine the extent to which different surveys that measure different demographic features for a given species are compatible. Do the different pieces of the jigsaw fit together? One convenient way of proceeding is to generate a likelihood for census data using the Kalman filter, which is then suitably combined with other likelihoods that might arise from independent studies of mortality, fecundity, and so forth. The combined likelihood may then be used for inference. Typically the underlying model for the census data is a state-space model, and capture–recapture methods of various kinds are used to construct the additional likelihoods. In this paper we provide a brief review of the approach; we present a new way to start the Kalman filter, designed specifically for ecological processes; we investigate the effect of breakdown of the independence assumption; we show how the Kalman filter may be used to incorporate density dependence; and we consider the effect of introducing heterogeneity in the state-space model.
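
The census-likelihood step can be sketched with a scalar Kalman filter that returns a log-likelihood ready to be added to, say, a ring-recovery log-likelihood. This is a generic illustration under an invented scalar model; the paper's ecological initialisation of the filter is not shown.

```python
import numpy as np

def kalman_loglik(y, phi, q, r, m0, p0):
    """Log-likelihood of a census series y under a scalar state-space
    model: state N_t = phi * N_{t-1} + eta_t (variance q), observation
    y_t = N_t + eps_t (variance r); (m0, p0) initialise the filter."""
    m, p, ll = m0, p0, 0.0
    for yt in y:
        m, p = phi * m, phi * p * phi + q            # predict
        s = p + r                                    # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (yt - m) ** 2 / s)
        gain = p / s                                 # Kalman gain
        m, p = m + gain * (yt - m), (1 - gain) * p   # update
    return ll

# Example: evaluate a short series of log census counts under modest growth.
y = np.array([4.6, 4.7, 4.9, 5.0, 5.2])
print(kalman_loglik(y, phi=1.03, q=0.01, r=0.05, m0=4.6, p0=1.0))
```

Because the filter returns a log-likelihood, combining surveys (under the independence assumption discussed in the paper) is just a sum of such terms before maximisation.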

## The Parallel Between Clinical Trials and Diagnostic Tests

### Quantitative Decisions in Drug Development (2017-01-01): 41–51

In this chapter, we compare successive trials designed and conducted to assess the efficacy of a new drug to a series of diagnostic tests. The condition to diagnose is whether the new drug has a clinically meaningful efficacious effect. This comparison offers us the opportunity to apply properties pertaining to diagnostic tests discussed in Chap. 3 to clinical trials. Building on the results in Chap. 3, we discuss why replication is such a critically important concept in drug development and show why replication is not as easy as some might have hoped. We end the chapter by highlighting the difference between statistical power and the probability of a positive trial. This last point becomes more important as a new drug moves through the various development stages as will be illustrated in Chap. 9.
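The closing distinction can be made concrete with the diagnostic-test analogy: the unconditional probability of a positive trial mixes power with the prior probability that the drug truly works. The numbers below are illustrative, not taken from the book.

```python
def prob_positive(prior, power=0.9, alpha=0.025):
    # P(positive trial) = P(+ | works) P(works) + P(+ | doesn't) P(doesn't)
    return power * prior + alpha * (1 - prior)

def ppv(prior, power=0.9, alpha=0.025):
    # P(drug works | positive trial): the trial's "positive predictive value"
    return power * prior / prob_positive(prior, power, alpha)

# With 90% power but only a 30% prior that the drug works, a positive
# result is far from guaranteed: power alone overstates the chance of success.
print(prob_positive(0.3))   # 0.2875
print(ppv(0.3))             # about 0.939
```

This is why the gap between power and the probability of a positive trial matters more in early development stages, where the prior is lowest.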

## Statistics and Measurement in the Earth Sciences

### Statistical Methods for the Earth Scientist (1974-01-01): 1–6

The word ‘statistics’ was first used in 1770, but with a rather different meaning from that used today. One chapter of Hooper’s *The Elements of Universal Erudition*, published in 1770, is entitled ‘Statistics’ and deals with ‘the science that teaches us what is the political arrangement of all the modern States of the known world’ (Yule and Kendall, 1953). In the early decades of the nineteenth century the change to ‘statistics’ representing the characters of a State by numerical methods was taking place. Only by the end of the century were ‘statistics’ the summary figures used to describe and compare the properties of a set of observations. At about this time the theoretical basis of the science of statistics was being laid, and today we find the ideas of statistics on a firm basis and applied to the collection, summary and analysis of all types of data.

## Implied distributions in multiple change point problems

### Statistics and Computing (2012-07-01) 22: 981–993

A method for efficiently calculating exact marginal, conditional and joint distributions for change points defined by general finite state Hidden Markov Models is proposed. The distributions are not subject to any approximation or sampling error once parameters of the model have been estimated. It is shown that, in contrast to sampling methods, very little computation is needed. The method provides probabilities associated with change points within an interval, as well as at specific points.
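
For a concrete instance, a two-state Gaussian-emission HMM admits exact change-point probabilities through the scaled forward–backward recursions. The sketch below is a generic illustration of that machinery (with invented parameters), not the paper's general algorithm.

```python
import numpy as np

def changepoint_probs(y, A, means, sd):
    """Exact P(state changes between t-1 and t | y) for a Gaussian-
    emission HMM with transition matrix A, computed by scaled
    forward-backward recursions: no sampling, hence no Monte Carlo error."""
    T, K = len(y), len(means)
    lik = np.exp(-0.5 * ((y[:, None] - np.asarray(means)[None]) / sd) ** 2)
    lik /= sd * np.sqrt(2 * np.pi)
    alpha = np.zeros((T, K))
    c = np.zeros(T)                       # scaling factors
    alpha[0] = lik[0] / K                 # uniform initial distribution
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * lik[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (lik[t + 1] * beta[t + 1])) / c[t + 1]
    out = np.empty(T - 1)
    for t in range(1, T):
        # pairwise posterior P(s_{t-1}=i, s_t=j | y); the off-diagonal
        # mass is exactly the probability of a change point at time t
        pair = alpha[t - 1][:, None] * A * (lik[t] * beta[t])[None] / c[t]
        out[t - 1] = pair.sum() - np.trace(pair)
    return out

A = np.array([[0.95, 0.05], [0.05, 0.95]])
y = np.concatenate([np.zeros(10), np.full(10, 5.0)])
p = changepoint_probs(y, A, means=[0.0, 5.0], sd=1.0)
print(p.argmax())   # 9: the change sits between observations 9 and 10
```

Summing `out` over an interval gives the expected number of change points in that interval, which is the kind of interval probability the paper describes.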

## Essentials of Statistics for Scientists and Technologists

### Essentials of Statistics for Scientists and Technologists (1966-01-01)

## Secondary Sources of Data for Business, Finance and Marketing Students

### Quantitative Analysis and IBM® SPSS® Statistics (2016-01-01): 171–179

The purpose of this chapter is to describe and locate sources of external secondary data that may be of use to students of Finance, Economics, Marketing and general Business. By definition, the discussion cannot be exhaustive.

## An Introduction to Meta-Analysis in R

### Meta-Analysis with R (2015-01-01): 3–17

The world is awash with information. For any question, the briefest of internet searches will turn up a range of frequently contradictory answers. This underlies the increasing awareness of the value of systematic evidence synthesis—both qualitative and quantitative—by researchers, policy makers and the broader public. It is reflected in the continuing development of the Cochrane Collaboration (http://www.cochrane.org/), an international collaboration devoted to undertaking, publishing and promoting systematic evidence synthesis [2].