## SEARCH

#### Country

##### (see all 275)

- United States 10903
- Germany 5683
- United Kingdom 2246
- Canada 1954

#### Institution

##### (see all 14016)

- University of California 611
- Webster University 368
- Indian Statistical Institute 333
- Columbia University 264
- The Institute of Statistical Mathematics 249

#### Author

##### (see all 44228)

- Quirk, Thomas J. 343
- Eckstein, Peter P. 245
- Härdle, Wolfgang Karl 202
- Bourier, Günther 186
- Balakrishnan, N. 143

#### Publication

##### (see all 1160)

- Annals of the Institute of Statistical Mathematics 2989
- Journal of Medical Systems 2297
- Metrika 2046
- Statistical Papers 1716
- Statistics and Computing 1253

#### Subject

##### (see all 317)

- Statistics 45553
- Statistics, general 20455
- Statistics for Business/Economics/Mathematical Finance/Insurance 17398
- Statistical Theory and Methods 12242
- Statistics for Life Sciences, Medicine, Health Sciences 10489

## CURRENTLY DISPLAYING

Showing 1 to 10 of 45553 matching articles.

## Fiducial Theory for Free-Knot Splines

### Contemporary Developments in Statistical Theory (2014-01-01) 68: 155-189, January 01, 2014

We construct the fiducial model for free-knot splines and derive sufficient conditions for the asymptotic consistency of a multivariate fiducial estimator. We show that splines of degree four and higher satisfy these conditions, and we conduct a simulation study to evaluate the quality of the fiducial estimates against the competing Bayesian solution. The fiducial confidence intervals achieve the desired confidence level while tending to be shorter than the corresponding Bayesian credible intervals under the reference prior. AMS 2000 subject classifications: Primary 62F99, 62G08; secondary 62P10.

## Statistical Decision Theory

### Estimation and Inferential Statistics (2015-01-01): 181-235, January 01, 2015

In this chapter we discuss the problems of point estimation, hypothesis testing and interval estimation of a parameter from a different standpoint.

## Bayesian model comparison with un-normalised likelihoods

### Statistics and Computing (2017-03-01) 27: 403-422, March 01, 2017

Models whose likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. Bayesian analysis of these models using standard Monte Carlo methods is not possible, however, owing to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed, but estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of *biased* weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented; it lends some support to the use of biased estimates, but we advocate caution in their use.
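
The key idea behind random-weight importance sampling can be shown in a minimal sketch (the toy target, the uniform proposal, and all names here are illustrative assumptions, not the paper's implementation): replacing each intractable weight with a noisy but unbiased simulation-based estimate leaves the importance-sampling estimate of a normalizing constant unbiased.

```python
import math
import random

def random_weight_is(weight_estimate, proposal_sample, n):
    """Importance-sampling estimate of a normalizing constant
    Z = integral of f(x) dx when f(x) can only be *estimated*:
    each term is a noisy but unbiased estimate of the weight
    f(x) / q(x), so the average remains unbiased for Z."""
    return sum(weight_estimate(proposal_sample()) for _ in range(n)) / n

random.seed(1)
# Toy target f(x) = exp(-x^2 / 2); true Z = sqrt(2*pi) ~ 2.5066.
f = lambda x: math.exp(-x * x / 2)
q = 1.0 / 10.0  # uniform proposal density on [-5, 5]
# Noisy unbiased weight estimate, standing in for simulation-based
# weights used when f cannot be evaluated exactly.
noisy_weight = lambda x: f(x) * random.uniform(0.5, 1.5) / q
Z_hat = random_weight_is(noisy_weight,
                         proposal_sample=lambda: random.uniform(-5, 5),
                         n=20000)
```

Despite the injected noise in each weight, the average still converges to the true normalizing constant; the noise only inflates the estimator's variance.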

## The Method of Random Groups

### Introduction to Variance Estimation (2007-01-01): 21-106, January 01, 2007

The random group method of variance estimation amounts to selecting two or more samples from the population, usually using the same sampling design for each sample; constructing a separate estimate of the population parameter of interest from each sample and an estimate from the combination of all samples; and computing the sample variance among the several estimates. Historically, this was one of the first techniques developed to simplify variance estimation for complex sample surveys. It was introduced in jute acreage surveys in Bengal by Mahalanobis (1939, 1946), who called the various samples *interpenetrating samples*. Deming (1956), the United Nations Subcommission on Statistical Sampling (1949), and others proposed the alternative term *replicated samples*. Hansen, Hurwitz, and Madow (1953) referred to the *ultimate cluster* technique in multistage surveys and to the *random group* method in general survey applications. Beginning in the 1990s, various writers have referred to the *resampling* technique. All of these terms have been used in the literature by various authors, and all refer to the same basic method. We will employ the term *random group* when referring to this general method of variance estimation.
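
The procedure described above can be sketched in a few lines (the function name and the use of simple random sampling are illustrative assumptions; real applications would use the survey's own design for each group):

```python
import random
import statistics

def random_group_estimate(population, n_groups, group_size, estimator):
    """Random group method: draw several independent samples, compute
    the estimate of interest from each, combine them, and use the
    spread among the group estimates to estimate the variance of the
    combined estimator."""
    estimates = [estimator(random.sample(population, group_size))
                 for _ in range(n_groups)]
    combined = statistics.mean(estimates)
    # Sample variance among the group estimates, divided by the number
    # of groups, estimates the variance of the combined estimate.
    var_hat = statistics.variance(estimates) / n_groups
    return combined, var_hat

random.seed(0)
population = list(range(1000))
est, var_hat = random_group_estimate(population, n_groups=10,
                                     group_size=50,
                                     estimator=statistics.mean)
```

The appeal of the method is visible here: no analytic variance formula for the estimator is needed, only repeated application of the same estimation procedure to independent samples.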

## Interactive Information System of Self-Treatment in Haemophilia

### The Computer and Blood Banking (1981-01-01) 13: 241-242, January 01, 1981

Hemophilia is a congenital coagulopathy characterized by a deficiency, of varying severity, of coagulation factor VIII or IX. In the late sixties the development of highly purified factor VIII/IX concentrates led to a major change in the treatment of hemophilic patients. Thus, in 1971, in addition to the existing in-hospital and outpatient treatment, the Hemophilia Center Bonn introduced controlled self-treatment. This new treatment concept enables hemophiliacs to lead a normal social life: they are no longer excluded from attending kindergarten or school, or from pursuing a professional life.

## A model for a comprehensive LIMS

### Advanced LIMS Technology (1995-01-01): 15-36, January 01, 1995

Demands on many laboratory organizations are becoming a driving force to automate analytical procedures. Automation, which is generally focused at the bench, allows an analyst to complete more work per unit time, resulting in higher productivity. Laboratory Information Management Systems (LIMS) have been developed to carry out many associated administrative tasks and procedures required to run a laboratory. However, many organizations often take a narrow view of both a laboratory and the functions that can be automated, preventing them from extending automation to provide real scientific and business benefits.

## Estimating conception statistics using gestational age information from NHS Numbers for Babies data

### Health Statistics Quarterly (2009-05-01) 41: 21-27, May 01, 2009

Conception statistics routinely published for England and Wales include pregnancies that result in one or more live births or stillbirths (a maternity) or in an abortion. All live births are assumed to have a gestation of 38 weeks, as information on gestation is not collected at birth registration. For the first time, gestational age information from the National Health Service (NHS) Numbers for Babies (NN4B) data has been used to re-estimate conception statistics for 2005. This shows that 72 per cent of conceptions leading to a maternity in fact have a gestation period that differs from 38 weeks, and most of these fall at either 37 or 39 weeks. The age-specific conception rates produced by this revised method are not significantly different from those produced by the current method.
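
The revision amounts to replacing a blanket 38-week assumption with each birth's recorded gestational age when counting back from the date of birth. A minimal sketch (the function and example dates are illustrative, not the ONS implementation):

```python
from datetime import date, timedelta

def estimated_conception_date(birth_date, gestation_weeks=38):
    """Count back from the date of birth. The routine method assumes
    38 weeks for every live birth; the NN4B-based revision substitutes
    the recorded gestational age instead."""
    return birth_date - timedelta(weeks=gestation_weeks)

# Current method: every birth assumed to be 38 weeks' gestation.
old = estimated_conception_date(date(2005, 6, 15))
# Revised method: recorded gestational age, e.g. 37 weeks.
new = estimated_conception_date(date(2005, 6, 15), gestation_weeks=37)
# A one-week difference in recorded gestation shifts the estimated
# conception date by seven days.
```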

## Hybrid k-Means: Combining Regression-Wise and Centroid-Based Criteria for QSAR

### Selected Contributions in Data Analysis and Classification (2007-01-01): 225-233, January 01, 2007

This paper further extends the ‘kernel’-based approach to clustering proposed by E. Diday in the early 1970s. According to this approach, a cluster’s centroid can be represented by the parameters of any analytical model, such as a linear regression equation, built over the cluster. We address the problem of producing regression-wise clusters that are also separated in the input variable space by building a hybrid clustering criterion that combines the regression-wise clustering criterion with the conventional centroid-based one.
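
A toy sketch of such a hybrid criterion (the function, the alternating scheme, and the `alpha` weighting are illustrative assumptions, not the authors' algorithm): each cluster is summarized both by a fitted linear regression and by its mean in input space, and points are reassigned by a weighted sum of squared regression residual and squared distance to the cluster mean.

```python
import numpy as np

def hybrid_kmeans(X, y, k, alpha=0.5, n_iter=20, seed=0):
    """Alternate between (1) fitting a linear regression y ~ X and a
    mean vector for each cluster and (2) reassigning each point by a
    weighted sum of its squared regression residual (regression-wise
    term) and its squared distance to the cluster mean (centroid-based
    term). alpha balances the two criteria."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
    for _ in range(n_iter):
        betas, centers = [], []
        for j in range(k):
            mask = labels == j
            if mask.sum() < 2:                   # guard empty clusters
                mask = np.ones(len(X), dtype=bool)
            beta, *_ = np.linalg.lstsq(X1[mask], y[mask], rcond=None)
            betas.append(beta)
            centers.append(X[mask].mean(axis=0))
        cost = np.empty((len(X), k))
        for j in range(k):
            resid = y - X1 @ betas[j]                      # regression-wise
            dist = np.linalg.norm(X - centers[j], axis=1)  # centroid-based
            cost[:, j] = alpha * resid ** 2 + (1 - alpha) * dist ** 2
        labels = cost.argmin(axis=1)
    return labels

# Two regimes with different regression slopes, separated in input space.
rng = np.random.default_rng(1)
X = np.concatenate([rng.uniform(-3, -1, 50), rng.uniform(1, 3, 50)])[:, None]
y = np.where(X[:, 0] < 0, 2 * X[:, 0], -2 * X[:, 0]) + rng.normal(0, 0.1, 100)
labels = hybrid_kmeans(X, y, k=2)
```

Setting `alpha=0` recovers ordinary centroid-based k-means on the inputs, while `alpha=1` gives purely regression-wise clusters, which need not be separated in input space.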

## Economic and Financial Modeling with Mathematica®

### Economic and Financial Modeling with Mathematica® (1993-01-01), January 01, 1993

## Sequential Monte Carlo for counting vertex covers in general graphs

### Statistics and Computing (2016-05-01) 26: 591-607, May 01, 2016

In this paper we describe a sequential importance sampling (SIS) procedure for counting the number of vertex covers in general graphs. The optimal SIS proposal distribution is the uniform distribution over a suitably restricted set, but it is not implementable. We therefore consider two proposal distributions as approximations to the optimal, both based on randomization techniques. The first uses the classic probability model of random graphs, and the resulting SIS algorithm in fact shows polynomial complexity for random graphs. The second introduces a probabilistic relaxation technique that uses dynamic programming. Numerical experiments show that the resulting SIS algorithm enjoys excellent practical performance in comparison with existing methods. In particular, the method is compared with *cachet*, an exact model counter, and the state-of-the-art *SampleSearch*, which is based on belief networks and importance sampling.
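
The basic SIS counting recipe can be illustrated with a deliberately simple proposal (an illustrative stand-in for the paper's randomised proposals, not its algorithm): decide vertices one at a time, force a vertex into the cover whenever an already-decided neighbour is out, and weight each run by the inverse of its proposal probability.

```python
import random

def sis_vertex_cover_count(edges, n_vertices, n_runs=20000, seed=0):
    """Toy sequential importance sampling (SIS) counter for vertex
    covers. Vertices are decided in order; a vertex is forced 'in'
    when an already-decided neighbour is 'out' (its edge must still
    be covered), and every free choice is uniform. A run's weight is
    the inverse of its proposal probability, so the average weight
    is an unbiased estimate of the number of covers."""
    random.seed(seed)
    adj = {v: [] for v in range(n_vertices)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    total = 0.0
    for _ in range(n_runs):
        state, weight = {}, 1.0
        for v in range(n_vertices):
            if any(state.get(u) is False for u in adj[v]):
                state[v] = True          # forced: only one valid choice
            else:
                state[v] = random.random() < 0.5
                weight *= 2.0            # inverse of the choice probability
        total += weight
    return total / n_runs

# The triangle graph has exactly 4 vertex covers:
# {0,1}, {0,2}, {1,2} and {0,1,2}.
estimate = sis_vertex_cover_count([(0, 1), (1, 2), (0, 2)], 3)
```

Every run produces a valid cover, every cover is reachable, and each run's weight equals the reciprocal of its probability, which is what makes the average an unbiased count; the art, as the paper discusses, lies in choosing proposals whose weights have low variance.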