
## Bayesian model comparison with un-normalised likelihoods

### Statistics and Computing (2017-03-01) 27: 403-422 , March 01, 2017

Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of *biased* weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods is presented; it offers some support for the use of biased estimates, but we advocate caution in the use of such estimates.
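The core idea — importance sampling for the evidence where the likelihood is replaced by a noisy, unbiased estimate — can be illustrated on a toy Gaussian model. This is a minimal sketch, not the paper's method: the exact likelihood is known here, and we multiply it by a mean-one random weight to mimic a simulation-based unbiased likelihood estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: Gaussian observations with known unit variance, true mean 0.5.
y = rng.normal(0.5, 1.0, size=50)

def log_lik(theta):
    return -0.5 * np.sum((y - theta) ** 2) - 0.5 * len(y) * np.log(2 * np.pi)

def noisy_lik(theta, noise_sd=0.05):
    # Stand-in for a simulation-based unbiased likelihood estimate:
    # the exact likelihood times a positive random weight with mean 1.
    w = np.exp(rng.normal(-0.5 * noise_sd**2, noise_sd))
    return np.exp(log_lik(theta)) * w

def evidence(prior_sd, n_samples=5000):
    # Importance sampling with the prior as proposal:
    # Z = E_prior[ L(theta) ] is approximated by a sample mean.
    thetas = rng.normal(0.0, prior_sd, size=n_samples)
    return np.mean([noisy_lik(t) for t in thetas])

# Bayes factor comparing a tight prior (model 1) against a diffuse prior.
bf = evidence(prior_sd=1.0) / evidence(prior_sd=10.0)
print(f"estimated Bayes factor (tight vs diffuse prior): {bf:.2f}")
```

Because the weight has mean one, the evidence estimator stays unbiased; the paper's contribution concerns exactly how such noisy weights interact with sequential Monte Carlo and when deliberately biased weights can help.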

## A semiparametric regression cure model for doubly censored data

### Lifetime Data Analysis (2017-09-01): 1-17 , September 01, 2017

This paper discusses regression analysis of doubly censored failure time data when there may exist a cured subgroup. By doubly censored data, we mean that the failure time of interest denotes the elapsed time between two related events and the observations on both event times can suffer censoring (Sun in The statistical analysis of interval-censored failure time data. Springer, New York, 2006). One typical example of such data is given by an acquired immune deficiency syndrome cohort study. Although many methods have been developed for their analysis (De Gruttola and Lagakos in Biometrics 45:1–12, 1989; Sun et al. in Biometrics 55:909–914, 1999; 60:637–643, 2004; Pan in Biometrics 57:1245–1250, 2001), there does not appear to be an established method for the situation with a cured subgroup. This paper discusses this latter problem and presents a sieve approximation maximum likelihood approach. In addition, the asymptotic properties of the resulting estimators are established, and an extensive simulation study indicates that the method seems to work well in practical situations. An application is also provided.

## The Parallel Between Clinical Trials and Diagnostic Tests

### Quantitative Decisions in Drug Development (2017-01-01): 41-51 , January 01, 2017

In this chapter, we compare successive trials designed and conducted to assess the efficacy of a new drug to a series of diagnostic tests. The condition to diagnose is whether the new drug has a clinically meaningful efficacious effect. This comparison offers us the opportunity to apply properties pertaining to diagnostic tests discussed in Chap. 3 to clinical trials. Building on the results in Chap. 3, we discuss why replication is such a critically important concept in drug development and show why replication is not as easy as some might have hoped. We end the chapter by highlighting the difference between statistical power and the probability of a positive trial. This last point becomes more important as a new drug moves through the various development stages as will be illustrated in Chap. 9.

## Statistical Process Control Charts as a Tool for Analyzing Big Data

### Big and Complex Data Analysis (2017-01-01): 123-138 , January 01, 2017

Big data often take the form of data streams, with observations of certain processes collected sequentially over time. One common purpose of collecting and analyzing big data is to monitor the longitudinal performance/status of the related processes. To this end, statistical process control (SPC) charts can be a useful tool, although conventional SPC charts need to be modified properly in some cases. In this paper, we introduce some basic SPC charts and some of their modifications, and describe how these charts can be used for monitoring different types of processes. Among many potential applications, dynamic disease screening and profile/image monitoring are discussed in some detail.
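A basic sequential chart of the kind the chapter builds on can be sketched in a few lines. The example below is a standard one-sided CUSUM chart on a simulated stream (not any specific chart from the chapter): the in-control mean and standard deviation are estimated from an initial segment, and the chart signals when the cumulative sum exceeds a decision interval.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stream: in-control mean 0 for 100 points, then a 1-sigma shift.
stream = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])

# Phase I: estimate in-control parameters from the first 100 observations.
mu0 = stream[:100].mean()
sigma0 = stream[:100].std(ddof=1)

# One-sided CUSUM chart for detecting an upward mean shift.
k = 0.5 * sigma0   # reference value: half the shift size to detect
h = 5.0 * sigma0   # decision interval
cusum, alarm_at = 0.0, None
for t, x in enumerate(stream):
    cusum = max(0.0, cusum + (x - mu0) - k)
    if cusum > h and alarm_at is None:
        alarm_at = t

print(f"first alarm at observation {alarm_at}")
```

With these conventional choices (k equal to half the target shift, h around five standard deviations), a 1-sigma shift is typically flagged within roughly ten observations of its onset, while in-control false alarms are rare.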

## Estimating non-simplified vine copulas using penalized splines

### Statistics and Computing (2017-02-27): 1-23 , February 27, 2017

Vine copulas (or pair-copula constructions) have become an important tool for high-dimensional dependence modeling. Typically, so-called simplified vine copula models are estimated where bivariate conditional copulas are approximated by bivariate unconditional copulas. We present the first nonparametric estimator of a non-simplified vine copula that allows for varying conditional copulas using penalized hierarchical B-splines. Throughout the vine copula, we test for the simplifying assumption in each edge, establishing a data-driven non-simplified vine copula estimator. To overcome the curse of dimensionality, we approximate conditional copulas with more than one conditioning argument by a conditional copula with the first principal component as conditioning argument. An extensive simulation study is conducted, showing a substantial improvement in the out-of-sample Kullback–Leibler divergence if the null hypothesis of a simplified vine copula can be rejected. We apply our method to the famous uranium data and present a classification of an eye state data set, demonstrating the potential benefit that can be achieved when conditional copulas are modeled.

## A generalized partially linear framework for variance functions

### Annals of the Institute of Statistical Mathematics (2017-10-04): 1-29 , October 04, 2017

When modeling heteroscedasticity in a broad class of partially linear models, we allow the variance function to be a partially linear model as well, with parameters that may differ from those in the mean function. We develop a two-step estimation procedure: in the first step, initial estimates of the parameters in both the mean and variance functions are obtained, and in the second step the estimates are updated using weights calculated from the initial estimates. The resulting weighted estimators of the linear coefficients in both the mean and variance functions are shown to be asymptotically normal, more efficient than the initial unweighted estimators, and most efficient in the sense of semiparametric efficiency in some special cases. Simulation experiments are conducted to examine the numerical performance of the proposed procedure, which is also applied to data from an air pollution study in Mexico City.
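The two-step logic — fit unweighted, model the variance from residuals, then refit with estimated weights — can be sketched for the purely linear special case. This is an illustration of the generic scheme, not the authors' partially linear estimator; the variance function here is log-linear for simplicity.

```python
import numpy as np

rng = np.random.default_rng(2)

# Heteroscedastic linear model: y = 1 + 2x + e, Var(e|x) = exp(0.5 + x).
n = 2000
x = rng.uniform(0, 1, n)
y = 1 + 2 * x + rng.normal(0, np.exp(0.5 * (0.5 + 1.0 * x)), n)

X = np.column_stack([np.ones(n), x])

# Step 1: ordinary least squares gives initial (unweighted) estimates.
beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta0

# Fit the variance function by regressing log squared residuals on x.
gamma = np.linalg.lstsq(X, np.log(resid**2 + 1e-12), rcond=None)[0]

# Step 2: weighted least squares with weights 1 / estimated variance.
w = 1.0 / np.exp(X @ gamma)
Xw = X * w[:, None]
beta1 = np.linalg.solve(Xw.T @ X, Xw.T @ y)   # solves (X'WX) b = X'Wy

print("OLS:", beta0, " two-step WLS:", beta1)
```

Note that the known negative bias of the log-squared-residual regression only shifts the intercept of the variance model, which rescales all weights by a constant and so leaves the second-step coefficient estimates unchanged.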

## Prediction model-based kernel density estimation when group membership is subject to missing

### AStA Advances in Statistical Analysis (2017-07-01) 101: 267-288 , July 01, 2017

The density function is a fundamental concept in data analysis. When a population consists of heterogeneous subjects, it is often of great interest to estimate the density functions of the subpopulations. Nonparametric methods such as kernel smoothing estimates may be applied to each subpopulation to estimate the density functions if there are no missing values. In situations where the membership for a subpopulation is missing, kernel smoothing estimates using only subjects with membership available are valid only under missing completely at random (MCAR). In this paper, we propose new kernel smoothing methods for density function estimates by applying prediction models of the membership under the missing at random (MAR) assumption. The asymptotic properties of the new estimates are developed, and simulation studies and a real study in mental health are used to illustrate the performance of the new estimates.
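The idea of replacing complete-case analysis with prediction-model weights can be sketched as follows. This is a simplified illustration, not the authors' estimator: the "prediction model" here is the true posterior membership probability, standing in for a fitted logistic or similar model, and `scipy`'s weighted `gaussian_kde` plays the role of the weighted kernel smoother.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(3)

# Mixture of two subpopulations with different densities.
n = 2000
group = rng.binomial(1, 0.5, n)
x = np.where(group == 1, rng.normal(2, 1, n), rng.normal(-2, 1, n))

# Membership is missing at random (MAR): the chance that membership is
# recorded depends on the observed x, so complete cases are a biased sample.
observed = rng.random(n) < 1 / (1 + np.exp(-x))

# Prediction model for membership given x (true posterior probability,
# standing in for a fitted prediction model).
def p_group1(v):
    f1, f0 = norm.pdf(v, 2, 1), norm.pdf(v, -2, 1)
    return f1 / (f1 + f0)

# Complete-case KDE (valid only under MCAR) vs. prediction-weighted KDE
# that uses all x values, weighted by predicted membership probability.
cc_kde = gaussian_kde(x[observed & (group == 1)])
w_kde = gaussian_kde(x, weights=p_group1(x))

print("weighted KDE at x=2:", float(w_kde(np.array([2.0]))[0]))
```

Weighting every observation by its predicted probability of belonging to group 1 targets the group-1 density directly, because the weighted mixture density is proportional to f(x) P(group 1 | x) = P(group 1) f1(x).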

## Nested Kriging predictions for datasets with a large number of observations

### Statistics and Computing (2017-07-26): 1-19 , July 26, 2017

This work falls within the context of predicting the value of a real function at some input locations given a limited number of observations of this function. The Kriging interpolation technique (or Gaussian process regression) is often considered to tackle such a problem, but the method suffers from its computational burden when the number of observation points is large. We introduce in this article nested Kriging predictors which are constructed by aggregating sub-models based on subsets of observation points. This approach is proven to have better theoretical properties than other aggregation methods that can be found in the literature. Contrary to some other methods, the proposed aggregation method can be shown to be consistent. Finally, the practical interest of the proposed method is illustrated on simulated datasets and on an industrial test case with $$10^4$$ observations in a 6-dimensional space.
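The divide-and-aggregate structure can be illustrated with a deliberately simple scheme: fit an independent Gaussian process on each subset and combine predictions by inverse predictive variance. This is the generic product-of-experts-style aggregation that the paper improves upon, not the authors' nested Kriging formula; kernel and noise level below are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

def k(a, b, ell=0.3):
    # Squared-exponential covariance with unit prior variance.
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# Function to predict, observed with small noise.
f = lambda v: np.sin(6 * v)
X = rng.uniform(0, 1, 400)
y = f(X) + rng.normal(0, 0.05, X.size)

def gp_predict(Xs, ys, xstar, noise=0.05**2):
    # Standard GP regression equations on one subset of the data.
    K = k(Xs, Xs) + noise * np.eye(Xs.size)
    ks = k(np.atleast_1d(xstar), Xs)
    mean = ks @ np.linalg.solve(K, ys)
    var = 1.0 - np.einsum('ij,ji->i', ks, np.linalg.solve(K, ks.T)) + noise
    return mean, var

# Split the data into 4 sub-models; aggregate by inverse predictive variance.
xstar = np.array([0.37])
subsets = np.array_split(rng.permutation(X.size), 4)
means, vars_ = zip(*(gp_predict(X[idx], y[idx], xstar) for idx in subsets))
w = 1.0 / np.array(vars_)
agg = (w * np.array(means)).sum(axis=0) / w.sum(axis=0)
print(f"aggregated prediction at x=0.37: {agg[0]:.3f} (true {f(0.37):.3f})")
```

Each sub-model costs O(m^3) for m = n/4 points instead of O(n^3) for the full model; the paper's contribution is an aggregation whose consistency can be established, unlike the naive weighting shown here.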

## Hypothesentests

### Statistik von Null auf Hundert (2017-01-01): 131-141 , January 01, 2017

### Summary

Hypotheses are assumptions about properties of populations. To test a hypothesis, you would have to examine the population with descriptive statistics, which is often impossible simply because of the population's size. The way out: you examine the properties of a sample and draw inferences about the population, thereby confirming or rejecting the hypothesis.

The counter-hypothesis, or alternative hypothesis, is the actual hypothesis of interest. It is an assertion, a claim, a challenge; it states, for example, a difference or a deviation from the previously assumed mean of a population. You claim that something is larger, or that something is smaller. Testing a hypothesis follows this scheme:

You formulate the hypothesis unambiguously as a pair of alternative and null hypotheses, then fix the probability (the significance level). You construct confidence intervals and check, based on their overlap, whether the null hypothesis can be rejected or must be retained; this is done in an intermediate step by computing a test statistic. If you can reject the null hypothesis, you accept your alternative hypothesis at the chosen significance level.
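The testing scheme described in this summary can be sketched with a one-sample t-test. The numbers below (hypothesized mean 100, sample of size 100, significance level 5%) are illustrative choices, not examples from the book.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Null hypothesis H0: the population mean is 100.
# Alternative (counter-)hypothesis H1: the mean is greater than 100.
sample = rng.normal(105, 10, 100)   # simulated sample; true mean 105

alpha = 0.05                        # chosen significance level
res = stats.ttest_1samp(sample, popmean=100)
t_stat, p_two = res.statistic, res.pvalue

# Convert the two-sided p-value to the one-sided alternative "greater".
p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2

if p_one < alpha:
    print(f"reject H0 (t={t_stat:.2f}, p={p_one:.4f}): "
          "accept the alternative hypothesis")
else:
    print(f"retain H0 (t={t_stat:.2f}, p={p_one:.4f})")
```

The test statistic is the intermediate step mentioned above: it measures how far the sample mean lies from the hypothesized population mean, in standard-error units, and the decision follows from comparing its p-value with the chosen significance level.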

## The Bennett-Orlicz Norm

### Sankhya A (2017-08-01) 79: 355-383 , August 01, 2017

van de Geer and Lederer (*Probab. Theory Related Fields*, *157*(1-2), 225–250, 2013) introduced a new Orlicz norm, the Bernstein-Orlicz norm, which is connected to Bernstein type inequalities. Here we introduce another Orlicz norm, the Bennett-Orlicz norm, which is connected to Bennett type inequalities. The new Bennett-Orlicz norm yields inequalities for expectations of maxima which are potentially somewhat tighter than those resulting from the Bernstein-Orlicz norm when they are both applicable. We discuss cross connections between these norms, exponential inequalities of the Bernstein, Bennett, and Prokhorov types, and make comparisons with results of Talagrand (*Ann. Probab.*, *17*(4), 1546–1570, 1989, 1991), and Boucheron *et al*. (2013).