Showing 1 to 10 of 1268 matching Articles

## Bayesian model comparison with un-normalised likelihoods

### Statistics and Computing (2017-03-01) 27: 403-422, March 01, 2017

Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are used widely in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random-weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. In some cases we observe an advantage in the use of *biased* weight estimates. An initial investigation into the theoretical and empirical properties of this class of methods offers some support for the use of biased estimates, but we advocate caution in their use.
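
The core idea, that an importance-sampling weight may be replaced by an unbiased simulation-based estimate of it without biasing the resulting evidence estimate, can be sketched on a toy target. Everything below (the Gaussian target, the lognormal noise model) is an illustrative assumption, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy unnormalised target gamma(x) = exp(-x^2 / 2); its true
# normalising constant (the "evidence" in this toy) is sqrt(2*pi).
true_Z = np.sqrt(2 * np.pi)

def gamma_exact(x):
    return np.exp(-x**2 / 2)

n = 100_000
# Proposal q = N(0, 2^2), evaluated pointwise.
x = rng.normal(0.0, 2.0, n)
q = np.exp(-x**2 / 8) / np.sqrt(8 * np.pi)

# Standard importance sampling: exact weights.
Z_exact = np.mean(gamma_exact(x) / q)

# Random-weight variant: gamma(x) is only available through an
# unbiased noisy estimate (lognormal noise with mean 1, standing
# in for a simulation-based likelihood estimate).
noise = rng.lognormal(mean=-0.125, sigma=0.5, size=n)  # E[noise] = 1
Z_random_weight = np.mean(gamma_exact(x) * noise / q)

print(Z_exact, Z_random_weight, true_Z)
```

Both estimators are unbiased for the normalising constant; the noisy weights only inflate the variance. The paper's point goes further: in some settings deliberately *biased* weight estimates may trade a small bias for a larger variance reduction.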

## The Parallel Between Clinical Trials and Diagnostic Tests

### Quantitative Decisions in Drug Development (2017-01-01): 41-51, January 01, 2017

In this chapter, we compare successive trials designed and conducted to assess the efficacy of a new drug to a series of diagnostic tests. The condition to diagnose is whether the new drug has a clinically meaningful efficacious effect. This comparison offers us the opportunity to apply properties pertaining to diagnostic tests discussed in Chap. 3 to clinical trials. Building on the results in Chap. 3, we discuss why replication is such a critically important concept in drug development and show why replication is not as easy as some might have hoped. We end the chapter by highlighting the difference between statistical power and the probability of a positive trial. This last point becomes more important as a new drug moves through the various development stages, as will be illustrated in Chap. 9.
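
The distinction between power and the probability of a positive trial can be made concrete with a small calculation in the diagnostic-test framing: power is conditional on the drug being effective, while the unconditional probability of a positive result also depends on how likely a true effect is in the first place. The prior probability of 0.5 below is an illustrative assumption:

```python
# Diagnostic-test view of a trial: "disease" = drug truly effective.
power = 0.90          # P(positive trial | effective)     = sensitivity
alpha = 0.025         # P(positive trial | not effective) = 1 - specificity
p_effective = 0.50    # assumed prior probability of a true effect

# Unconditional probability of a positive trial (law of total probability)
p_positive = power * p_effective + alpha * (1 - p_effective)

# Posterior probability the drug works, given a positive trial (Bayes)
p_effective_given_positive = power * p_effective / p_positive

print(p_positive)                   # 0.4625, well below the power of 0.90
print(p_effective_given_positive)   # ~0.973
```

Even with 90% power, the chance of a positive trial here is under 50%, because it is weighted by the prior probability that the drug works at all.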

## Statistical Process Control Charts as a Tool for Analyzing Big Data

### Big and Complex Data Analysis (2017-01-01): 123-138, January 01, 2017

Big data often take the form of data streams, with observations of certain processes collected sequentially over time. Among many different purposes, one common reason to collect and analyze big data is to monitor the longitudinal performance or status of the related processes. To this end, statistical process control (SPC) charts can be a useful tool, although conventional SPC charts need to be modified appropriately in some cases. In this paper, we introduce some basic SPC charts and some of their modifications, and describe how these charts can be used for monitoring different types of processes. Among many potential applications, dynamic disease screening and profile/image monitoring are discussed in some detail.
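
As a minimal sketch of such a chart on a data stream, a one-sided CUSUM accumulates evidence of an upward mean shift and signals when a decision threshold is crossed. The simulated data, reference value `k`, and decision interval `h` below are illustrative choices, not ones taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stream: in control N(0,1) for 200 observations, then the mean
# shifts upward to 1.0, the kind of change a control chart should flag.
stream = np.concatenate([rng.normal(0.0, 1.0, 200),
                         rng.normal(1.0, 1.0, 200)])

k, h = 0.5, 10.0   # reference value and decision interval (illustrative)
s, alarm = 0.0, None
for t, xt in enumerate(stream):
    s = max(0.0, s + xt - k)   # one-sided CUSUM statistic
    if s > h:
        alarm = t              # first index at which the chart signals
        break

print(alarm)
```

With these settings the chart typically signals within a few dozen observations after the shift at t = 200, while false alarms during the in-control phase are rare; `k` and `h` trade off detection delay against the in-control average run length.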

## Estimating non-simplified vine copulas using penalized splines

### Statistics and Computing (2017-02-27): 1-23, February 27, 2017

Vine copulas (or pair-copula constructions) have become an important tool for high-dimensional dependence modeling. Typically, so-called simplified vine copula models are estimated where bivariate conditional copulas are approximated by bivariate unconditional copulas. We present the first nonparametric estimator of a non-simplified vine copula that allows for varying conditional copulas using penalized hierarchical B-splines. Throughout the vine copula, we test for the simplifying assumption in each edge, establishing a data-driven non-simplified vine copula estimator. To overcome the curse of dimensionality, we approximate conditional copulas with more than one conditioning argument by a conditional copula with the first principal component as conditioning argument. An extensive simulation study is conducted, showing a substantial improvement in the out-of-sample Kullback–Leibler divergence if the null hypothesis of a simplified vine copula can be rejected. We apply our method to the famous uranium data and present a classification of an eye state data set, demonstrating the potential benefit that can be achieved when conditional copulas are modeled.
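
What the simplifying assumption asserts can be illustrated with the one family where it is known to hold exactly: for a trivariate Gaussian copula, the conditional copula of the first and third variables given the second is Gaussian with the *partial* correlation as its parameter, regardless of the conditioning value. A quick numerical check, with illustrative correlation values of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(4)

R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.7],
              [0.5, 0.7, 1.0]])
X = rng.multivariate_normal(np.zeros(3), R, size=400_000)

# Partial correlation of X1 and X3 given X2: the (constant)
# parameter of the conditional copula under the simplifying assumption.
r12, r13, r23 = R[0, 1], R[0, 2], R[1, 2]
partial = (r13 - r12 * r23) / np.sqrt((1 - r12**2) * (1 - r23**2))

# Empirical correlation of (X1, X3) in narrow slices of X2: it should
# be roughly the same in every slice, not vary with the slice.
emps = []
for centre in (-1.0, 0.0, 1.0):
    m = np.abs(X[:, 1] - centre) < 0.1
    emps.append(np.corrcoef(X[m, 0], X[m, 2])[0, 1])
    print(centre, round(emps[-1], 3), round(partial, 3))
```

For non-Gaussian data the analogous slice-wise estimates may drift with the conditioning value, which is exactly the kind of departure a test of the simplifying assumption looks for.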

## Hypothesis Tests

### Statistik von Null auf Hundert (2017-01-01): 131-141, January 01, 2017

### Summary

Hypotheses are assumptions about properties of populations. To test a hypothesis, you would have to examine the population itself with descriptive statistics. That is often impossible, if only because of the size of the population. The way out: you examine the properties of a sample and infer from it to the population. In this way you confirm or reject a hypothesis.

The counter-hypothesis, or alternative hypothesis, is the hypothesis of actual interest. It is a conjecture, a claim, a challenge: it states, for example, a difference or a deviation from the previously assumed mean of a population. You claim that something is larger, or that something is smaller. Testing a hypothesis proceeds according to the following scheme:

You formulate the hypothesis unambiguously as a pair consisting of the alternative hypothesis and the null hypothesis, and then you fix the probability, i.e. the significance level. You construct the confidence intervals and check, by their overlap, whether the null hypothesis can be rejected or must be retained; this is done in an intermediate step via the calculation of a test statistic. If you can reject the null hypothesis, you accept your alternative hypothesis at the chosen significance level.
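
Translated into code, this scheme (formulate H0 and H1, fix the significance level, compute the test statistic, compare against the critical value) looks like the following for a two-sided one-sample t-test; the data values are made up for illustration:

```python
import numpy as np
from scipy import stats

# H0: the population mean is 100;  H1: it differs from 100.
mu0 = 100.0
x = np.array([102.0, 105.0, 98.0, 107.0, 103.0, 110.0, 99.0, 104.0])

alpha = 0.05                      # chosen significance level
n = len(x)
t_stat = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # two-sided critical value

reject_h0 = abs(t_stat) > t_crit
print(t_stat, t_crit, reject_h0)  # t ≈ 2.50 > 2.36, so H0 is rejected
```

Here the test statistic exceeds the critical value, so the null hypothesis is rejected and the alternative is accepted at the 5% significance level.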

## Sequential Monte Carlo Methods in Random Intercept Models for Longitudinal Data

### Bayesian Statistics in Action (2017-01-01) 194: 3-9, January 01, 2017

Longitudinal modelling is common in biostatistical research. In some studies it becomes necessary to update posterior distributions as new data arrive in order to perform inference online. In such situations it is sensible to use the posterior distribution as the prior distribution in a new application of Bayes' theorem. However, the analytic form of the posterior distribution is not always available, and often only an approximate sample from it exists, which makes the process "not-so-easy". Equivalent inferences could be obtained through a Bayesian analysis based on the combined old and new data; nevertheless, this is not always a realistic alternative, because it may be computationally very costly in terms of both time and resources. This work exploits the dynamic character of sequential Monte Carlo methods for "static" targets in the framework of longitudinal modelling, and applies the methodology to real data through a random intercept model.
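
The update step described here, carrying an approximate posterior sample forward as the prior when a new data batch arrives, can be sketched with importance reweighting and resampling of particles. The conjugate toy model below (a normal mean with known variance) is chosen only so the exact answer is available for comparison; it is not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(5)

sigma = 1.0                       # known observation sd
prior_mean, prior_sd = 0.0, 10.0

# Two batches of data arriving sequentially.
batch1 = rng.normal(2.0, sigma, 20)
batch2 = rng.normal(2.0, sigma, 20)

def loglik(theta, y):
    # Gaussian log-likelihood of batch y for each particle in theta
    return -0.5 * ((y[None, :] - theta[:, None]) ** 2 / sigma**2).sum(axis=1)

def reweight_resample(theta, y, rng):
    # Weight each particle by the likelihood of the new batch, resample.
    logw = loglik(theta, y)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(len(theta), size=len(theta), p=w)
    return theta[idx]

n_particles = 100_000
theta = rng.normal(prior_mean, prior_sd, n_particles)   # draw from prior
theta = reweight_resample(theta, batch1, rng)           # posterior | batch1
theta = reweight_resample(theta, batch2, rng)           # posterior | both

# Exact conjugate posterior mean, for comparison.
y = np.concatenate([batch1, batch2])
post_prec = 1 / prior_sd**2 + len(y) / sigma**2
exact_mean = (y.sum() / sigma**2) / post_prec

print(theta.mean(), exact_mean)
```

In practice one monitors the effective sample size and rejuvenates degenerate particle sets with a move step; this sketch omits both, which is exactly why the full sequential Monte Carlo machinery is needed in real applications.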

## Multidimensional Scaling

### Statistische Datenanalyse mit SPSS (2017-01-01): 617-629, January 01, 2017

## Fundamentals

### Statistik für Ökonomen (2017-01-01): 13-20, January 01, 2017

### Summary

The basic concepts of statistics serve to fix the vocabulary and the symbols with which observations are described.

## Properties of Second-Order Exponential Models as Multidimensional Response Models

### Quantitative Psychology (2017-01-01) 196: 9-19, January 01, 2017

Second-order exponential (SOE) models have been proposed as item response models (e.g., Anderson et al., J. Educ. Behav. Stat. 35:422–452, 2010; Anderson, J. Classif. 30:276–303, 2013, doi:10.1007/s00357-013-9131-x; Hessen, Psychometrika 77:693–709, 2012, doi:10.1007/s11336-012-9277-1; Holland, Psychometrika 55:5–18, 1990); however, the philosophical and theoretical underpinnings of the SOE models differ from those of standard item response theory models. Although presented as reexpressions of item response theory models (Holland, Psychometrika 55:5–18, 1990), which are reflective models, the SOE models are formative measurement models. We extend Anderson and Yu (Psychometrika 72:5–23, 2007), who studied unidimensional models for dichotomous items, to multidimensional models for dichotomous and polytomous items. The properties of the models for multiple latent variables are studied theoretically and empirically. Even though there are mathematical differences between the second-order exponential models and multidimensional item response theory (MIRT) models, the SOE models behave very much like standard MIRT models and in some cases better than MIRT models.

## Improving the Efficiency of the Monte-Carlo Methods Using Ranked Simulated Approach

### Monte-Carlo Simulation-Based Statistical Modeling (2017-01-01): 17-40, January 01, 2017

This chapter explores the use of the ranked simulated sampling approach (RSIS), introduced by Samawi (1999), to improve well-known Monte-Carlo methods, and its extension to steady-state ranked simulated sampling (SRSIS) by Al-Saleh and Samawi (2000). Both simulation sampling approaches were further extended to multivariate ranked simulated sampling (MVRSIS) and multivariate steady-state ranked simulated sampling (MVSRSIS) by Samawi and Al-Saleh (2007) and Samawi and Vogel (2013). These approaches have been demonstrated to provide unbiased estimators and to improve the performance of some Monte-Carlo methods for approximating single and multiple integrals. Additionally, the MVSRSIS approach has been shown to improve the performance and efficiency of Gibbs sampling (Samawi et al. 2012). Samawi and colleagues showed that their approach resulted in large savings in the cost and time needed to attain a specified level of accuracy.
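
The basic RSIS idea can be sketched for a one-dimensional integral: instead of n independent uniforms, each cycle draws k independent sets of k uniforms and keeps the i-th order statistic of the i-th set, which spreads the evaluation points more evenly over [0,1] while leaving the estimator unbiased. The integrand g, the value of k, and the cycle counts below are toy choices of ours, not the chapter's examples:

```python
import numpy as np

rng = np.random.default_rng(6)

def g(u):
    return u**2            # true value of the integral over [0,1] is 1/3

def srs_estimate(n, rng):
    """Plain Monte Carlo with n independent uniforms."""
    return g(rng.uniform(size=n)).mean()

def rsis_estimate(cycles, k, rng):
    """Ranked simulated sampling: per cycle, k sets of k uniforms;
    keep the i-th order statistic of the i-th set (i = 1..k)."""
    vals = np.empty(cycles)
    for c in range(cycles):
        u = np.sort(rng.uniform(size=(k, k)), axis=1)
        vals[c] = g(u[np.arange(k), np.arange(k)]).mean()
    return vals.mean()

# Compare the two estimators over repeated runs, with 25 evaluations
# of g per estimate in both cases.
reps, n, k, cycles = 500, 25, 5, 5
srs = np.array([srs_estimate(n, rng) for _ in range(reps)])
rsis = np.array([rsis_estimate(cycles, k, rng) for _ in range(reps)])

print(srs.mean(), rsis.mean())   # both close to 1/3 (both unbiased)
print(srs.var(), rsis.var())     # RSIS variance is noticeably smaller
```

The variance reduction arises because the averaged order-statistic densities reconstruct the uniform density (hence unbiasedness) while the points within a cycle are negatively dependent and stratified across [0,1].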