Showing 1 to 10 of 2521 matching Articles
## Statistical Decision Theory

### Estimation and Inferential Statistics (2015-01-01): 181-235 , January 01, 2015

In this chapter we discuss the problems of point estimation, hypothesis testing and interval estimation of a parameter from a different standpoint.

## An Algorithm for Construction of Constrained D-Optimum Designs

### Stochastic Models, Statistics and Their Applications (2015-01-01) 122: 461-468 , January 01, 2015

A computational algorithm is proposed for determinant maximization over the set of all convex combinations of a finite number of nonnegative definite matrices subject to additional box constraints on the weights of those combinations. The underlying idea is to apply a simplicial decomposition algorithm in which the restricted master problem reduces to an uncomplicated multiplicative weight optimization algorithm.
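The "uncomplicated multiplicative weight optimization" the abstract refers to can be illustrated with the classical multiplicative update for D-optimal weights. The sketch below is illustrative only: it omits the paper's box constraints, and the function name and convergence settings are assumptions, not the authors' implementation.

```python
import numpy as np

def d_optimal_weights(mats, w0=None, n_iter=2000, tol=1e-10):
    """Classical multiplicative algorithm for D-optimal weights.

    Maximizes log det(sum_i w_i A_i) over weights w on the simplex,
    where each A_i is a nonnegative-definite m x m matrix.  The box
    constraints treated in the paper are omitted in this sketch.
    """
    k = len(mats)
    m = mats[0].shape[0]
    w = np.full(k, 1.0 / k) if w0 is None else np.asarray(w0, dtype=float)
    for _ in range(n_iter):
        M = sum(wi * Ai for wi, Ai in zip(w, mats))
        Minv = np.linalg.inv(M)
        # Update: w_i <- w_i * tr(M^{-1} A_i) / m.  The weights stay on
        # the simplex because sum_i w_i tr(M^{-1} A_i) = tr(I) = m.
        g = np.array([np.trace(Minv @ Ai) for Ai in mats])
        w_new = w * g / m
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w
```

On the textbook example of quadratic regression over the candidate points {-1, 0, 1}, the update recovers the known equal-weight D-optimal design.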

## A Thick Modeling Approach to Multivariate Volatility Prediction

### Advances in Latent Variables (2015-01-01) , January 01, 2015

This paper proposes a modified approach to the combination of forecasts from multivariate volatility models, in which the combination is performed over a restricted subset including only the best-performing models. This subset is identified over a rolling window by means of the Model Confidence Set (MCS) approach. The analysis is performed using different combination schemes, both linear and nonlinear, and considering different loss functions for the evaluation of forecasting performance. An application to a vast-dimensional portfolio of 50 NYSE stocks shows that (a) in non-extreme volatility periods the use of forecast combinations improves on the predictive accuracy of the single candidate models, and (b) performing the combination over the subset of most accurate models does not significantly reduce the accuracy of the combined predictor.
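The "thick modeling" idea, averaging only over the best-performing subset of models, can be sketched with a plain loss ranking. Note the simplification: the paper selects the subset via the Model Confidence Set, whereas the toy function below (a hypothetical helper, not the authors' code) simply ranks by past mean squared error.

```python
import numpy as np

def subset_combined_forecast(past_fc, realized, next_fc, keep=2):
    """Average the next-period forecasts of the `keep` models with the
    lowest mean squared error over a past evaluation window.

    past_fc:  (n_models, t) array of past one-step forecasts
    realized: (t,) array of realized values over the same window
    next_fc:  (n_models,) array of each model's next-period forecast
    """
    loss = ((np.asarray(past_fc) - np.asarray(realized)) ** 2).mean(axis=1)
    best = np.argsort(loss)[:keep]  # indices of the best-performing models
    return float(np.asarray(next_fc)[best].mean())
```

A badly performing model is thereby excluded from the combination instead of dragging the average forecast away from the accurate candidates.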

## Maximum likelihood estimation of Gaussian mixture models without matrix operations

### Advances in Data Analysis and Classification (2015-12-01) 9: 371-394 , December 01, 2015

The Gaussian mixture model (GMM) is a popular tool for multivariate analysis, in particular, cluster analysis. The expectation–maximization (EM) algorithm is generally used to perform maximum likelihood (ML) estimation for GMMs due to the M-step existing in closed form and its desirable numerical properties, such as monotonicity. However, the EM algorithm has been criticized as being slow to converge and thus computationally expensive in some situations. In this article, we introduce the linear regression characterization (LRC) of the GMM. We show that the parameters of an LRC of the GMM can be mapped back to the natural parameters, and that a minorization–maximization (MM) algorithm can be constructed, which retains the desirable numerical properties of the EM algorithm, without the use of matrix operations. We prove that the ML estimators of the LRC parameters are consistent and asymptotically normal, like their natural counterparts. Furthermore, we show that the LRC allows for simple handling of singularities in the ML estimation of GMMs. Using numerical simulations in the R programming environment, we then demonstrate that the MM algorithm can be faster than the EM algorithm in various large data situations, where sample sizes range in the tens to hundreds of thousands and for estimating models with up to 16 mixture components on multivariate data with up to 16 variables.
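For reference, the standard EM iteration the abstract compares against can be sketched for a univariate mixture. This is plain EM with the closed-form M-step and monotone log-likelihood the abstract mentions, not the paper's LRC/MM algorithm; the quantile-based initialization is an illustrative choice.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100):
    """Standard EM for a univariate Gaussian mixture model.

    Returns (weights, means, variances, log-likelihood trace); the
    trace is non-decreasing, illustrating EM's monotonicity.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    pi = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))  # spread-out init
    var = np.full(k, np.var(x))
    ll_trace = []
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        dens = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var)) * pi
        tot = dens.sum(axis=1, keepdims=True)
        r = dens / tot
        ll_trace.append(np.log(tot).sum())
        # M-step (closed form): weighted sample moments
        nk = r.sum(axis=0)
        pi = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var, np.array(ll_trace)
```

On two well-separated clusters the component means are recovered and the log-likelihood never decreases across iterations.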

## Front Matter - Landwirtschaftliche und gartenbauliche Versuche mit SAS

### Landwirtschaftliche und gartenbauliche Versuche mit SAS (2015-01-01) , January 01, 2015

## Application to Financial and Economic Time Series Data

### Indexation and Causation of Financial Markets (2015-01-01) , January 01, 2015

A method for constructing a distribution-free index is applied to financial and economic time series data, and causations are analyzed based on power contributions. Highlighting the recent sequential financial crises, the applications focus primarily on credit default swap (CDS) markets, which often have heavy-tailed spread distributions. The first application detects that the European debt crisis has already spilled over worldwide in terms of sovereign CDS (SCDS) markets. The second application measures the impact of the US subprime crisis on Japanese domestic markets. Finally, in order to examine the usability of a distribution-free index, the clear polarization between advanced and emerging regions shown by GDP growth regional distribution-free indices, and the importance of examining sovereign risks in estimating economic growth, are observed. Moreover, the Japanese SCDS distribution-free index can be regarded as an underlying SCDS spread level reflecting domestic credit strength. These applications verify the effectiveness of a distribution-free index and confirm that applying our method to markets with insufficient information, such as fast-growing or immature markets, can be effective.

## The Influence of Upper Level NUTS on Lower Level Classification of EU Regions

### Data Science, Learning by Latent Structures, and Knowledge Discovery (2015-01-01): 525-531 , January 01, 2015

The Nomenclature of Territorial Units for Statistics, or Nomenclature of Units for Territorial Statistics (NUTS), is a geocode standard for referencing the subdivision of countries for statistical purposes. It covers the member states of the European Union. For each EU member country, a hierarchy of three levels is established by Eurostat. In 27 EU countries there are 97 regions at NUTS1, 271 regions at NUTS2 and 1,303 regions at NUTS3. They are the subject of many statistical analyses involving clustering methods. Given a partition of units at some level, we can ask whether this partition has been influenced by the upper-level division of Europe. For example, after finding groups of homogeneous NUTS2 regions, we would like to know whether the partition has been influenced by differences between countries. In this paper we propose a procedure for testing the statistical significance of the influence of upper-level units on a given partition. If there is no such influence, we can expect the number of between-group borders that are also country borders to follow a corresponding probability distribution. A simulation procedure for finding this distribution and its critical values for significance testing is proposed. A real-data example deals with the innovativeness of German districts and the influence of government regions on innovation processes.
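The counting-and-simulation idea can be sketched on a toy adjacency structure. Everything below is a hypothetical simplification: the region graph, the labels, and the plain random-relabelling null are illustrative stand-ins for the paper's actual simulation procedure over NUTS data.

```python
import numpy as np

def border_country_count(edges, cluster, country):
    """Count adjacent region pairs lying in different clusters AND in
    different countries (between-group borders that are country borders)."""
    return sum(1 for a, b in edges
               if cluster[a] != cluster[b] and country[a] != country[b])

def permutation_pvalue(edges, cluster, country, n_sim=2000, seed=0):
    """Monte-Carlo p-value for the observed count under random
    relabelling of the cluster assignment (toy null distribution)."""
    rng = np.random.default_rng(seed)
    cluster = np.asarray(cluster)
    obs = border_country_count(edges, cluster, country)
    hits = 0
    for _ in range(n_sim):
        perm = rng.permutation(cluster)  # shuffle cluster labels
        if border_country_count(perm_edges := edges, perm, country) >= obs:
            hits += 1
    return (hits + 1) / (n_sim + 1)  # add-one Monte-Carlo correction
```

A small observed p-value would indicate that the cluster partition aligns with country borders more than random relabelling would suggest.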

## Front Matter - Model-Free Prediction and Regression

### Model-Free Prediction and Regression (2015-01-01) , January 01, 2015

## An Introduction to Meta-Analysis in R

### Meta-Analysis with R (2015-01-01): 3-17 , January 01, 2015

The world is awash with information. For any question, the briefest of internet searches will throw up a range of frequently contradictory answers. This underlies increasing awareness of the value of systematic evidence synthesis—both qualitative and quantitative—by researchers, policy makers and the broader public. It is reflected in the continuing development of the Cochrane Collaboration (http://www.cochrane.org/), an international collaboration devoted to undertaking, publishing and promoting systematic evidence synthesis [2].

## The exact and near-exact distributions of the main likelihood ratio test statistics used in the complex multivariate normal setting

### TEST (2015-06-01) 24: 386-416 , June 01, 2015

In this paper the authors show how it is possible to establish a common structure for the exact distribution of the main likelihood ratio test (LRT) statistics used in the complex multivariate normal setting. In contrast to what happens with real random variables, for complex random variables it is shown that closed-form expressions can be obtained for the exact distributions of the LRT statistics for testing independence, the equality of mean vectors, and the equality of an expected-value matrix to a given matrix. For the LRT statistics for testing sphericity and the equality of covariance matrices, whose exact distributions have no manageable expression, easy-to-implement and very accurate near-exact distributions are developed. Numerical studies show that these near-exact distributions far outperform any other available approximations. As an example of application of the results obtained, the authors develop a near-exact approximation for the distribution of the LRT statistic for testing the equality of several complex normal distributions.