## General Spatial Econometric Conclusions

### Non-standard Spatial Statistics and Spatial Econometrics (2011) 1: 243

What should be clear from the exercises presented is that in most of them, classical “regression” has been combined with mathematical programming to obtain the desired estimators.

## A Mixed Linear-Logarithmic Specification for Lotka-Volterra Models with Endogenously Generated SDLS-Variables

### Non-standard Spatial Statistics and Spatial Econometrics (2011) 1: 179-187

In Arbia and Paelinck (2003a, b), a Lotka-Volterra model (LVM) is applied to the convergence-divergence problem of European regions in terms of incomes per capita. As the latter have to be non-negative, a double logarithmic version may be substituted for the original specification, a modification that removes at least part of the non-linearity of LVMs; this chapter reintroduces that non-linearity. The discussion begins with a general section on LVMs and then turns to a mixed linear-logarithmic specification, for which the positivity of the (possible) equilibrium solution is proved and a (sufficient) stability condition is derived.
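The classical dynamics underlying an LVM can be illustrated with a minimal forward-Euler simulation; the two-variable predator-prey system and all parameter values below are hypothetical textbook choices, not the chapter's regional-income specification.

```python
def lotka_volterra_step(x, y, a=1.1, b=0.4, c=0.4, d=0.1, dt=0.01):
    # One forward-Euler step of the classical two-variable LVM:
    #   dx/dt = a*x - b*x*y,   dy/dt = -c*y + d*x*y
    dx = (a * x - b * x * y) * dt
    dy = (-c * y + d * x * y) * dt
    return x + dx, y + dy

x, y = 10.0, 5.0  # hypothetical starting levels
for _ in range(1000):
    x, y = lotka_volterra_step(x, y)

# With positive starting values the trajectory stays strictly positive,
# the property that a logarithmic version of the model exploits.
print(x > 0 and y > 0)
```

The non-negativity visible here is exactly what makes a logarithmic change of variables admissible.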

## Introduction: Spatial Statistics

### Non-standard Spatial Statistics and Spatial Econometrics (2011) 1: 3

A wide array of topics in spatial statistics introduce methodological controversy: aggregate versus disaggregated data inference (e.g., the ecological fallacy), modelling the spatial covariance versus the spatial inverse covariance matrix, including fixed and/or random effects terms in a model specification, spatial autocorrelation specified as part of the mean response versus part of the variance parameter, and methods for simulating spatially autocorrelated random variables.

## Selecting Spatial Regimes by Threshold Analysis

### Non-standard Spatial Statistics and Spatial Econometrics (2011) 1: 189-197

The existence of differential spatial regimes has been revealed on different occasions (see for instance Arbia and Paelinck, 2003a, b; also see Chap. 14). Hence workable specifications are needed to compute possible frontiers or thresholds between those regimes.
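As a one-dimensional sketch of the threshold idea (not the chapter's method), separate regressions can be fitted on either side of each candidate threshold, scanning for the split that minimizes the total squared error; the data-generating values, including the break at x = 6, are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, 200))
# Two hypothetical regimes with a break at x = 6
y = np.where(x < 6, 1.0 + 0.5 * x, -2.0 + 1.5 * x) + rng.normal(scale=0.3, size=200)

def sse_split(x, y, t):
    # Fit a separate line in each regime; return the total squared error
    total = 0.0
    for mask in (x < t, x >= t):
        X = np.column_stack([np.ones(mask.sum()), x[mask]])
        b = np.linalg.lstsq(X, y[mask], rcond=None)[0]
        total += ((y[mask] - X @ b) ** 2).sum()
    return total

candidates = np.linspace(2, 8, 61)
best = min(candidates, key=lambda t: sse_split(x, y, t))
print(round(best, 1))  # near the true break at 6
```

A grid scan like this is the simplest workable specification; the chapter's threshold analysis addresses the genuinely spatial version of the problem.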

## Frequency Distributions for Simulated Spatially Autocorrelated Random Variables

### Non-standard Spatial Statistics and Spatial Econometrics (2011) 1: 37-73

Often quantitative data analysis begins with an inspection of attribute variable histograms. Ratio scale demographic variables, such as population density (which has a natural, meaningful absolute zero value), are expected to conform, at least approximately, to a normal probability distribution. Frequently this conformity requires that these variables be subjected to a symmetrizing, variance-stabilizing transformation, such as the Box-Cox class of power functions or the Manly exponential function. Counts (i.e., aggregated nominal measurement scale) data used to construct ratios, such as the crude fertility rate (i.e., the number of births per number of women in the child-bearing age cohort), are expected to conform to a Poisson probability distribution. And counts data that constitute some subset of a total, such as the percentage of people at least 100 years of age or the percentage of a population that is women in the child-bearing age cohort, are expected to conform to a binomial probability distribution.
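A minimal sketch of the symmetrizing role of a Box-Cox power transformation, using simulated lognormal "density" data (all values hypothetical): the logarithmic case (λ = 0) removes essentially all of the skewness.

```python
import numpy as np

def box_cox(x, lam):
    # Box-Cox power transformation for strictly positive data:
    # (x**lam - 1) / lam for lam != 0, and log(x) in the limit lam == 0
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def skewness(x):
    # Simple moment-based sample skewness
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

rng = np.random.default_rng(0)
density = rng.lognormal(mean=3.0, sigma=0.8, size=1000)  # skewed, density-like data
skew_raw = skewness(density)
skew_log = skewness(box_cox(density, 0.0))
print(round(skew_raw, 2), round(skew_log, 2))  # raw skew is large; log-skew near 0
```

For lognormal data λ = 0 is the exact symmetrizing choice; in practice λ would be estimated from the data.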

## Spatial Filter Versus Conventional Spatial Model Specifications: Some Comparisons

### Non-standard Spatial Statistics and Spatial Econometrics (2011) 1: 117-149

Spatial statistical analysis of geographically distributed counts data has been widely undertaken for many years, with initial analyses involving log-Gaussian approximations because initially only the normal probability model had been adapted in an implementable form (Ripley, 1990, pp. 9–10) to handle spatial autocorrelation (SA) effects (i.e., the tendency of similar values to cluster on a map, indicating positive self-correlation among observations). In more recent years, linear regression techniques have given way to generalized linear model techniques that account for non-normality (e.g., logistic and Poisson regression), as well as geographic dependence. In very recent years, both linear and generalized linear models have been supplemented with hierarchical Bayesian models, in part to deal with geographic regions having small counts. The objective of this chapter is to furnish a comparison of these principal techniques—both frequentist and Bayesian—available for map analysis with the newly formulated spatial filtering approach.
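The shift from log-Gaussian approximations to Poisson regression can be sketched with a bare-bones iteratively reweighted least squares (IRLS) fit of a log-link Poisson GLM; the simulated coefficients (0.5 and 0.8) are arbitrary, and no spatial term is included in this sketch.

```python
import numpy as np

def poisson_irls(X, y, n_iter=25):
    # IRLS for a log-link Poisson GLM: the counts-appropriate replacement
    # for a log-Gaussian approximation
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        w = mu                          # Poisson variance equals the mean
        z = X @ beta + (y - mu) / mu    # working response
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return beta

rng = np.random.default_rng(1)
x = rng.normal(size=500)
X = np.column_stack([np.ones(500), x])
y = rng.poisson(np.exp(0.5 + 0.8 * x))  # hypothetical counts data
beta = poisson_irls(X, y)
print(beta)  # roughly [0.5, 0.8]
```

Spatial filtering would enter such a model as additional covariates; here only the non-spatial GLM core is shown.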

## Finite Automata

### Non-standard Spatial Statistics and Spatial Econometrics (2011) 1: 199-206

In Paelinck (2002), attention was drawn to a special algebra—called a min-algebra—that might rule quite a few spatial econometric specifications; hereafter, applications of this idea are presented in the form of finite automata.
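A min-algebra replaces ordinary multiplication by addition and addition by minimization. A minimal sketch of the idea (not the chapter's automata) is the min-plus matrix product, where repeated squaring of a distance matrix yields shortest-path distances.

```python
import numpy as np

def min_plus_product(A, B):
    # Min-plus ("tropical") matrix product: C[i, j] = min_k (A[i, k] + B[k, j])
    n, m = A.shape[0], B.shape[1]
    C = np.empty((n, m))
    for i in range(n):
        for j in range(m):
            C[i, j] = np.min(A[i, :] + B[:, j])
    return C

# Hypothetical directed distance matrix; inf marks a missing direct link
D = np.array([[0.0, 3.0, np.inf],
              [np.inf, 0.0, 2.0],
              [1.0, np.inf, 0.0]])
print(min_plus_product(D, D))  # shortest paths using at most two links
```

In this algebra the matrix D itself acts like a transition table, which is what connects min-algebras to finite automata.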

## Learning from Residuals

### Non-standard Spatial Statistics and Spatial Econometrics (2011) 1: 207-215

Residuals often are considered troublesome noise in spatial—or, for that matter, non-spatial—econometric models. Current practice in spatial econometrics is to set up a spatial error model, more often than not with an exogenous spatial weight matrix W, in order to improve the efficiency of the estimators.
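A small simulation, with an invented line-graph weight matrix and λ = 0.7, illustrates what such a model is correcting for: errors generated by a spatial error process leave clearly autocorrelated OLS residuals.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100

# Hypothetical row-standardized weight matrix for points on a line
W = np.zeros((n, n))
for i in range(n):
    for j in (i - 1, i + 1):
        if 0 <= j < n:
            W[i, j] = 1.0
W /= W.sum(axis=1, keepdims=True)

lam = 0.7
eps = rng.normal(size=n)
u = np.linalg.solve(np.eye(n) - lam * W, eps)  # spatial error process u = lam*W*u + eps
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + u

# OLS ignores the dependence, so the residuals inherit it
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
moran = (resid @ W @ resid) / (resid @ resid)  # Moran-style autocorrelation index
print(round(moran, 2))
```

A clearly positive index signals that the residuals still carry spatial structure worth examining rather than discarding.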

Looking closely into the residuals is less common practice. And yet residuals can represent extremely precious building blocks for further work, as other disciplines have shown. Around 1850 the British chemists Mansfield and Perkin had the—for that era of chemistry—strange idea to analyze the composition of tar, until then used exclusively to surface roads (John Loudon McAdam had his name attached to that technique: tarmacadam); the result of their investigation was the roaring development of a whole branch of (industrial) chemistry: carbochemistry.

## Statistical Models for Spatial Data: Some Linkages and Communalities

### Non-standard Spatial Statistics and Spatial Econometrics (2011) 1: 25-35

Introductory mathematical statistics textbooks discuss topics such as the sample variance by invoking the assumption of independent and identically distributed (*iid*) observations. In other words, in terms of second moments, of the *n*^{2} possible covariations for a set of *n* observations, the independence assumption posits that *n*(*n* – 1) of these covariations have an expected value of 0, leaving only the *n* individual observation variance terms for analysis. This independence assumption is one of convenience, historically making mathematical statistical theory tractable. But it is an arcane specification that fails to provide an acceptable approximation to reality in many contexts.
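The counting argument can be verified directly: for a hypothetical n = 5 with variance 2, the iid covariance matrix is diagonal, so n(n - 1) = 20 of the 25 second-moment terms are fixed at zero by assumption.

```python
import numpy as np

n = 5
sigma2 = 2.0

# Under iid, the full second-moment structure collapses to a diagonal matrix
cov = sigma2 * np.eye(n)

total_terms = n * n                    # all possible covariations
zero_terms = int((cov == 0).sum())     # the n(n - 1) cross terms assumed away
variance_terms = int((cov != 0).sum()) # the n variances that remain
print(total_terms, zero_terms, variance_terms)  # 25 20 5
```

Spatial data violate exactly this structure: the off-diagonal terms are no longer zero, which is what spatial covariance modelling addresses.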

## Spatially Structured Random Effects: A Comparison of Three Popular Specifications

### Non-standard Spatial Statistics and Spatial Econometrics (2011) 1: 97-115

Random effects models are increasing in popularity (see, for example, Demidenko, 2004), partially because they have become estimable. One common specification casts the intercept term as a random effect, so that it represents variability about the conventional single-valued, constant mean. The role of a random effect in this context may be twofold: (1) supporting inferences beyond the specific fixed values of covariates employed in an analysis; and (2) accounting for correlation in a non-random sample of data being analyzed. Including a random effects term moves a frequentist analysis a bit closer to a Bayesian analysis, given that, for instance, the intercept term becomes a random variable rather than a constant, with a prior probability distribution (usually normal) attached to it. Nevertheless, a *bona fide* Bayesian analysis would have a random variable for each of the *n* intercept components comprising such a random effect, maintaining some degree of differentiation here between the frequentist and Bayesian approaches.
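A random intercept can be sketched by simulation: group intercepts drawn around a grand mean, with a between-group standard deviation (here a hypothetical 1.5) recovered roughly from the between- and within-group variances.

```python
import numpy as np

rng = np.random.default_rng(3)
groups, per = 20, 30
grand_mean = 5.0
tau = 1.5  # hypothetical between-group standard deviation

# Each group's intercept is a normal draw around the grand mean...
alpha = grand_mean + tau * rng.normal(size=groups)
# ...and observations scatter around their group intercept (unit within-group s.d.)
y = alpha[:, None] + rng.normal(size=(groups, per))

between = y.mean(axis=1).var(ddof=1)   # approx tau**2 + 1/per
within = y.var(axis=1, ddof=1).mean()  # approx 1
print(round(between, 2), round(within, 2))
```

The single variance parameter tau governing all twenty intercepts is what distinguishes this frequentist-style random effect from a fully Bayesian treatment, which would place a distribution on each intercept component.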