## SEARCH

#### Country

##### (see all 143)

- United States 18758
- China 13833
- United Kingdom 7008
- Germany 6960

#### Institution

##### (see all 45391)

- Chinese Academy of Sciences 1313
- University of California 992
- Tsinghua University 983
- Russian Academy of Sciences 856
- Zhejiang University 839

#### Author

##### (see all 157337)

- Gesellschaft für Informatik 190
- Reimer, Helmut 146
- Rihaczek, Karl 127
- Glänzel, Wolfgang 103
- Wang, Wei 100

#### Publication

##### (see all 179)

- Multimedia Tools and Applications 5722
- Scientometrics 5248
- Neural Computing and Applications 3174
- Datenschutz und Datensicherheit - DuD 2852
- The Journal of Supercomputing 2816

#### Subject

##### (see all 159)

- Computer Science 102811
- Artificial Intelligence (incl. Robotics) 31200
- Computer Science, general 23363
- Data Structures, Cryptology and Information Theory 21066
- Theory of Computation 17950

## CURRENTLY DISPLAYING

Showing 1 to 10 of 102811 matching articles.

## Performance improvement of spatial modulation-assisted FSO systems over Gamma–Gamma fading channels with geometric spreading

### Photonic Network Communications (2017-10-01) 34: 213-220 , October 01, 2017

A number of studies have recently proposed optical spatial modulation (SM) as a simple, power- and bandwidth-efficient modulation scheme for free-space optical (FSO) communication systems. These studies assume that an active laser source sends its signal only to one targeted photodetector (PD). However, unintended PDs can still receive the signal from the active source because of geometric spreading (i.e., laser beam broadening). In addition, if the fading channels between the active source and multiple PDs are correlated, the probability of wrongly detecting the active source's index during the spatial demodulation process increases. In this paper, we first analyze the impact of geometric spreading on the performance of FSO systems using SM over uncorrelated Gamma–Gamma fading channels. We find that the transmission-bandwidth advantage of SM cannot compensate for its susceptibility to geometric spreading. We then propose combining *N*-SM with pulse-position modulation (*L*-PPM) and transmit diversity ($$M \times 1$$ MISO) to improve the performance of SM-based FSO systems. Numerical results, validated by Monte Carlo simulations, confirm the superiority of the proposed system over conventional ones.
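The Gamma–Gamma model used above can be sanity-checked numerically. The sketch below (with illustrative turbulence parameters α and β, not values from the paper) draws unit-mean Gamma–Gamma irradiance samples as the product of two independent Gamma variates and compares the empirical scintillation index with the closed form 1/α + 1/β + 1/(αβ):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 4.0, 2.0        # illustrative turbulence parameters (assumed)
n = 100_000

# Gamma-Gamma irradiance: product of two independent unit-mean Gamma variates
x = rng.gamma(alpha, 1.0 / alpha, n)
y = rng.gamma(beta, 1.0 / beta, n)
I = x * y                      # unit-mean irradiance samples

# scintillation index: sigma_I^2 = 1/alpha + 1/beta + 1/(alpha*beta)
si_est = I.var() / I.mean() ** 2
si_theory = 1 / alpha + 1 / beta + 1 / (alpha * beta)
```

With 10^5 samples the empirical index lands close to the theoretical value, which is a quick check that a Monte Carlo channel model is implemented correctly before running error-rate simulations over it.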

## 3D-line clipping algorithms — a comparative study

### The Visual Computer (1994-02-01) 11: 96-104 , February 01, 1994

Several well-known line–polyhedron intersection methods are summarized, and new accelerating modifications are presented. Results comparing the known and newly developed methods are included. The new methods use the fact that every line can be described as the intersection of two planes.
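The two-plane description of a line pairs naturally with parametric clipping of a segment against the half-spaces bounding a convex polyhedron. A minimal Cyrus–Beck-style sketch (a baseline, not one of the paper's accelerated methods):

```python
import numpy as np

def clip_segment_convex(p0, p1, planes):
    """Clip the segment p0->p1 against a convex polyhedron given as half-spaces
    (n, d) with n @ x <= d meaning "inside" (Cyrus-Beck / Liang-Barsky style).
    Returns the clipped endpoints, or None if the segment misses the volume."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    t0, t1 = 0.0, 1.0
    for n, off in planes:
        denom = n @ d
        num = off - n @ p0
        if abs(denom) < 1e-12:
            if num < 0:              # parallel to the plane and outside it
                return None
            continue
        t = num / denom
        if denom > 0:                # exiting this half-space: tighten upper bound
            t1 = min(t1, t)
        else:                        # entering this half-space: tighten lower bound
            t0 = max(t0, t)
        if t0 > t1:                  # interval emptied: segment is fully clipped
            return None
    return p0 + t0 * d, p0 + t1 * d

# example polyhedron: the unit cube as six half-spaces
axes = np.eye(3)
cube = [(axes[i], 1.0) for i in range(3)] + [(-axes[i], 0.0) for i in range(3)]
```

A segment from (-1, 0.5, 0.5) to (2, 0.5, 0.5) clips to the chord from (0, 0.5, 0.5) to (1, 0.5, 0.5); the accelerated methods in the paper aim to reduce the number of plane tests this loop performs.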

## A note on the variance of rank-based selection strategies for genetic algorithms and genetic programming

### Genetic Programming and Evolvable Machines (2007-09-01) 8: 221-237 , September 01, 2007

This paper evaluates different forms of rank-based selection that are used with genetic algorithms and genetic programming. Many types of rank-based selection have exactly the same expected value in terms of the sampling rate allocated to each member of the population. However, the variance associated with that sampling rate can vary depending on how selection is implemented. We examine two forms of tournament selection and compare these to linear rank-based selection using an explicit formula. Because selective pressure has a direct impact on population diversity, we also examine the interaction between selective pressure and different mutation strategies.
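The equal-expected-value claim can be checked by simulation: binary tournament selection (with replacement) has the same expected sampling rates as linear ranking with selective pressure 2, even though the two differ in variance. A small sketch, with rank 0 denoting the best individual and the population size chosen for illustration:

```python
import random

random.seed(1)
n, trials = 10, 200_000

def binary_tournament(n):
    """Pick two indices uniformly with replacement; the lower (better) rank wins."""
    return min(random.randrange(n), random.randrange(n))

counts = [0] * n
for _ in range(trials):
    counts[binary_tournament(n)] += 1

empirical = [c / trials for c in counts]
# closed-form sampling probability: P(rank i) = (2*(n - i) - 1) / n^2,
# identical to linear ranking with selective pressure 2
expected = [(2 * (n - i) - 1) / n ** 2 for i in range(n)]
```

With n = 10 both curves put probability 0.19 on the best rank and 0.01 on the worst; the paper's point is that matching these expectations says nothing about the per-generation variance of the realized sample.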

## Development of a unified FDTD-FEM library for electromagnetic analysis with CPU and GPU computing

### The Journal of Supercomputing (2013-04-01) 64: 28-37 , April 01, 2013

The present paper describes an optimized C++ library for the study of electromagnetics. The implementation is based on the Finite-Difference Time-Domain (FDTD) method for transient analysis and the Finite Element Method (FEM) for electrostatics. Both methods share the same core and are optimized for CPU and GPU computing. To illustrate its use, the FEM is applied to solve Laplace's equation, analyzing the relation between surface curvature and electrostatic potential for a long cylindrical conductor, whereas the FDTD method is applied to analyze thin-film filters at optical wavelengths. Furthermore, the performance of the CPU and GPU versions is compared as a function of the simulation grid size. This approach allows the study of a wide range of electromagnetic problems, taking advantage of the strengths of each numerical method and the computing power of modern CPUs and GPUs.
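As a rough illustration of the transient solver's core (a generic textbook sketch, not the paper's C++ library), the 1-D free-space FDTD leapfrog update at the "magic" time step c·Δt = Δx fits in a few lines of normalized units:

```python
import numpy as np

# 1-D free-space FDTD (Yee leapfrog) at the magic time step c*dt = dx.
# Field sizes are staggered: Ez on nodes, Hy on the cells between them.
nx, nt = 200, 150
Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)

for t in range(nt):
    Hy += np.diff(Ez)                  # H update (normalized units)
    Ez[1:-1] += np.diff(Hy)            # E update; Ez[0] = Ez[-1] = 0 act as PEC walls
    Ez[nx // 2] += np.exp(-((t - 30) / 8.0) ** 2)  # soft Gaussian source at the center
```

Each time step is two vectorized stencil sweeps, which is why the method maps so directly onto both SIMD CPU code and GPU kernels, the comparison the paper carries out.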

## Proportional fair throughput allocation in multirate IEEE 802.11e wireless LANs

### Wireless Networks (2007-10-01) 13: 649-662 , October 01, 2007

Under heterogeneous radio conditions, Wireless LAN stations may use different modulation schemes, leading to a heterogeneity of bit rates. In such a situation, 802.11 DCF allocates the same throughput to all stations independently of their transmitting bit rate; as a result, the channel is used by low-bit-rate stations most of the time, and efficiency is low. In this paper, we propose a more efficient throughput allocation criterion based on proportional fairness. We find that, in a proportional fair allocation, the same share of channel time is given to high- and low-bit-rate stations, and, as a result, high-bit-rate stations obtain more throughput. We propose two schemes based on the upcoming 802.11e standard to achieve this allocation, and compare their delay and throughput performance.
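The allocation argument can be made concrete with an idealized equal-frame-size model (protocol overheads ignored; this is a sketch of the two fairness criteria, not the paper's 802.11e schemes): with equal channel-time shares each station's throughput is its own bit rate divided by the number of stations, whereas DCF-style throughput fairness is bounded by the harmonic mean of the rates.

```python
def proportional_fair_throughput(rates):
    """Proportional fair allocation in a multirate WLAN: every station gets an
    equal share of channel *time*, so its throughput scales with its own rate."""
    n = len(rates)
    return [r / n for r in rates]

def dcf_throughput(rates):
    """Idealized DCF behavior: every station gets the same *throughput*.
    With equal frame sizes, each station's share is the harmonic mean of the
    rates divided by n -- slow stations dominate the airtime."""
    n = len(rates)
    harmonic = n / sum(1.0 / r for r in rates)
    return [harmonic / n] * n
```

For rates (54, 54, 6) Mb/s, the proportional fair allocation gives (18, 18, 2) Mb/s, while the DCF-style allocation gives every station about 4.9 Mb/s: time fairness rewards fast stations without starving the slow one of airtime.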

## Semantic image classification using statistical local spatial relations model

### Multimedia Tools and Applications (2008-09-01) 39: 169-188 , September 01, 2008

In this paper, a statistical model called statistical local spatial relations (SLSR) is presented as a novel learning model that combines spatial and statistical information for semantic image classification. The model is inspired by probabilistic latent semantic analysis (PLSA) for text mining, where PLSA is used to discover topics in a corpus under the bag-of-words document representation. In SLSR, image categories are treated as topics, so an image containing instances of multiple categories can be modeled as a mixture of topics. More significantly, SLSR introduces spatial-relation information as a factor that is not present in PLSA. SLSR is invariant under rotation, scaling, translation and affine transformations and can handle partial occlusion. Using the Dirichlet process and a variational Expectation-Maximization learning algorithm, SLSR is developed into an image classification algorithm. SLSR uses an unsupervised process that captures both spatial relations and statistical information simultaneously. Experiments on standard data sets show that SLSR is a promising model for semantic image classification.
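Since SLSR builds on PLSA, a minimal PLSA EM loop on a document-word count matrix may help fix ideas. This is the plain text-mining model P(w|d) = Σ_z P(z|d)P(w|z), without SLSR's spatial-relation factor, and the smoothing constant is an implementation convenience, not part of the model:

```python
import numpy as np

def plsa(counts, n_topics, iters=50, seed=0):
    """Minimal PLSA fitted by EM on a (docs x words) count matrix."""
    rng = np.random.default_rng(seed)
    nd, nw = counts.shape
    p_z_d = rng.dirichlet(np.ones(n_topics), size=nd)     # P(z|d), shape (nd, nz)
    p_w_z = rng.dirichlet(np.ones(nw), size=n_topics)     # P(w|z), shape (nz, nw)
    for _ in range(iters):
        # E-step: responsibilities P(z|d,w), shape (nd, nw, nz)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        resp = joint / joint.sum(axis=2, keepdims=True)
        weighted = counts[:, :, None] * resp
        # M-step: re-estimate both conditionals from expected counts
        p_w_z = weighted.sum(axis=0).T + 1e-12
        p_w_z /= p_w_z.sum(axis=1, keepdims=True)
        p_z_d = weighted.sum(axis=1) + 1e-12
        p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    return p_z_d, p_w_z
```

SLSR keeps this mixture-of-topics view (categories as topics) but adds a spatial-relation term to the factorization, which is what the plain model above lacks.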

## A formal proof of the ε-optimality of absorbing continuous pursuit algorithms using the theory of regular functions

### Applied Intelligence (2014-10-01) 41: 974-985 , October 01, 2014

The most difficult part of the design and analysis of Learning Automata (LA) is the formal proof of their convergence accuracy. The mathematical techniques used for the different families (Fixed Structure, Variable Structure, Discretized, etc.) are quite distinct. Among the families of LA, Estimator Algorithms (EAs) are certainly the fastest, and within this family, the Pursuit algorithms are considered the pioneering schemes. Informally, if the environment is stationary, their *ε*-optimality is defined as their ability to converge to the optimal action with an arbitrarily large probability, provided the learning parameter is sufficiently small/large. The existing proofs of all the reported EAs follow the same fundamental principles, and to clarify this, in the interest of simplicity, we concentrate on the family of Pursuit algorithms. Recently, it has been reported by Ryan and Omkar (J Appl Probab 49(3):795–805) that the previous proofs of *ε*-optimality for all the reported EAs share a common flaw. The flaw lies in the condition that apparently supports the so-called "monotonicity" property of the probability of selecting the optimal action, which states that after some time instant *t*_{0}, the reward probability estimates are ordered correctly *forever*. The authors of the various proofs instead proved only that the reward probability estimates are ordered correctly *at a single point of time* after *t*_{0}, which does not guarantee the ordering *forever*, rendering the previous proofs incorrect.
While Ryan and Omkar presented a rectified proof of the *ε*-optimality of the Continuous Pursuit Algorithm (CPA), the pioneering EA, this paper provides a new proof for the Absorbing CPA (ACPA), i.e., an algorithm that follows the CPA paradigm but is artificially given absorbing states whenever any action probability is arbitrarily close to unity. Unlike the previous flawed proofs, instead of examining the monotonicity property of the action probabilities, it examines their submartingale property and then, unlike the traditional approach, invokes the theory of regular functions to prove that the probability of converging to the optimal action can be made arbitrarily close to unity. We believe that the proof is both unique and pioneering, that it adds insight into the convergence of different EAs, and that it can form the basis for formally demonstrating the *ε*-optimality of other Estimator Algorithms that are artificially rendered absorbing.
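For readers unfamiliar with the scheme being analyzed, the following sketch shows the shape of an absorbing continuous pursuit loop: pull an action according to the probability vector, update maximum-likelihood reward estimates, move the vector toward the estimated-best action, and stop once some probability is within ε of unity. The step size, absorbing threshold, and the 0.5 initialization of unpulled estimates are illustrative choices, not the paper's settings:

```python
import random

def acpa(reward_probs, lam=0.01, eps=1e-3, seed=0, max_steps=200_000):
    """Sketch of an Absorbing Continuous Pursuit Algorithm on a Bernoulli
    environment; returns the index of the action the scheme absorbs into."""
    rng = random.Random(seed)
    r = len(reward_probs)
    p = [1.0 / r] * r                             # action probability vector
    wins = [0] * r                                # observed rewards per action
    pulls = [0] * r                               # times each action was pulled
    for _ in range(max_steps):
        a = rng.choices(range(r), weights=p)[0]
        pulls[a] += 1
        wins[a] += rng.random() < reward_probs[a]
        est = [wins[i] / pulls[i] if pulls[i] else 0.5 for i in range(r)]
        m = max(range(r), key=est.__getitem__)    # estimated-best action
        # pursuit update: move p a step of size lam toward the unit vector e_m
        p = [(1 - lam) * q + (lam if i == m else 0.0) for i, q in enumerate(p)]
        if max(p) >= 1.0 - eps:                   # artificial absorbing barrier
            break
    return max(range(r), key=p.__getitem__)
```

The absorbing barrier is exactly the feature the paper's proof must handle: once the walk crosses it, the scheme is committed, so the argument works with the submartingale property of the action probabilities rather than with monotonic ordering of the estimates.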

## Synthesizing trees by plantons

### The Visual Computer (2006-03-31) 22: 238-248 , March 31, 2006

In this paper, we present a two-level statistical model for characterizing the stochastic and specific nature of trees. At the low level, we define *plantons*, groups of similar organs, to depict tree organ details statistically. At the high level, a set of transitions between plantons is provided to describe the stochastic distribution of organs.

Based on this tree model, we propose a novel tree modeling approach that synthesizes trees from plantons extracted from tree samples. All tree samples are captured from the real world. We have designed a maximum likelihood estimation algorithm to acquire the two-level statistical tree model from single or multiple samples. Experimental results show that our new model is capable of synthesizing new trees with similar, yet visually different, shapes.

## Towards evolvable Internet architecture-design constraints and models analysis

### Science China Information Sciences (2014-11-01) 57: 1-24 , November 01, 2014

There is a general consensus in academia and industry about the success of the Internet architecture. However, with the development of diversified applications, the existing architecture faces growing challenges in scalability, security, mobility and performance. A novel evolvable Internet architecture framework is proposed in this paper to meet continuously changing application requirements. The basic idea of evolvability is to relax the constraints that limit the development of the architecture while adhering to the core design principles of the Internet. Three important design constraints that ensure the construction of an evolvable architecture, namely the evolvability constraint, the economic-adaptability constraint and the manageability constraint, are described comprehensively. We consider that an evolvable architecture can be developed from the network layer under these design constraints. Moreover, we believe that the addressing system is the foundation of the Internet; we therefore propose a general address platform that provides a more open and efficient network environment for research on and development of the evolvable architecture.

## Semantic change computation: A successive approach

### World Wide Web (2016-05-01) 19: 375-415 , May 01, 2016

The prevalence of creativity in emergent online media language calls for a more effective computational approach to semantic change. Two divergent metaphysical understandings underlie the task: the juxtaposition view of change and the succession view of change. This paper argues that the succession view better reflects the essence of semantic change and proposes a successive framework for automatic semantic change detection. The framework analyzes semantic change both at the word level and at the level of individual senses within a word by transforming the task into change-pattern detection over time-series data. At the word level, the framework models a word's semantic change with an S-shaped model and successfully correlates change patterns with classical semantic change categories such as broadening, narrowing, new-word coining, metaphorical change and metonymic change. At the sense level, the framework measures the conventionality of individual senses and distinguishes temporary word usage, basic senses, novel senses and disappearing senses, again with an S-shaped model. Experiments at both levels yield higher precision than the baseline, supporting the succession view of semantic change.
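The S-shaped model of sense diffusion can be illustrated with a logistic curve fitted by a coarse grid search; the frequency series below is synthetic, generated from known parameters, and is not data from the paper:

```python
import numpy as np

def logistic(t, k, t0):
    """S-shaped curve: slope k, midpoint t0 -- the diffusion shape used to
    model how a novel sense spreads from rare usage to convention."""
    return 1.0 / (1.0 + np.exp(-k * (t - t0)))

# toy relative-frequency series for a hypothetical novel sense (synthetic data)
t = np.arange(20, dtype=float)
freq = logistic(t, k=0.8, t0=10.0)

# coarse grid search for the best-fitting slope and midpoint
ks = np.linspace(0.1, 2.0, 39)
t0s = np.linspace(0.0, 19.0, 191)
sse = [((logistic(t, k, t0) - freq) ** 2).sum() for k in ks for t0 in t0s]
best = int(np.argmin(sse))
k_hat, t0_hat = ks[best // len(t0s)], t0s[best % len(t0s)]
```

The fitted slope and midpoint are the quantities a change detector would track: a large k with a recent t0 suggests a rapidly conventionalizing sense, while a decaying series fits the mirrored curve of a disappearing one.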