## SEARCH

#### Country

##### (see all 659)

- United States 106528
- Germany 84000
- China 61672
- United Kingdom 53109

#### Institution

##### (see all 205173)

- University of California 5316
- Chinese Academy of Sciences 4366
- Carnegie Mellon University 3881
- Tsinghua University 3658
- Technische Universität München 2818

#### Author

##### (see all 689520)

- Shekhar, Shashi 1282
- Xiong, Hui 1254
- MacDonald, Matthew 1217
- Freeman, Adam 833
- Troelsen, Andrew 490

#### Publication

##### (see all 16625)

- Computer Science and Communications Dictionary 21379
- Multimedia Tools and Applications 5225
- Scientometrics 5086
- Encyclopedia of Database Systems 4179
- Encyclopedia of GIS 3744

#### Subject

##### (see all 679)

- Computer Science 838535
- Artificial Intelligence (incl. Robotics) 290189
- Computer Communication Networks 210055
- Information Systems Applications (incl. Internet) 192940
- Algorithm Analysis and Problem Complexity 143299

## CURRENTLY DISPLAYING:


Showing 31 to 40 of 838535 matching Articles

## User Requirements for PC-CAD Systems

### GI — 18. Jahrestagung (1988-01-01): 187 , January 01, 1988

### Abstract

Owing to their increased computing power and storage capacity, personal computers are increasingly being used for complex tasks in graphical data processing. Object-oriented drawing programs are being extended into CAD programs through additional functions. In the process, the user-interface design problems known from universal CAD programs on larger machines reappear. By contrast, CAD programs for specific tasks can be equipped with more user-friendly dialogue interfaces, as an example demonstrates. This is essentially because such programs contain knowledge about the goals and criteria of the design. From this, conclusions can be drawn for the design of new CAD systems.

## Summary

### Erfahrung, Intuition, Diskursives Denken und Künstliche Intelligenz als Grundlage ärztlicher Entscheidungen (1992-01-01) 1992 / 3: 25-26 , January 01, 1992

### Abstract

I have attempted to walk through the foundations of medical decision-making, particularly its diagnostic side. My summary is correspondingly short and simple (Darst. 30).

## Bilateral Teleoperation System with Time Varying Communication Delay: Stability and Convergence

### Autonomous and Intelligent Systems (2011-01-01) 6752: 156-166 , January 01, 2011

This paper addresses the trajectory tracking control problem of internet-based bilateral nonlinear teleoperators in the presence of symmetric and asymmetric time-varying communication delay. The design combines proportional-derivative (PD) terms with nonlinear adaptive control terms in order to cope with parametric uncertainty in the master and slave robot dynamics. The master and slave teleoperators are coupled by velocity and delayed position signals. A Lyapunov-Krasovskii-like functional is employed to ensure asymptotic stability of the master-slave closed-loop teleoperator system under time-varying communication delay. The stability condition allows the designer to choose the control gains so as to achieve the desired tracking of position and velocity signals on the master and slave sides.
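The delayed position coupling described above can be made concrete with a toy simulation. The sketch below is an illustration only, not the paper's adaptive controller: it assumes a single degree of freedom, unit masses, a constant (rather than time-varying) delay, and hand-picked PD gains.

```python
from collections import deque

# Minimal 1-DoF sketch of PD-based bilateral teleoperation with a
# constant communication delay: each side is driven toward the DELAYED
# position of the other side, plus local velocity damping.
# All parameters are invented for illustration.

def simulate(steps=4000, dt=0.001, delay_steps=100, kp=100.0, kd=20.0):
    qm = vm = qs = vs = 0.0             # master/slave position and velocity
    buf_m = deque([0.0] * delay_steps)  # master -> slave position channel
    buf_s = deque([0.0] * delay_steps)  # slave -> master position channel
    for k in range(steps):
        f_op = 1.0 if k < 2000 else 0.0     # operator pushes, then releases
        qm_d = buf_m.popleft(); buf_m.append(qm)  # delayed master position
        qs_d = buf_s.popleft(); buf_s.append(qs)  # delayed slave position
        tau_m = kp * (qs_d - qm) - kd * vm + f_op  # master PD + operator force
        tau_s = kp * (qm_d - qs) - kd * vs          # slave PD
        vm += tau_m * dt; qm += vm * dt   # explicit Euler, unit masses
        vs += tau_s * dt; qs += vs * dt
    return qm, qs

qm, qs = simulate()
# After the operator releases, master and slave positions converge.
```

Note the damping gain is chosen comfortably larger than `kp * delay`, in line with the general intuition that delayed position coupling needs sufficient local damping to remain stable.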

## Gamifying Support

### Human-Computer Interaction. Applications and Services (2013-01-01) 8005: 284-291 , January 01, 2013

When applied with care and consideration, gamification can have significant positive effects on support. Utilizing gamification elements, such an leaderboards, levels, badges, and rewards, within a community can help engage customers and encourage them to generate support content. This allows them to self-serve and more quickly resolve their issues. Internal support engineers can also be motivated when exposed to a point system with appropriate challenges, levels, and rewards. The result can increase overall job satisfaction, increase engineer positivity, and lead to better customer service.

## Proportional fair throughput allocation in multirate IEEE 802.11e wireless LANs

### Wireless Networks (2007-10-01) 13: 649-662 , October 01, 2007

Under heterogeneous radio conditions, Wireless LAN stations may use different modulation schemes, leading to a heterogeneity of bit rates. In such a situation, 802.11 DCF allocates the same throughput to all stations independently of their transmitting bit rate; as a result, the channel is used by low bit rate stations most of the time, and efficiency is low. In this paper, we propose a more efficient throughput allocation criterion based on proportional fairness. We find that, in a proportional fair allocation, the same share of channel time is given to high and low bit rate stations, and, as a result, high bit rate stations obtain more throughput. We propose two schemes based on the upcoming 802.11e standard to achieve this allocation, and compare their delay and throughput performance.
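The allocation argument is easy to illustrate numerically. The sketch below (with made-up bit rates, ignoring protocol overhead; the paper's mechanisms are 802.11e-based, not this toy model) contrasts an equal-throughput split, where slow stations monopolize airtime, with an equal-airtime proportional fair split:

```python
# Toy comparison of two throughput allocations for stations with
# heterogeneous bit rates. Rates and station count are invented.

def equal_throughput(rates):
    """802.11 DCF behaviour: every station gets the same throughput,
    so each station's airtime share is proportional to 1/rate and the
    common throughput is limited by the slow stations."""
    t = 1.0 / sum(1.0 / r for r in rates)
    return [t for _ in rates]

def proportional_fair(rates):
    """Proportional fair allocation: every station gets the same
    airtime share 1/n, so throughput scales with its own bit rate."""
    n = len(rates)
    return [r / n for r in rates]

rates = [54.0, 54.0, 6.0]        # two fast stations, one slow (Mb/s)
print(equal_throughput(rates))   # all equal, dragged down by the slow station
print(proportional_fair(rates))  # fast stations keep 9x the slow one's rate
```

With these numbers the equal-throughput split gives every station under 5 Mb/s, while equal airtime gives the fast stations 18 Mb/s each, which is the efficiency gap the abstract describes.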

## Avoiding Ontology Confusion in ETL Processes

### New Trends in Databases and Information Systems (2015-01-01) 539: 119-126 , January 01, 2015

Extract-Transform-Load ($$\mathcal{ETL}$$) is a crucial phase in the Data Warehouse ($$\mathcal{DW}$$) design life-cycle that copes with many issues: data provenance, data heterogeneity, process automation, data refreshment, execution time, etc. Ontologies and Semantic Web technologies have been widely used in the $$\mathcal{ETL}$$ phase. Ontologies are a *buzzword* used by many research communities, such as *Databases*, *Artificial Intelligence* (AI), and *Natural Language Processing* (NLP), where each community has its own type of ontology: conceptual canonical ontologies (for databases), conceptual non-canonical ontologies (for AI), and linguistic ontologies (for NLP). In $$\mathcal{ETL}$$ approaches, all three types of ontologies are considered. However, these studies do not take into account the types of the ontologies used, which usually affect the quality of the managed data. In this paper we propose a semantic $$\mathcal{ETL}$$ approach that considers both canonical and non-canonical layers. To evaluate the effectiveness of our approach, experiments are conducted using Oracle semantic databases referencing the LUBM benchmark ontology.

## Semantic image classification using statistical local spatial relations model

### Multimedia Tools and Applications (2008-09-01) 39: 169-188 , September 01, 2008

In this paper, a statistical model called statistical local spatial relations (SLSR) is presented as a novel learning model combining spatial and statistical information for semantic image classification. The model is inspired by Probabilistic Latent Semantic Analysis (PLSA) for text mining. In text analysis, PLSA is used to discover topics in a corpus using the bag-of-words document representation. In SLSR, we treat image categories as topics, so an image containing instances of multiple categories can be modeled as a mixture of topics. More significantly, SLSR introduces spatial relation information as a factor which is not present in PLSA. SLSR has rotation, scale, translation, and affine invariance properties and can handle partial occlusion. Using a Dirichlet process and a variational Expectation-Maximization learning algorithm, SLSR is implemented as an image classification algorithm. SLSR uses an unsupervised process which can capture both spatial relations and statistical information simultaneously. Experiments on standard data sets show that the SLSR model is a promising model for semantic image classification problems.
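The "documents as mixtures of topics" idea that SLSR borrows from PLSA can be sketched with plain EM over a term-document count matrix. The corpus, topic count, and iteration budget below are invented for illustration; SLSR itself adds spatial relations and a variational treatment on top of this basic scheme.

```python
import random

# Toy PLSA via EM on a bag-of-words count matrix counts[d][w].
# Learns P(z|d) (topic mixture per document) and P(w|z) (topic-word
# distributions). Purely illustrative, not the SLSR model.

def plsa(counts, n_topics=2, iters=50, seed=0):
    rng = random.Random(seed)
    n_docs, n_words = len(counts), len(counts[0])
    norm = lambda row: [x / sum(row) for x in row]
    # Random positive initialisation, rows normalised to distributions.
    p_z_d = [norm([rng.random() + 1e-6 for _ in range(n_topics)])
             for _ in range(n_docs)]
    p_w_z = [norm([rng.random() + 1e-6 for _ in range(n_words)])
             for _ in range(n_topics)]
    for _ in range(iters):
        new_zd = [[1e-12] * n_topics for _ in range(n_docs)]
        new_wz = [[1e-12] * n_words for _ in range(n_topics)]
        for d in range(n_docs):
            for w in range(n_words):
                if counts[d][w] == 0:
                    continue
                # E-step: posterior P(z | d, w) proportional to P(z|d) P(w|z)
                post = [p_z_d[d][z] * p_w_z[z][w] for z in range(n_topics)]
                s = sum(post)
                post = [x / s for x in post]
                # M-step accumulation, weighted by the count n(d, w)
                for z in range(n_topics):
                    new_zd[d][z] += counts[d][w] * post[z]
                    new_wz[z][w] += counts[d][w] * post[z]
        p_z_d = [norm(r) for r in new_zd]
        p_w_z = [norm(r) for r in new_wz]
    return p_z_d, p_w_z

# Two clearly separated "topics": documents 0-1 use words 0-1,
# documents 2-3 use words 2-3.
counts = [[8, 6, 0, 0], [7, 9, 1, 0], [0, 0, 9, 7], [1, 0, 6, 8]]
p_z_d, p_w_z = plsa(counts)
```

On this separable corpus the learned P(z|d) rows split the documents into the two word groups, which is the mixture-of-topics behaviour the abstract relies on.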

## system loading

### Computer Science and Communications Dictionary (2001-01-01) : 1720 , January 01, 2001

## Formal Correctness of Security Protocols

### Formal Correctness of Security Protocols (2007-01-01) , January 01, 2007

## A formal proof of the ε-optimality of absorbing continuous pursuit algorithms using the theory of regular functions

### Applied Intelligence (2014-10-01) 41: 974-985 , October 01, 2014

The most difficult part in the design and analysis of Learning Automata (LA) consists of the formal proofs of their convergence accuracies. The mathematical techniques used for the different families (Fixed Structure, Variable Structure, Discretized, etc.) are quite distinct. Among the families of LA, Estimator Algorithms (EAs) are certainly the fastest, and within this family, the set of Pursuit algorithms has been considered to be the pioneering scheme. Informally, if the environment is stationary, their *ε*-optimality is defined as their ability to converge to the optimal action with an arbitrarily large probability, if the learning parameter is sufficiently small/large. The existing proofs of all the reported EAs follow the same fundamental principles, and to clarify this, in the interest of simplicity, we shall concentrate on the family of Pursuit algorithms. Recently, it has been reported by Ryan and Omkar (J Appl Probab 49(3):795–805) that the previous proofs for the *ε*-optimality of all the reported EAs have a common flaw. The flaw lies in the condition which apparently supports the so-called "monotonicity" property of the probability of selecting the optimal action, which states that after some time instant *t*_{0}, the reward probability estimates will be ordered correctly *forever*. The authors of the various proofs have instead offered a proof that the reward probability estimates are ordered correctly *at a single point of time* after *t*_{0}, which, in turn, does not guarantee the ordering *forever*, rendering the previous proofs incorrect.
While Ryan and Omkar (J Appl Probab 49(3):795–805) presented a rectified proof of the *ε*-optimality of the Continuous Pursuit Algorithm (CPA), the pioneering EA, in this paper a new proof is provided for the Absorbing CPA (ACPA), i.e., an algorithm which follows the CPA paradigm but which artificially has absorbing states whenever any action probability is arbitrarily close to unity. Unlike the previous flawed proofs, instead of examining the monotonicity property of the action probabilities, it examines their submartingale property, and then, unlike the traditional approach, invokes the theory of Regular functions to prove that the probability of converging to the optimal action can be made arbitrarily close to unity. We believe that the proof is both unique and pioneering, and adds insights into the convergence of different EAs. It can also form the basis for formally demonstrating the *ε*-optimality of other Estimator algorithms which are artificially rendered absorbing.
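To make the pursuit paradigm concrete, here is a toy Continuous Pursuit learner for a stationary environment. The reward probabilities, learning rate, and step budget are invented for illustration, and this sketch omits the absorbing-state modification that distinguishes ACPA; it only shows the core loop the abstract refers to: maintain maximum-likelihood reward estimates and move the action-probability vector toward the currently best-estimated action.

```python
import random

# Toy Continuous Pursuit Algorithm (CPA) sketch. At each step: sample an
# action from p, observe a Bernoulli reward, update that action's ML
# reward estimate, then "pursue" the action with the best estimate by
# mixing p toward its unit vector with rate lam.

def cpa(reward_probs, steps=20000, lam=0.01, seed=0):
    rng = random.Random(seed)
    r = len(reward_probs)
    p = [1.0 / r] * r        # action-probability vector
    pulls = [1] * r          # selection counts (init 1 covers the priming pull)
    wins = [0] * r           # reward counts
    for a in range(r):       # prime each estimate with one trial
        wins[a] += rng.random() < reward_probs[a]
    for _ in range(steps):
        a = rng.choices(range(r), weights=p)[0]
        pulls[a] += 1
        wins[a] += rng.random() < reward_probs[a]
        # Pursue the action with the best ML reward estimate wins/pulls.
        m = max(range(r), key=lambda i: wins[i] / pulls[i])
        p = [(1 - lam) * pi for pi in p]
        p[m] += lam
    return p

p = cpa([0.8, 0.6, 0.4])
# With a small learning rate, p typically concentrates on the best action;
# the cited flaw concerns proving this rigorously, since the estimates need
# not stay correctly ordered forever after any finite time.
```

The absorbing variant analyzed in the paper would additionally freeze the automaton once some component of `p` comes arbitrarily close to unity.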