
## A formal proof of the ε-optimality of absorbing continuous pursuit algorithms using the theory of regular functions

### Applied Intelligence (2014-10-01) 41: 974-985 , October 01, 2014

The most difficult part in the design and analysis of Learning Automata (LA) is the formal proof of their convergence accuracy. The mathematical techniques used for the different families (Fixed Structure, Variable Structure, Discretized, etc.) are quite distinct. Among the families of LA, Estimator Algorithms (EAs) are certainly the fastest, and within this family, the set of Pursuit algorithms are considered to be the pioneering schemes. Informally, if the environment is stationary, their *ε*-optimality is defined as their ability to converge to the optimal action with an arbitrarily large probability, provided the learning parameter is sufficiently small/large. The existing proofs for all the reported EAs follow the same fundamental principles, and to clarify this, in the interest of simplicity, we shall concentrate on the family of Pursuit algorithms. Recently, it has been reported in Ryan and Omkar (J Appl Probab 49(3):795–805) that the previous proofs of *ε*-optimality for all the reported EAs share a common flaw. The flaw lies in the condition that apparently supports the so-called “monotonicity” property of the probability of selecting the optimal action, which states that after some time instant *t*_{0}, the reward probability estimates will be ordered correctly *forever*. The authors of the various proofs have instead offered a proof that the reward probability estimates are ordered correctly *at a single point of time* after *t*_{0}, which does not guarantee the ordering *forever*, rendering the previous proofs incorrect.
While in Ryan and Omkar (J Appl Probab 49(3):795–805) a rectified proof was presented for the *ε*-optimality of the Continuous Pursuit Algorithm (CPA), the pioneering EA, in this paper a new proof is provided for the Absorbing CPA (ACPA), i.e., an algorithm that follows the CPA paradigm but artificially has absorbing states whenever any action probability is arbitrarily close to unity. Unlike the previous flawed proofs, instead of examining the monotonicity property of the action probabilities, our proof examines their submartingale property, and then, unlike the traditional approach, invokes the theory of Regular functions to show that the probability of converging to the optimal action can be made arbitrarily close to unity. We believe that the proof is both unique and pioneering, and that it adds insight into the convergence of different EAs. It can also form the basis for formally demonstrating the *ε*-optimality of other Estimator Algorithms that are artificially rendered absorbing.
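The scheme discussed above can be sketched in a few lines. This is a minimal illustration of an absorbing pursuit automaton, not the paper's proof apparatus; the function name `acpa`, the parameter defaults, and the simulated stationary environment are all assumptions:

```python
import random

def acpa(reward_probs, lam=0.005, threshold=0.999, max_steps=200000, seed=0):
    """Absorbing Continuous Pursuit Automaton (illustrative sketch).

    reward_probs: the environment's true reward probability per action
    (unknown to the learner); lam is the learning parameter, and
    epsilon-optimality holds as lam -> 0."""
    rng = random.Random(seed)
    r = len(reward_probs)
    p = [1.0 / r] * r                       # action-probability vector
    est = [0.0] * r                         # reward-probability estimates
    counts = [1] * r
    for i in range(r):                      # initialise each estimate once
        est[i] = 1.0 if rng.random() < reward_probs[i] else 0.0
    for _ in range(max_steps):
        a = rng.choices(range(r), weights=p)[0]
        reward = rng.random() < reward_probs[a]
        counts[a] += 1                      # running-average estimate update
        est[a] += (float(reward) - est[a]) / counts[a]
        m = max(range(r), key=lambda i: est[i])
        # Pursue: move p toward the unit vector of the best-estimated action.
        p = [(1 - lam) * pi + (lam if i == m else 0.0) for i, pi in enumerate(p)]
        if max(p) > threshold:              # artificial absorbing barrier
            break
    return max(range(r), key=lambda i: p[i])
```

With a small `lam`, the automaton converges to the optimal action with high probability, which is exactly the *ε*-optimality property whose proof the paper rectifies.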

## Improving the reliability of heuristic multiple fault diagnosis via the EC-based Genetic Algorithm

### Applied Intelligence (1992-07-01) 2: 5-23 , July 01, 1992

Engineered Conditioning (EC) is a Genetic Algorithm operator that works together with the typical genetic algorithm operators: mate selection, crossover, and mutation, in order to improve convergence toward an optimal multiple fault diagnosis. When incorporated within a typical genetic algorithm, the resulting *hybrid* scheme produces improved reliability by exploiting the global nature of the genetic algorithm as well as “local” improvement capabilities of the Engineered Conditioning operator.

We show the significance of the Engineered Conditioning operator for Multiple Fault Diagnosis (i.e., finding the collection of simultaneously occurring disorders that best explains the observed symptoms or disorder manifestations). Within the Multiple Fault Diagnosis domain, we show that diagnostic reliability improves when the Engineered Conditioning operator is used with the genetic algorithm, compared with results from the genetic algorithm without the new operator. Reliability is based on the number of diagnostic trials in which the two versions of the genetic algorithm find the optimal diagnosis. For comparison purposes, optimal diagnoses have been computed using a search method that is guaranteed to find the optimal solution.
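As a hedged illustration of the hybrid scheme, the sketch below pairs a standard genetic algorithm with a greedy bit-flip hill climber standing in for the Engineered Conditioning operator; the toy diagnosis instance (`CAUSES`, `OBSERVED`), the fitness formula, and all names are hypothetical, not the paper's formulation:

```python
import random

# Hypothetical toy instance: which candidate disorder explains which symptoms.
CAUSES = {0: {"fever", "cough"}, 1: {"rash"}, 2: {"cough", "fatigue"}, 3: {"nausea"}}
OBSERVED = {"fever", "cough", "rash"}

def fitness(genome):
    """Parsimonious covering: reward explained symptoms, penalise
    unexplained side effects and diagnosis size."""
    explained = set()
    for disorder, present in enumerate(genome):
        if present:
            explained |= CAUSES[disorder]
    return 2 * len(explained & OBSERVED) - len(explained - OBSERVED) - sum(genome)

def condition(genome):
    """Greedy single-bit hill climbing, standing in for the
    Engineered Conditioning local-improvement operator."""
    g = list(genome)
    improved = True
    while improved:
        improved = False
        base = fitness(g)
        for i in range(len(g)):
            g[i] ^= 1                  # try flipping one bit
            if fitness(g) > base:
                improved = True        # keep the flip, rescan
                break
            g[i] ^= 1                  # undo a non-improving flip
    return g

def hybrid_ga(pop_size=20, generations=10, seed=0):
    rng = random.Random(seed)
    n = len(CAUSES)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]         # mate selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # mutation
                child[rng.randrange(n)] ^= 1
            children.append(condition(child))  # EC-style conditioning of each child
        pop = children
    return max(pop, key=fitness)
```

The global search of the genetic operators proposes diverse candidates, and the local conditioning step polishes each one, mirroring the division of labour described in the abstract.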

## Experiments in active vision with real and virtual robot heads

### Applied Intelligence (1995-07-01) 5: 237-250 , July 01, 1995

In the emerging paradigm of animate vision, visual processes are not thought of as independent of cognitive or motor processing, but as an integrated system within the context of visual behavior. Intimate coupling of sensory and motor systems has been found to significantly improve the performance of behavior-based vision systems. Studying active vision systems requires sensory-motor systems, and designing, building, and operating such a test bed is a challenging task. In this paper we describe the status of ongoing work in developing a sensory-motor robotic system, R2H, with ten degrees of freedom (DOF) for research in active vision. To complement the R2H system, a Graphical Simulation and Animation (GSA) environment has also been developed. The objective of building the GSA system is to create a comprehensive design tool for designing and studying the behavior of active systems and their interactions with the environment. The GSA system helps researchers develop high-performance, reliable software and hardware effectively. The GSA environment integrates sensing and motor actions and features a complete kinematic simulation of the R2H system, its sensors, and its workspace. With the aid of the GSA environment, Depth from Focus (DFF), Depth from Vergence, and Depth from Stereo modules have been implemented and tested. The power and usefulness of the GSA system as a research tool is demonstrated by acquiring and analyzing images in the real and virtual worlds using the same software, implemented and tested in the virtual world.

## Repeated patterns detection in big data using classification and parallelism on LERP Reduced Suffix Arrays

### Applied Intelligence (2016-10-01) 45: 567-597 , October 01, 2016

The suffix array is a powerful data structure, used mainly for pattern detection in strings. The main disadvantage of a full suffix array is its quadratic *O*(*n*^{2}) space requirement when the actual suffixes are needed. In our previous work [39], we introduced the innovative All Repeated Patterns Detection (ARPaD) algorithm and the Moving Longest Expected Repeated Pattern (MLERP) process. The former detects all repeated patterns in a string using a partition of the full suffix array, while the latter is capable of analyzing large strings regardless of their size. Furthermore, the notion of the Longest Expected Repeated Pattern (LERP), also introduced by the authors in previous work, reduces the space required for the full suffix array to linear *O*(*n*). However, so far the LERP value has had to be specified in an ad hoc manner based on experimental or empirical values. To overcome this problem, the Probabilistic Existence of LERP theorem is proven in this paper, and a formula for an accurate upper-bound estimation of the LERP value is introduced, using only the length of the string and the size of the alphabet used to construct it. The importance of this method is that it bounds the LERP value optimally without any preprocessing or prior knowledge of the string's characteristics. Moreover, the new data structure LERP Reduced Suffix Array is defined; it is a variation of the suffix array with the advantage that classification and parallelism can be applied directly on the data structure. All other alternative methodologies face the very common problem of fitting some data structure into a computer's memory or disk in order to apply time-efficient methods for pattern detection.
The proposed methodology allows the above-mentioned problem to be decomposed so that smaller classes of it can be distributed to different systems, to which current, state-of-the-art techniques such as parallelism and cloud computing can then be applied using advanced DBMSs capable of handling the storage and analysis of big data. This methodology is implemented by invoking our innovative ARPaD algorithm. Extensive experiments have been conducted on small, comparable strings of the Champernowne constant and DNA, as well as on extremely large strings of *π* with lengths up to 68 billion digits. Furthermore, the novelty and superiority of our methodology have also been tested on a real-life application: a Distributed Denial of Service (DDoS) attack early-warning system.
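A minimal sketch of the underlying idea (not the ARPaD, MLERP, or LERP algorithms themselves): once the suffixes are sorted, every repeated pattern appears as a common prefix of lexicographically adjacent suffixes. The quadratic toy construction below is for illustration only:

```python
def repeated_patterns(s, min_len=2):
    """Report every substring of length >= min_len that occurs at least
    twice in s, via a (toy, O(n^2 log n)) suffix array: any repeated
    pattern is a common prefix of lexicographically adjacent suffixes."""
    n = len(s)
    sa = sorted(range(n), key=lambda i: s[i:])     # suffix array of start indices
    found = set()
    for a, b in zip(sa, sa[1:]):
        lcp = 0                                    # longest-common-prefix length
        while a + lcp < n and b + lcp < n and s[a + lcp] == s[b + lcp]:
            lcp += 1
        for k in range(min_len, lcp + 1):
            found.add(s[a:a + k])
    return sorted(found)
```

For example, `repeated_patterns("banana")` yields `["an", "ana", "na"]`. The paper's contribution is making this style of analysis feasible at the billion-character scale, where neither the full suffix array nor naive LCP scans fit in memory.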

## Distributed routing in packet-switching networks by counterpropagation network

### Applied Intelligence (1994-03-01) 4: 67-82 , March 01, 1994

Routing is a problem of considerable importance in a packet-switching network, because it allows both optimization of the available transmission speeds and minimization of the time required to deliver information. In classical centralized routing algorithms, each packet reaches its destination along the shortest path, although some network bandwidth is lost through overheads. By contrast, distributed routing algorithms usually limit the overloading of transmission links, but they cannot guarantee optimal paths between source and destination nodes because of their mainly local view of the problem. The authors' aim is to reconcile these two advantages of classical routing strategies through the use of neural networks. In the approach proposed here, the routing strategy guarantees the delivery of information along almost optimal paths while distributing the calculation to the various switching nodes. The article assesses the performance of this approach in terms of both routing paths and efficiency of bandwidth use, through comparison with classical approaches.
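For context, the classical distributed scheme such approaches are measured against can be sketched as distance-vector (distributed Bellman-Ford) routing, where each node updates its distance table using only its neighbours' tables; the function, the link format, and the synchronous round structure below are illustrative assumptions, not the article's counterpropagation network:

```python
def distance_vector(links):
    """Distributed Bellman-Ford (distance-vector) routing sketch: each node
    repeatedly improves its distance table using only its neighbours'
    current tables, i.e., a purely local view of the network."""
    nodes = sorted({n for edge in links for n in edge})
    INF = float("inf")
    dist = {u: {v: (0.0 if u == v else INF) for v in nodes} for u in nodes}
    neighbours = {u: {} for u in nodes}
    for (u, v), cost in links.items():             # undirected links
        neighbours[u][v] = cost
        neighbours[v][u] = cost
    for _ in range(len(nodes)):                    # enough rounds to converge
        for u in nodes:
            for v in nodes:
                candidates = [dist[u][v]] + [c + dist[n][v]
                                             for n, c in neighbours[u].items()]
                dist[u][v] = min(candidates)
    return dist

# Triangle network: the direct a-c link is worse than the two-hop route via b.
routes = distance_vector({("a", "b"): 1.0, ("b", "c"): 2.0, ("a", "c"): 10.0})
```

Each node here computes shortest paths from local exchanges alone, which is the "mainly local view" whose limitations the neural approach aims to overcome.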

## A comparative study of stochastic optimization methods in electric motor design

### Applied Intelligence (2007-10-01) 27: 101-111 , October 01, 2007

The efficiency of the universal electric motors widely used in home appliances can be improved by optimizing the geometry of the rotor and the stator. Expert designers traditionally approach this task by iteratively evaluating candidate designs and improving them according to their experience. However, the existence of reliable numerical simulators and powerful stochastic optimization techniques makes it possible to automate the design procedure. We present a comparative study of six stochastic optimization algorithms for designing optimal rotor and stator geometries of a universal electric motor, where the primary objective is to minimize the motor's power losses. We compare three methods from evolutionary computation (a generational evolutionary algorithm, a steady-state evolutionary algorithm, and differential evolution), two particle-based methods (particle swarm optimization and an electromagnetism-like algorithm), and a recently proposed multilevel ant stigmergy algorithm. By comparing their performance, the most efficient method for solving the problem is identified and an explanation of its success is offered.
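One of the six methods compared, differential evolution, can be sketched as follows. This is the textbook DE/rand/1/bin variant under assumed defaults, with a sphere function standing in for the motor power-loss simulator; it is not the paper's implementation:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           generations=150, seed=0):
    """DE/rand/1/bin sketch: mutate with a scaled difference of two random
    vectors, binomially cross with the parent, keep the trial if no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)      # ensure one gene comes from the mutant
            trial = []
            for j in range(dim):
                if j == jrand or rng.random() < CR:
                    lo, hi = bounds[j]
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))   # clip to design bounds
                else:
                    trial.append(pop[i][j])
            trial_cost = f(trial)
            if trial_cost <= cost[i]:       # greedy one-to-one selection
                pop[i], cost[i] = trial, trial_cost
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]

# Toy objective standing in for the simulated motor power losses.
def sphere(x):
    return sum(v * v for v in x)
```

In the study's setting, `f` would be the numerical motor simulator and `bounds` the admissible ranges of the rotor and stator geometry parameters.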

## Erratum to: Improved initial vertex ordering for exact maximum clique search

### Applied Intelligence (2017-01-01) 46: 240 , January 01, 2017

## Discrete time neural networks

### Applied Intelligence (1993-02-01) 3: 91-105 , February 01, 1993

Traditional feedforward neural networks are static structures that simply map input to output. To better reflect the dynamics in the biological system, time dependency is incorporated into the network by using Finite Impulse Response (FIR) linear filters to model the processes of axonal transport, synaptic modulation, and charge dissipation. While a constructive proof gives a theoretical equivalence between the class of problems solvable by the FIR model and the static structure, certain practical and computational advantages exist for the FIR model. Adaptation of the network is achieved through an efficient gradient descent algorithm, which is shown to be a temporal generalization of the popular backpropagation algorithm for static networks. Applications of the network are discussed with a detailed example of using the network for time series prediction.
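A minimal, hypothetical form of the idea: replace a static synapse with an FIR filter, so the neuron fires on a weighted window of past inputs rather than on the current input alone. The helper name and the single-neuron scope are illustrative assumptions:

```python
import math
from collections import deque

def fir_neuron(weights, bias=0.0):
    """A neuron whose synapse is an FIR filter: the activation is a sigmoid
    of a weighted window of past inputs, where weights[k] multiplies the
    input received k steps ago."""
    history = deque([0.0] * len(weights), maxlen=len(weights))
    def step(x):
        history.appendleft(x)                 # history[k] = input k steps ago
        s = sum(w * h for w, h in zip(weights, history)) + bias
        return 1.0 / (1.0 + math.exp(-s))     # sigmoid activation
    return step
```

A neuron built as `fir_neuron([0.0, 1.0])` responds only to the input one step in the past, which is the kind of time dependency a static feedforward map cannot express without such filters.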

## Outlier-eliminated k-means clustering algorithm based on differential privacy preservation

### Applied Intelligence (2016-12-01) 45: 1179-1191 , December 01, 2016

Individual privacy may be compromised while mining data for valuable information, and the potential of data mining is hindered by the need to preserve privacy. It is well known that *k*-means clustering algorithms based on differential privacy must preserve privacy while maintaining the availability of the clustering, but it is difficult to balance both aspects in traditional algorithms. In this paper, an outlier-eliminated differential privacy (OEDP) *k*-means algorithm is proposed that both preserves privacy and improves clustering efficiency. The proposed approach selects the initial centre points according to the distribution density of the data points, and adds Laplacian noise to the original data for privacy preservation. Both a theoretical analysis and comparative experiments were conducted. The theoretical analysis shows that the proposed algorithm satisfies *ε*-differential privacy. Furthermore, the experimental results show that, compared to other methods, the proposed algorithm effectively preserves data privacy and improves the clustering results in terms of accuracy, stability, and availability.
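The privacy mechanism can be illustrated with a hedged sketch (not the paper's OEDP algorithm verbatim): Lloyd iterations in which the per-cluster sums and counts are perturbed with Laplace noise, under the assumptions that coordinates are scaled to [0, 1], the per-round sensitivity is bounded by dim + 1, and the budget is split evenly across rounds:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverse-CDF."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_kmeans(points, k, epsilon, rounds=5, seed=0):
    """Sketch of differentially private k-means: Lloyd iterations where the
    per-cluster coordinate sums and point counts are released with Laplace
    noise instead of exactly."""
    rng = random.Random(seed)
    dim = len(points[0])
    centres = [list(p) for p in rng.sample(points, k)]
    scale = rounds * (dim + 1) / epsilon    # assumed sensitivity, even budget split
    for _ in range(rounds):
        sums = [[0.0] * dim for _ in range(k)]
        counts = [0.0] * k
        for p in points:                    # assign each point to its nearest centre
            c = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centres[j])))
            counts[c] += 1.0
            for d in range(dim):
                sums[c][d] += p[d]
        for j in range(k):                  # noisy centre update
            noisy_n = counts[j] + laplace_noise(scale, rng)
            if noisy_n < 1.0:
                continue                    # keep the old centre for (near-)empty clusters
            centres[j] = [(sums[j][d] + laplace_noise(scale, rng)) / noisy_n
                          for d in range(dim)]
    return centres
```

The trade-off the abstract describes is visible here: a smaller `epsilon` means larger `scale`, hence noisier centres and lower clustering availability, which is what the density-based initialisation and outlier elimination aim to mitigate.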

## Combining rational and biological factors in virtual agent decision making

### Applied Intelligence (2011-02-01) 34: 87-101 , February 01, 2011

To enhance the believability of virtual agents, this paper presents an agent-based modelling approach for decision making that integrates rational reasoning based on means-end analysis with personal psychological and biological aspects. The agent model developed is a combination of a BDI model and a utility-based decision model operating in the context of specific desires and beliefs. The approach is illustrated by addressing the behaviour of violent criminals, thereby creating a model for virtual criminals. In a number of simulation experiments, the model has been tested in the context of a street robbery scenario. In addition, a user study has been performed, which confirms that the model enhances the believability of virtual agents.
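The utility-based half of such a model can be sketched as desire-weighted expected utility; the function, the dictionary format, and the action and desire names below are purely illustrative assumptions, not the paper's model:

```python
def choose_action(actions, desires):
    """Pick the action with the highest desire-weighted utility.

    actions: {action_name: {desire_name: fulfilment level in [-1, 1]}}
    desires: {desire_name: strength}, the agent's current BDI desires."""
    def utility(effects):
        return sum(desires.get(d, 0.0) * v for d, v in effects.items())
    return max(actions, key=lambda name: utility(actions[name]))
```

In a full BDI combination, the desire strengths would themselves be modulated by the psychological and biological state the paper models, so the same menu of actions can yield different choices for differently conditioned agents.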