
## A formal proof of the ε-optimality of absorbing continuous pursuit algorithms using the theory of regular functions

### Applied Intelligence (2014-10-01) 41: 974-985 , October 01, 2014

The most difficult part in the design and analysis of Learning Automata (LA) is the formal proof of their convergence accuracies. The mathematical techniques used for the different families (Fixed Structure, Variable Structure, Discretized, etc.) are quite distinct. Among the families of LA, Estimator Algorithms (EAs) are certainly the fastest, and within this family, the Pursuit algorithms are considered to be the pioneering schemes. Informally, if the environment is stationary, their *ε*-optimality is defined as their ability to converge to the optimal action with an arbitrarily large probability, if the learning parameter is sufficiently small/large. The existing proofs of all the reported EAs follow the same fundamental principles, and to clarify this, in the interest of simplicity, we shall concentrate on the family of Pursuit algorithms. Recently, it has been reported by Ryan and Omkar (J Appl Probab 49(3):795–805) that the previous proofs of the *ε*-optimality of all the reported EAs share a common flaw. The flaw lies in the condition which apparently supports the so-called “monotonicity” property of the probability of selecting the optimal action, which states that after some time instant *t*_{0}, the reward probability estimates will be ordered correctly *forever*. The authors of the various proofs have instead offered a proof that the reward probability estimates are ordered correctly *at a single point of time* after *t*_{0}, which does not guarantee the ordering *forever*, rendering the previous proofs incorrect.
While Ryan and Omkar (J Appl Probab 49(3):795–805) presented a rectified proof of the *ε*-optimality of the Continuous Pursuit Algorithm (CPA), the pioneering EA, this paper provides a new proof for the Absorbing CPA (ACPA), i.e., an algorithm which follows the CPA paradigm but which artificially has absorbing states whenever any action probability is arbitrarily close to unity. Unlike the previous flawed proofs, the new proof does not examine the monotonicity property of the action probabilities; rather, it examines their submartingale property and then, departing from the traditional approach, invokes the theory of Regular functions to prove that the probability of converging to the optimal action can be made arbitrarily close to unity. We believe that the proof is both unique and pioneering, and that it adds insight into the convergence of different EAs. It can also form the basis for formally demonstrating the *ε*-optimality of other Estimator algorithms which are artificially rendered absorbing.
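The pursuit update that these proofs analyze can be sketched as follows. This is a minimal illustration of the CPA paradigm only, not the paper's formal construction; the absorbing variant (ACPA) would additionally freeze the automaton once some action probability crosses a threshold near unity. The two-action Bernoulli environment in the usage example is hypothetical.

```python
import random

def continuous_pursuit(reward_probs, lam=0.01, steps=20000, seed=0):
    """Minimal sketch of the Continuous Pursuit Algorithm (CPA).

    At each step the automaton samples an action from its probability
    vector p, updates a running reward-probability estimate d_hat for
    that action, and then moves p a small step (learning parameter lam)
    towards the unit vector of the currently best-estimated action.
    """
    rng = random.Random(seed)
    r = len(reward_probs)
    p = [1.0 / r] * r                 # action probabilities
    d_hat = [0.0] * r                 # reward-probability estimates
    counts = [0] * r                  # times each action was chosen
    for _ in range(steps):
        # sample an action according to p
        u, acc, a = rng.random(), 0.0, r - 1
        for i in range(r):
            acc += p[i]
            if u < acc:
                a = i
                break
        reward = 1 if rng.random() < reward_probs[a] else 0
        counts[a] += 1
        d_hat[a] += (reward - d_hat[a]) / counts[a]   # running mean
        m = max(range(r), key=lambda i: d_hat[i])     # best estimate
        # "pursue" the best-estimated action
        p = [(1 - lam) * p[i] + (lam if i == m else 0.0) for i in range(r)]
    return p
```

With a sufficiently small `lam`, the probability of the superior action (here, the one with reward probability 0.8) is driven arbitrarily close to unity, which is exactly the *ε*-optimality property whose proof the paper rectifies.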

## Improving the reliability of heuristic multiple fault diagnosis via the EC-based Genetic Algorithm

### Applied Intelligence (1992-07-01) 2: 5-23 , July 01, 1992

Engineered Conditioning (EC) is a Genetic Algorithm operator that works together with the typical genetic algorithm operators: mate selection, crossover, and mutation, in order to improve convergence toward an optimal multiple fault diagnosis. When incorporated within a typical genetic algorithm, the resulting *hybrid* scheme produces improved reliability by exploiting the global nature of the genetic algorithm as well as “local” improvement capabilities of the Engineered Conditioning operator.

We show the significance of the Engineered Conditioning operator during Multiple Fault Diagnosis (i.e., finding the collection of simultaneously occurring disorders that best explains the observed symptoms or disorder manifestations). Within the Multiple Fault Diagnosis domain, we show the improvement of diagnostic reliability when using the engineered conditioning operator with the genetic algorithm compared to results from the genetic algorithm without the new operator. Reliability is based on the number of diagnostic trials for which the two versions of the genetic algorithm find the optimal diagnosis. For comparison purposes, optimal diagnoses have been computed using a search method that is guaranteed to find the optimal solution.
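The abstract does not spell out the Engineered Conditioning operator itself; the sketch below only illustrates the hybrid pattern it describes — a genetic algorithm whose offspring are post-processed by a greedy local-improvement step — on a hypothetical parsimonious-cover formulation of multiple fault diagnosis. The causal matrix, fitness weights, and operator details are all illustrative assumptions.

```python
import random

# Hypothetical causal matrix: CAUSES[d] = symptoms disorder d can explain.
CAUSES = [{0, 1}, {1, 2}, {3}, {0, 3}]
OBSERVED = {0, 1, 2, 3}

def fitness(diag):
    """Reward explained symptoms; penalize diagnosis size (parsimony)."""
    covered = set()
    for d, on in enumerate(diag):
        if on:
            covered |= CAUSES[d]
    return len(covered & OBSERVED) - 0.4 * sum(diag)

def condition(diag):
    """Local-improvement step (stand-in for EC): flip any bit that helps."""
    diag = list(diag)
    improved = True
    while improved:
        improved = False
        for d in range(len(diag)):
            trial = diag[:]
            trial[d] ^= 1
            if fitness(trial) > fitness(diag):
                diag, improved = trial, True
    return diag

def hybrid_ga(pop_size=20, gens=30, seed=1):
    """GA with mate selection, one-point crossover, mutation, and a
    local conditioning pass applied to every child."""
    rng = random.Random(seed)
    n = len(CAUSES)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation mate selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]           # one-point crossover
            if rng.random() < 0.1:              # mutation
                child[rng.randrange(n)] ^= 1
            children.append(condition(child))   # local improvement
        pop = children
    return max(pop, key=fitness)
```

The conditioning pass is what makes the scheme hybrid: the GA explores globally while every child is pushed to a local optimum, which is the reliability mechanism the abstract attributes to the EC operator.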

## Repeated patterns detection in big data using classification and parallelism on LERP Reduced Suffix Arrays

### Applied Intelligence (2016-10-01) 45: 567-597 , October 01, 2016

The suffix array is a powerful data structure, used mainly for pattern detection in strings. The main disadvantage of a full suffix array is its quadratic *O*(*n*^{2}) space requirement when the actual suffixes are needed. In our previous work [39], we introduced the All Repeated Patterns Detection (ARPaD) algorithm and the Moving Longest Expected Repeated Pattern (MLERP) process. The former detects all repeated patterns in a string using a partition of the full suffix array, and the latter is capable of analyzing large strings regardless of their size. Furthermore, the notion of the Longest Expected Repeated Pattern (LERP), also introduced by the authors in a previous work, reduces the space required for the full suffix array to linear *O*(*n*). Until now, however, the LERP value had to be specified in an ad hoc manner based on experimental or empirical values. To overcome this problem, the Probabilistic Existence of LERP theorem is proven in this paper, and a formula for an accurate upper-bound estimate of the LERP value is introduced that uses only the length of the string and the size of the alphabet from which the string is constructed. The importance of this method is that it optimally upper-bounds the LERP value without any preprocessing or prior knowledge of the string's characteristics. Moreover, the new data structure LERP Reduced Suffix Array is defined; it is a variation of the suffix array with the advantage that classification and parallelism can be implemented directly on the data structure. All alternative methodologies face the very common problem of fitting a data structure into computer memory or disk in order to apply time-efficient methods for pattern detection.
The proposed methodology recasts this problem so that smaller classes of it can be distributed across different systems, after which current state-of-the-art techniques such as parallelism and cloud computing can be applied using advanced DBMSs capable of handling the storage and analysis of big data. The methodology is implemented by invoking our ARPaD algorithm. Extensive experiments have been conducted on small, comparable strings of the Champernowne constant and of DNA, as well as on extremely large strings of *π* with length up to 68 billion digits. Furthermore, the novelty and superiority of our methodology have also been tested on a real-life application: a Distributed Denial of Service (DDoS) attack early-warning system.
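The full ARPaD/MLERP machinery cannot be reproduced from an abstract, but the underlying principle — that adjacent suffixes in a sorted suffix array expose every repeated pattern through their common prefixes, optionally capped by a LERP-style length bound — can be sketched on a toy scale:

```python
def repeated_patterns(s, max_len=None):
    """Toy sketch: find all repeated substrings of s using a naive
    suffix array. `max_len` plays the role of a LERP-style cap on how
    long a repeated pattern we bother to track.

    Adjacent suffixes in sorted order share their longest common
    prefix, and every prefix of that common prefix occurs at least
    twice in s.
    """
    sa = sorted(range(len(s)), key=lambda i: s[i:])  # naive suffix array
    patterns = set()
    for a, b in zip(sa, sa[1:]):
        # longest common prefix of neighbouring suffixes
        lcp = 0
        while (a + lcp < len(s) and b + lcp < len(s)
               and s[a + lcp] == s[b + lcp]):
            lcp += 1
            if max_len is not None and lcp >= max_len:
                break
        for k in range(1, lcp + 1):
            patterns.add(s[a:a + k])   # occurs at least twice in s
    return patterns
```

For `"banana"` this yields the five repeated substrings `a`, `an`, `ana`, `n`, `na`. The naive sort here is *O*(*n*^{2} log *n*); the paper's contribution is precisely about making this idea scale to strings of billions of symbols via the LERP bound, partitioning, and parallelism.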

## A comparative study of stochastic optimization methods in electric motor design

### Applied Intelligence (2007-10-01) 27: 101-111 , October 01, 2007

The efficiency of universal electric motors that are widely used in home appliances can be improved by optimizing the geometry of the rotor and the stator. Expert designers traditionally approach this task by iteratively evaluating candidate designs and improving them according to their experience. However, the existence of reliable numerical simulators and powerful stochastic optimization techniques makes it possible to automate the design procedure. We present a comparative study of six stochastic optimization algorithms for designing optimal rotor and stator geometries of a universal electric motor, where the primary objective is to minimize the motor power losses. We compare three methods from the domain of evolutionary computation (a generational evolutionary algorithm, a steady-state evolutionary algorithm, and differential evolution), two particle-based methods (particle swarm optimization and the electromagnetism-like algorithm), and a recently proposed multilevel ant stigmergy algorithm. By comparing their performance, the most efficient method for solving the problem is identified and an explanation of its success is offered.
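Of the six algorithms compared, differential evolution is a convenient one to sketch. The following is a generic DE/rand/1/bin loop, not the paper's tuned configuration; the `loss` callable stands in for the numerical motor-loss simulator and `bounds` for the geometry parameters, both hypothetical here.

```python
import random

def differential_evolution(loss, bounds, np_=20, f=0.7, cr=0.9,
                           gens=100, seed=0):
    """Minimal DE/rand/1/bin sketch.

    For each target vector, build a mutant from three distinct others
    (a + f * (b - c)), binomially cross it with the target, clamp to
    bounds, and keep the trial only if it is no worse (greedy selection).
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [loss(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            j_rand = rng.randrange(dim)          # guarantee one mutant gene
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == j_rand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # respect geometry bounds
            tc = loss(trial)
            if tc <= cost[i]:                    # greedy selection
                pop[i], cost[i] = trial, tc
    best = min(range(np_), key=lambda i: cost[i])
    return pop[best], cost[best]
```

In the paper's setting each `loss` evaluation is an expensive numerical simulation, which is why the number of evaluations an algorithm needs to reach a good design is the decisive performance measure.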

## Erratum to: Improved initial vertex ordering for exact maximum clique search

### Applied Intelligence (2017-01-01) 46: 240 , January 01, 2017

## Creating diversity in ensembles using synthetic neighborhoods of training samples

### Applied Intelligence (2017-09-01) 47: 570-583 , September 01, 2017

Diversity among base classifiers is known to be a key driver for the construction of an effective ensemble classifier. Several methods have been proposed to construct diverse base classifiers using artificially generated training samples. However, in these methods, diversity is often obtained at the expense of the accuracy of the base classifiers. Inspired by the localized generalization error model, a new sample generation method is proposed in this study. When preparing different training sets for the base classifiers, the proposed method generates samples located within limited neighborhoods of the corresponding training samples. The generated samples differ from the original training samples yet expand different parts of the original training data. Learning these datasets yields a set of base classifiers that are accurate in different regions of the input space while maintaining appropriate diversity. Experiments performed on 26 benchmark datasets showed that: (1) our proposed method significantly outperformed some state-of-the-art ensemble methods in terms of classification accuracy; (2) our proposed method was significantly more efficient than other sample-generation-based ensemble methods.
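The exact neighborhood construction comes from the localized generalization error model and is not given in the abstract; the sketch below shows only the simplified core idea — each base learner trains on samples perturbed uniformly within a small `q`-neighborhood of the originals, with labels preserved. The function names and the uniform-noise choice are illustrative assumptions.

```python
import random

def synthetic_neighborhood_set(X, y, q=0.1, seed=0):
    """Sketch: build one base learner's training set by replacing each
    sample with a point drawn uniformly from its q-neighborhood,
    keeping the original label. Different seeds yield different but
    locally faithful training sets, so base classifiers trained on
    them disagree mildly while staying accurate near the original data.
    """
    rng = random.Random(seed)
    Xs = [[xj + rng.uniform(-q, q) for xj in x] for x in X]
    return Xs, list(y)

def make_ensemble_sets(X, y, n_learners=5, q=0.1):
    """One perturbed training set per base classifier."""
    return [synthetic_neighborhood_set(X, y, q, seed=s)
            for s in range(n_learners)]
```

Because every synthetic point stays within `q` of a real sample, each training set remains representative of the original distribution (preserving base-classifier accuracy) while the sets differ from one another (supplying the diversity).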

## Outlier-eliminated k-means clustering algorithm based on differential privacy preservation

### Applied Intelligence (2016-12-01) 45: 1179-1191 , December 01, 2016

Individual privacy may be compromised during the process of mining for valuable information, and the potential for data mining is hindered by the need to preserve privacy. It is well known that *k*-means clustering algorithms based on differential privacy require preserving privacy while maintaining the availability of clustering. However, it is difficult to balance both aspects in traditional algorithms. In this paper, an outlier-eliminated differential privacy (OEDP) *k*-means algorithm is proposed that both preserves privacy and improves clustering efficiency. The proposed approach selects the initial centre points in accordance with the distribution density of data points, and adds Laplacian noise to the original data for privacy preservation. Both a theoretical analysis and comparative experiments were conducted. The theoretical analysis shows that the proposed algorithm satisfies *ε*-differential privacy. Furthermore, the experimental results show that, compared to other methods, the proposed algorithm effectively preserves data privacy and improves the clustering results in terms of accuracy, stability, and availability.
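The core privacy step — adding Laplace noise with a scale calibrated to sensitivity/*ε* — can be sketched generically. This is the standard Laplace mechanism, not the paper's full OEDP algorithm, which additionally selects initial centres by data-point density and eliminates outliers; the sensitivity value in the usage below is a placeholder assumption.

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw one Laplace(0, scale) sample via inverse-transform sampling."""
    u = rng.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def perturb_dataset(X, epsilon, sensitivity, seed=0):
    """Laplace mechanism sketch: add Laplace(sensitivity / epsilon)
    noise to every coordinate of every data point. Smaller epsilon
    (stronger privacy) means larger noise scale.
    """
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [[xj + laplace_noise(scale, rng) for xj in x] for x in X]
```

The tension the abstract describes is visible in `scale = sensitivity / epsilon`: tightening the privacy budget inflates the noise and degrades clustering quality, which is why OEDP's density-based centre selection and outlier elimination matter.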

## Combining rational and biological factors in virtual agent decision making

### Applied Intelligence (2011-02-01) 34: 87-101 , February 01, 2011

To enhance the believability of virtual agents, this paper presents an agent-based modelling approach for decision making, which integrates rational reasoning based on means-end analysis with personal psychological and biological aspects. The agent model developed is a combination of a BDI model and a utility-based decision model in the context of specific desires and beliefs. The approach is illustrated by addressing the behaviour of violent criminals, thereby creating a model for virtual criminals. The model has been tested in a number of simulation experiments in the context of a street robbery scenario. In addition, a user study has been performed, which confirms that the model enhances the believability of virtual agents.

## Storing fuzzy description logic ontology knowledge bases in fuzzy relational databases

### Applied Intelligence (2017-06-27): 1-23 , June 27, 2017

In the context of the Semantic Web, fuzzy extensions to OWL (the W3C standard ontology language) and Description Logics (DLs, the logical foundation of OWL) have been extensively investigated, and many real fuzzy DL ontology knowledge bases exist. Therefore, how to store fuzzy DL ontology knowledge bases has become an important issue. In this paper, we propose an approach, and implement a tool, for storing fuzzy DL ontology knowledge bases in fuzzy relational databases. Our chosen formalism is a fuzzy extension of the very expressive DL *SHOIN(D)*, the main logical foundation of the standard ontology language OWL, so that our storage approach can store not only fuzzy DL knowledge bases but also fuzzy ontology knowledge bases. *Firstly*, we give a formal definition of fuzzy DL knowledge bases. In the definition, we consider the constructors of both the fuzzy *SHOIN(D)* DL and fuzzy OWL ontologies, and add some common fuzzy datatypes (e.g., trapezoidal values, interval values, approximate values, and labels) to the knowledge bases. *On this basis*, we propose an approach for storing fuzzy DL knowledge bases in fuzzy relational databases and provide an example to illustrate the approach. The correctness and quality of the storage approach are proved and analyzed. *Furthermore*, following the proposed approach, we implement a prototype tool which can automatically store fuzzy DL knowledge bases. *Finally*, we discuss the query problem and compare our approach with existing work.
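The paper's storage schema covers the whole fuzzy *SHOIN(D)* knowledge base; as a toy illustration of the relational idea, the sketch below stores only fuzzy concept assertions ⟨individual, concept, degree⟩ in a single hypothetical SQLite table and runs a threshold query over the membership degrees. Table and column names are assumptions, not the paper's schema.

```python
import sqlite3

# Hypothetical minimal schema: one table of fuzzy concept assertions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE concept_assertion (
        individual TEXT NOT NULL,
        concept    TEXT NOT NULL,
        degree     REAL NOT NULL CHECK (degree BETWEEN 0 AND 1)
    )
""")
conn.executemany(
    "INSERT INTO concept_assertion VALUES (?, ?, ?)",
    [("tom", "YoungPerson", 0.8),
     ("tom", "TallPerson", 0.6),
     ("mary", "YoungPerson", 0.9)],
)
# Fuzzy instance retrieval: who belongs to YoungPerson to degree >= 0.85?
rows = conn.execute(
    "SELECT individual, degree FROM concept_assertion "
    "WHERE concept = ? AND degree >= ?",
    ("YoungPerson", 0.85),
).fetchall()
```

Storing the membership degree as an ordinary numeric column is what makes threshold-style fuzzy queries expressible in plain SQL; the paper's approach extends this idea to the full set of fuzzy DL axioms and datatypes.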

## Application of intelligence-based predictive scheme to load-frequency control in a two-area interconnected power system

### Applied Intelligence (2011-12-01) 35: 457-468 , December 01, 2011

This paper describes an application of an intelligence-based predictive scheme to load-frequency control (LFC) in a two-area interconnected power system. First, a dynamic model of the system is developed; subsequently, an efficient control scheme is organized around a Takagi-Sugeno-Kang (TSK) fuzzy scheme and a linear generalized predictive control (LGPC) scheme. The proposed control scheme efficiently handles frequency deviation against load electrical power variation at each instant of time. Finally, to validate the effectiveness of the proposed control scheme, the outcomes are simulated and compared with those obtained using a nonlinear GPC (NLGPC) benchmark implemented using the Wiener model of this power system. The validity of the proposed control scheme is verified in comparison with the previous one.