## Evaluating Template Uniqueness in ECG Biometrics

### Informatics in Control, Automation and Robotics (2016-01-01) 370: 111-123

Carreiras, C.; Lourenço, A.; Ferreira, R.; Silva, H.; Fred, A.

Research over the past decade has demonstrated the capability of the electrocardiographic (ECG) signal to be used as a biometric trait, through which the identity of an individual can be recognized. Given its universality, intrinsic aliveness detection, continuous availability, and inherent hidden nature, the ECG is an interesting biometric modality enabling the development of novel applications, where non-intrusive and continuous authentication are critical factors. Examples include personal computers, the gaming industry, and the auto industry, especially for car sharing programs and fleet management solutions. Nonetheless, from a theoretical point of view, there are still some challenges to overcome in bringing ECG biometrics to mass markets. In particular, the issues of uniqueness (related to inter-subject variability) and permanence (related to intra-subject variability) are still largely unanswered. This work focuses on the uniqueness issue, evaluating the performance of our ECG biometric system over a database encompassing 618 subjects. Additionally, we performed tests with subsets of this population. The results cement the ECG as a viable trait to be used for identity recognition, having obtained an Equal Error Rate of 9.01 % and an Error of Identification of 15.64 % for the entire test population.
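
The Equal Error Rate quoted above is the operating point where the false accept rate equals the false reject rate. A minimal sketch of estimating it from matcher scores (the score lists are made up for illustration; in the paper's setting they would come from comparing ECG templates across the 618-subject database):

```python
# Sketch: estimate the Equal Error Rate (EER) from genuine and impostor
# match scores by scanning thresholds for the point where FAR ~= FRR.
def eer(genuine, impostor):
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuines rejected
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

genuine = [0.9, 0.8, 0.75, 0.7, 0.6]   # higher score = stronger match
impostor = [0.5, 0.4, 0.35, 0.3, 0.2]
print(eer(genuine, impostor))  # → 0.0 (these toy scores are perfectly separable)
```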

## Packet filter optimization techniques for high-speed network monitoring

### Annales Des Télécommunications (2007-03-01) 62: 387-407

Effective network monitoring is vital for a growing number of control and management applications typically found in present-day networks. The ever-increasing link speeds and the complexity of monitoring applications’ needs have exposed severe limitations of existing monitoring techniques. A majority of the current monitoring tasks require only a small subset of all observed packets, which share some common properties such as identical header fields or similar patterns in their data. In order to capture only these useful packets, a large set of expressions needs to be evaluated. This evaluation should be done as efficiently as possible when monitoring multi-gigabit networks. To speed up this packet classification process, this article presents different packet filter optimization techniques. Complementary to existing approaches, we propose an adaptive optimization algorithm which dynamically reconfigures the filter expressions based on the currently observed traffic pattern. The performance of the algorithms is validated both analytically and by means of the implementation in a network monitoring framework. The various characteristics of the algorithms are investigated, including their performance in an operational network intrusion detection system.
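
The adaptive idea can be sketched as follows: keep a hit counter per filter expression and periodically move the expressions that match the observed traffic most often to the front of the evaluation order, so the common case exits early. The predicates and packet representation below are simplified stand-ins for the paper's filter expressions:

```python
# Sketch of traffic-adaptive filter reordering (illustrative, not the
# authors' implementation). Predicates are callables over a packet dict.
class AdaptiveFilter:
    def __init__(self, predicates, reorder_every=1000):
        self.preds = [[p, 0] for p in predicates]  # [predicate, hit count]
        self.seen = 0
        self.reorder_every = reorder_every

    def match(self, packet):
        self.seen += 1
        if self.seen % self.reorder_every == 0:
            self.preds.sort(key=lambda e: -e[1])   # hottest predicates first
        for entry in self.preds:
            if entry[0](packet):
                entry[1] += 1
                return True
        return False

f = AdaptiveFilter([
    lambda p: p.get("proto") == "udp",   # hypothetical filter expressions
    lambda p: p.get("dport") == 80,
])
print(f.match({"proto": "tcp", "dport": 80}))  # → True (second predicate hits)
```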

## Hamming clustering techniques for the identification of prognostic indices in patients with advanced head and neck cancer treated with radiation therapy

### Medical and Biological Engineering and Computing (2000-09-01) 38: 483-486

The aim of the study is to demonstrate the usefulness of a new, non-linear classification method, called Hamming clustering (HC), in selecting prognostic variables affecting overall survival in patients with head and neck cancer. In particular, the aim is to identify whether tumour proliferation parameters can be predictive factors of response in a set of 115 patients who received either alternating chemo-radiotherapy or accelerated or conventional radiotherapy. HC is able to generate a set of understandable rules underlying the study objective; it can also select a subset of input variables that represent good prognostic factors. HC has been compared with other standard classifiers, providing better results in terms of classification accuracy. In particular, HC obtains the best accuracy of 74.8% (sensitivity of 51.1% and specificity of 91.2%) for survival. The rules found show that, besides the classical, well-known variables concerning the tumour dimension and the involved lymph nodes, some biological parameters, such as DNA ploidy, are also useful as predictive factors.
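
HC builds understandable rules from binary patterns; the merging flavor can be sketched by collapsing patterns that are close in Hamming distance into a single rule with don't-care bits. This is an illustrative simplification, not the published HC algorithm:

```python
# Sketch: merge binary patterns close in Hamming distance into rules,
# replacing disagreeing bits with don't-cares ('*').
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def merge(a, b):
    return "".join(x if x == y else "*" for x, y in zip(a, b))

def cluster(patterns, max_dist=1):
    rules = list(patterns)
    merged = True
    while merged:
        merged = False
        for i in range(len(rules)):
            for j in range(i + 1, len(rules)):
                if hamming(rules[i], rules[j]) <= max_dist:
                    rules[i] = merge(rules[i], rules[j])
                    del rules[j]
                    merged = True
                    break
            if merged:
                break
    return rules

# "110*" reads as the rule: bit0=1 and bit1=1 and bit2=0, bit3 irrelevant.
print(cluster(["1100", "1101", "0011"]))  # → ['110*', '0011']
```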

## A Taxonomy of 3D Occluded Objects Recognition Techniques

### 3D Research (2016-02-03) 7: 1-14

The overall performance of object recognition techniques under different conditions (e.g., occlusion, viewpoint, and illumination) has improved significantly in recent years. New applications and hardware have shifted towards digital photography and digital media, and growing Internet usage requires object recognition for certain applications, particularly for occluded objects. However, occlusion remains an unhandled issue that entangles the relations between feature points extracted from an image, and research is ongoing to develop efficient techniques and easy-to-use algorithms that help users source images despite occlusion. The aim of this research is to review algorithms for recognizing occluded objects and to assess their pros and cons: in particular, how features extracted from an occluded object can distinguish it from other co-existing objects, and which new techniques can differentiate the occluded fragments and sections inside an image.

## Clustering and Principal Feature Selection Impact for Internet Traffic Classification Using K-NN

### Proceedings of Second International Conference on Electrical Systems, Technology and Information 2015 (ICESTI 2015) (2016-01-01) 365: 75-81

K-NN is a classification algorithm that is suitable for large amounts of data and achieves high accuracy for internet traffic classification. Unfortunately, K-NN has a disadvantage in computation time because it calculates the distance to every point in the dataset. This research provides an alternative solution to reduce K-NN computation time: performing a clustering step, which does not require high computation time, before the classification step. The Fuzzy C-Means algorithm is used to cluster the input dataset. Fuzzy C-Means has its own disadvantage: its results are often inconsistent across runs even when the input data are the same, because the initial dataset it starts from is not optimal. To optimize the initial dataset, this research applies feature selection; after the main features of the dataset are selected, the output of Fuzzy C-Means becomes consistent. Feature selection is thus expected to provide an initial dataset that is optimal for the Fuzzy C-Means algorithm. The feature selection algorithm used in this study is Principal Component Analysis (PCA), which removes non-significant attributes to create an optimal dataset and improves the performance of the clustering and classification algorithms. The results of this research show that clustering and principal feature selection have a significant impact on accuracy and computation time for internet traffic classification. The combination of these three methods was successfully modeled to produce a classification method for internet bandwidth usage data.
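
A rough sketch of the pipeline on toy data, assuming NumPy is available and substituting a plain k-means assignment step for Fuzzy C-Means (the fuzzy memberships are omitted for brevity): PCA compacts the features, clustering partitions the data, and k-NN then computes distances only within the nearest cluster instead of over the whole dataset:

```python
# Sketch: PCA -> clustering -> cluster-restricted k-NN (toy data).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X[:100, 0] += 4.0                      # two separable groups on feature 0
y = np.array([0] * 100 + [1] * 100)

# PCA: project onto the top-2 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# k-means-style assignment with two seeds (stand-in for Fuzzy C-Means).
centers = Z[[0, 100]]
for _ in range(10):
    assign = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    centers = np.stack([Z[assign == c].mean(axis=0) for c in (0, 1)])

def knn_predict(q, k=3):
    c = np.argmin(((q - centers) ** 2).sum(-1))    # nearest cluster only
    idx = np.where(assign == c)[0]
    d = ((Z[idx] - q) ** 2).sum(-1)
    votes = y[idx[np.argsort(d)[:k]]]              # vote among k nearest
    return np.bincount(votes).argmax()

q = (X[0] - X.mean(axis=0)) @ Vt[:2].T             # query from group 0
print(knn_predict(q))
```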

## A recursive kinematic random forest and alpha beta filter classifier for 2D radar tracks

### EURASIP Journal on Advances in Signal Processing (2016-07-28) 2016: 1-12

In this work, we show that by using a recursive random forest together with an alpha-beta filter classifier, it is possible to classify radar tracks from the tracks' kinematic data. The kinematic data come from a 2D scanning radar without Doppler or height information. We use a random forest because this classifier implicitly handles the uncertainty in the position measurements. As stationary targets can appear to have a high speed because of measurement uncertainty, we use an alpha-beta filter classifier to separate stationary targets from moving targets. We show an overall classification rate of 82.6 % on simulated data and 79.7 % on real-world data. In addition to the confusion matrix, we also show recordings of real-world data.
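
The alpha-beta filter itself is simple enough to sketch: it smooths noisy position measurements while maintaining a velocity estimate, which is what lets a classifier distinguish apparent speed (measurement noise) from real motion. The gains and data below are illustrative, not the authors' parameters:

```python
# Minimal alpha-beta filter: predict, compute the residual, then correct
# position and velocity with fixed gains alpha and beta.
def alpha_beta(measurements, dt=1.0, alpha=0.5, beta=0.1):
    x, v = measurements[0], 0.0
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt          # predict next position
        r = z - x_pred               # innovation (residual)
        x = x_pred + alpha * r       # correct position
        v = v + (beta / dt) * r      # correct velocity
        estimates.append((x, v))
    return estimates

# A stationary target with noisy position measurements:
est = alpha_beta([10.0, 10.4, 9.7, 10.2, 9.9, 10.1])
print(round(est[-1][1], 3))  # → 0.003 (velocity stays near zero)
```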

## Classification of GPS Satellites Using Improved Back Propagation Training Algorithms

### Wireless Personal Communications (2013-07-01) 71: 789-803

The geometric dilution of precision (GDOP) is an indicator of the quality of GPS positioning and has often been used for choosing a suitable satellite subset from the at least 24 existing orbiting satellites. Calculating GPS GDOP is a time-consuming task, done by solving measurement equations with complicated matrix transformations and inversions. To reduce this computational burden, this research uses an artificial neural network (ANN). Although basic back propagation (BP) is the most popular ANN algorithm and can be used in estimators, detectors, or classifiers, it is too slow for practical problems and its performance is unsatisfactory in many cases. To overcome this problem, six algorithms, namely BP with adaptive learning rate and momentum, the Fletcher–Reeves conjugate gradient algorithm (CGA), Polak–Ribière CGA, Powell–Beale CGA, scaled CGA, and resilient BP, are proposed to reduce the convergence time of basic BP. The simulation results show that resilient BP, compared with the other methods, offers greater accuracy and shorter calculation time: using the GPS GDOP measurement data, it improves the classification accuracy from 93.16 % to 98.02 %.
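
The quantity being classified has a compact closed form: GDOP = sqrt(trace((HᵀH)⁻¹)), where each row of the geometry matrix H holds the unit line-of-sight vector from receiver to satellite plus a clock-bias column of ones. A direct computation with NumPy (the satellite directions below are made up for illustration):

```python
# Sketch: compute GDOP from a hypothetical satellite geometry.
import numpy as np

los = np.array([        # unit line-of-sight vectors, receiver -> satellite
    [ 0.0,  0.0, 1.0],
    [ 0.8,  0.0, 0.6],
    [-0.8,  0.0, 0.6],
    [ 0.0,  0.8, 0.6],
    [ 0.0, -0.8, 0.6],
])
H = np.hstack([los, np.ones((len(los), 1))])   # geometry matrix with clock column
gdop = np.sqrt(np.trace(np.linalg.inv(H.T @ H)))
print(round(gdop, 2))  # → 3.63
```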

## Robust classwise and projective low-rank representation for image classification

### Signal, Image and Video Processing (2017-06-24): 1-9

Several variations of the low-rank representation have recently been suggested for diverse applications. They perform well on image alignment but poorly on classification; that is, they are intractable when a new image with an unknown label arrives to be classified. Hence, inspired by recent research on fast projection, this paper proposes a supervised approach called the robust *classwise* and *projective low-rank representation* (CPLRR), which is the first attempt to align images classwise and learn a projective nonlinear function simultaneously. It separates out the low-rank components explicitly with parametric transformation corrections and efficiently projects the original images onto the low-rank representations of the corresponding categories. With the advantage of fast projection, CPLRR is appropriate for image classification. Extensive experiments conducted on the MNIST, Extended Yale B, and CMU PIE datasets validate the effect of the robust low-rank alignment and the rapid projection against different domain deformations, noises, and illumination conditions.
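
The workhorse underneath any low-rank representation is the truncated SVD, which yields the best rank-r approximation in the least-squares sense. A toy NumPy sketch of that core step (CPLRR's classwise alignment and learned projection are not attempted here):

```python
# Sketch: recover a low-rank component from slightly perturbed data
# via truncated SVD.
import numpy as np

A = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 1.0])   # rank-1 "clean" data
noisy = A + 0.01 * np.eye(3)                      # small perturbation
U, s, Vt = np.linalg.svd(noisy)
r = 1
low_rank = U[:, :r] * s[:r] @ Vt[:r]              # best rank-1 approximation
print(round(float(np.linalg.norm(noisy - low_rank)), 3))
```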

## GAssist vs. BioHEL: critical assessment of two paradigms of genetics-based machine learning

### Soft Computing (2013-06-01) 17: 953-981

This paper reports an exhaustive analysis performed over two specific Genetics-based Machine Learning systems: BioHEL and GAssist. These two systems share many mechanisms and operators, but at the same time, they apply two different learning paradigms (the Iterative Rule Learning approach and the Pittsburgh approach, respectively). The aim of this paper is to: (a) propose standard configurations for handling small and large datasets, (b) compare the two systems in terms of learning capabilities, complexity of the obtained solutions and learning time, (c) determine the areas of the problem space where each one of these two systems performs better, and (d) compare them with other well-known machine learning algorithms. The results show that it is possible to find standard configurations for both systems. With these configurations the systems perform up to the standards of other state-of-the-art machine learning algorithms such as Support Vector Machines. Moreover, we identify the problem domains where each one of these systems has advantages and disadvantages and propose ways to improve the systems based on this analysis.

## ABC-Miner+: constructing Markov blanket classifiers with ant colony algorithms

### Memetic Computing (2014-09-01) 6: 183-206

ABC-Miner is a Bayesian classification algorithm based on the Ant Colony Optimization (ACO) meta-heuristic. The algorithm learns Bayesian network Augmented Naïve-Bayes (BAN) classifiers, where the class node is the parent of all the nodes representing the input variables. However, this assumes the existence of a dependency relationship between the class variable and *all* the input variables, and this relationship is always of the "causal" (rather than "effect") type, which restricts the flexibility of the algorithm. In this paper, we extend the ABC-Miner algorithm to learn the *Markov blanket* of the class variable. The resulting model has a more flexible Bayesian network classifier structure, in which a (direct) dependency relationship between the class variable and each input variable is not required, and the dependency between the class and the input variables varies from "causal" to "effect" relationships. In this context, we propose two algorithms: $${\hbox {ABC-Miner}+_1}$$, in which the dependency relationships between the class and the input variables are defined in a separate phase before the dependency relationships among the input variables are defined, and $${\hbox {ABC-Miner}+_2}$$, in which the two types of dependency relationships in the Markov blanket classifier are discovered in a single integrated process. Empirical evaluations on 33 UCI benchmark datasets show that our extended algorithms outperform the original version in terms of predictive accuracy, model size, and computational time. Moreover, they show very competitive performance against other well-known classification algorithms in the literature.
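
The Markov blanket that the extended algorithms search for consists of a node's parents, its children, and its children's other parents (spouses). A small check on a hypothetical DAG, with edges pointing cause → effect:

```python
# Sketch: extract the Markov blanket of a node from a DAG's edge list.
def markov_blanket(node, edges):
    parents = {a for a, b in edges if b == node}
    children = {b for a, b in edges if a == node}
    spouses = {a for a, b in edges if b in children and a != node}
    return parents | children | spouses

edges = [("A", "C"), ("C", "D"), ("B", "D"), ("C", "E")]
# For C: parent {A}, children {D, E}, spouse {B} (B is D's other parent).
print(sorted(markov_blanket("C", edges)))  # → ['A', 'B', 'D', 'E']
```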