## SEARCH

#### Country

##### (see all 103)

- United States 4016
- Germany 2206
- United Kingdom 1998
- China 1633

#### Institution

##### (see all 17086)

- University of California 180
- Tsinghua University 163
- Carnegie Mellon University 146
- Chinese Academy of Sciences 140
- Napier University 116

#### Author

##### (see all 39231)

- Buchanan, W. J. 102
- Maschke, Thomas 77
- Nyman, Mattias 72
- Beynon-Davies, Paul 56
- Harrop, Rob 54

#### Subject

##### (see all 199)

- Computer Science (selected) 30458
- Artificial Intelligence (incl. Robotics) 12750
- Computer Communication Networks 7337
- Information Storage and Retrieval 6551
- Information Systems Applications (incl. Internet) 5934

## CURRENTLY DISPLAYING:

Showing 1 to 10 of 30458 matching Articles

## Parallelization of Reaction Dynamics Codes Using P-GRADE: A Case Study

### Computational Science and Its Applications – ICCSA 2004, 3044: 290-299, January 01, 2004

P-GRADE, a graphical programming environment, was used to parallelize atomic-level reaction dynamics codes. In the reported case study, a classical trajectory code written in FORTRAN was parallelized at a coarse-grain level. P-GRADE allowed us to use automatic parallelization schemes, of which the task farm was selected. The FORTRAN code was separated into an input/output section and a working section; the former, enhanced by a data-transfer section, operates on the master, the latter on the slaves. Small sections for data transfer were written in C. The P-GRADE environment offers a user-friendly way of monitoring the efficiency of the parallelization. On a 20-processor NPACI Rocks cluster, the speed-up is 99 percent proportional to the number of processors.
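The master/slave split described above can be sketched in Python's `multiprocessing` (a toy stand-in for the message-passing code P-GRADE generates; `run_trajectory` is a hypothetical placeholder for the FORTRAN working section):

```python
from multiprocessing import Pool

def run_trajectory(initial_conditions):
    """Worker (slave) side: integrate one classical trajectory.

    A toy stand-in: here we just sum the inputs, in place of the
    FORTRAN working section run on each slave."""
    return sum(initial_conditions)

def task_farm(all_conditions, n_workers=4):
    """Master side: read input, farm independent trajectories out to
    the workers, and collect the results in order.

    Mirrors the split into an input/output (master) section and a
    working (slave) section described in the abstract."""
    with Pool(processes=n_workers) as pool:
        # Each element of all_conditions is one independent task;
        # the pool plays the role of the task farm.
        return pool.map(run_trajectory, all_conditions)

if __name__ == "__main__":
    batches = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
    print(task_farm(batches, n_workers=2))  # [3.0, 7.0, 11.0]
```

Because the trajectories are independent, the farm scales almost linearly until the master's I/O becomes the bottleneck, which is consistent with the near-proportional speed-up reported above.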

## Systematic versus Non-systematic Methods for Solving Incremental Satisfiability

### Innovations in Applied Artificial Intelligence, 3029: 543-551, January 01, 2004

The propositional satisfiability (SAT) problem is fundamental to the theory of NP-completeness: using the concept of “polynomial-time reducibility”, all NP-complete problems can be polynomially reduced to SAT. Thus, any new technique for satisfiability problems can lead to general approaches for thousands of hard combinatorial problems. In this paper, we introduce the incremental propositional satisfiability problem, which consists of maintaining the satisfiability of a propositional formula whenever a conjunction of new clauses is added. More precisely, the goal is to check whether a solution to a SAT problem continues to be a solution whenever a new set of clauses is added and, if not, whether the solution can be modified efficiently to satisfy both the old formula and the new clauses. We study the applicability of systematic and approximation methods for solving incremental SAT problems. The systematic method is based on the branch-and-bound technique, while the approximation methods rely on stochastic local search and genetic algorithms.
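The repair step described above — checking whether an old solution still satisfies the enlarged formula, and patching it locally if not — can be sketched as follows (a minimal WalkSAT-style illustration of the stochastic local search idea, not the authors' algorithms):

```python
import random

def satisfied(clause, assignment):
    # A clause is a list of ints: literal v means variable v is true,
    # literal -v means variable v is false.
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

def incremental_repair(formula, assignment, new_clauses, max_flips=1000):
    """Try to extend a known solution of `formula` to
    `formula + new_clauses` by flipping variables of unsatisfied
    clauses. Returns a repaired assignment, or None to signal that a
    full solver should be invoked instead."""
    clauses = formula + new_clauses
    assignment = dict(assignment)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, assignment)]
        if not unsat:
            return assignment          # old solution successfully repaired
        # Flip one variable from a random unsatisfied clause.
        lit = random.choice(random.choice(unsat))
        assignment[abs(lit)] = not assignment[abs(lit)]
    return None                        # give up; fall back to a full solver
```

For example, if `[[1, 2], [-1, 3]]` is satisfied by `{1: True, 2: False, 3: True}` and the unit clause `[-1]` arrives, a couple of flips (setting variable 1 false, then variable 2 true) restore satisfiability without re-solving from scratch.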

## PRED: Prediction-Enabled RED

### Computational Science - ICCS 2004, 3038: 1193-1200, January 01, 2004

This paper proposes a router congestion control mechanism called PRED (Prediction-enabled RED), a more adaptive and proactive version of RED (Random Early Detection). In essence, PRED predicts its queue length for early detection of possible congestion in the near future and adapts to the predicted changes in traffic patterns. Typically, PRED does this by first predicting the average queue length and then using the prediction to adjust the three classic RED parameters *max*_{th}, *min*_{th}, and *max*_{p}. After the adjustment, incoming packets are dropped with the new probability defined by the updated parameters. Due to its adaptability and proactive reaction to network traffic changes, PRED can be considered a novel solution for dynamically configuring RED. Extensive simulation results from the NS-2 simulator are presented to verify the performance and characteristics of PRED.
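The adjustment loop can be sketched as follows; note that the linear-trend predictor and the 10% adjustment factors are assumptions for illustration, not the parameters of the paper:

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Classic RED: drop probability as a function of the average
    queue length and the three parameters min_th, max_th, max_p."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def pred_adjust(history, min_th, max_th, max_p):
    """Toy PRED step: predict the next average queue length by linear
    extrapolation of the last two samples, then shift the RED
    parameters toward the predicted congestion level."""
    predicted = history[-1] + (history[-1] - history[-2])  # linear trend
    if predicted > max_th:          # congestion expected: react earlier
        min_th, max_th = 0.9 * min_th, 0.9 * max_th
        max_p = min(1.0, 1.1 * max_p)
    elif predicted < min_th:        # queue draining: relax the parameters
        min_th, max_th = 1.1 * min_th, 1.1 * max_th
        max_p = 0.9 * max_p
    return predicted, min_th, max_th, max_p
```

The key point is the ordering: parameters are updated from the *predicted* average queue length, and only then are arriving packets dropped with the probability that the updated parameters define.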

## Sympatric Speciation Through Assortative Mating in a Long-Range Cellular Automaton

### Cellular Automata, 3305: 405-414, January 01, 2004

A probabilistic cellular automaton is developed to study the combined effect of competition and assortativity on the speciation process in the absence of geographical barriers. The model is studied in the case of long-range coupling. A simulated annealing technique was used to find the stationary distribution in reasonably short simulation times. Two components of fitness are considered: a static one that describes adaptation to environmental factors not related to the population itself, and a dynamic one that accounts for interactions between organisms, such as competition. The simulations show that for both flat and steep static fitness landscapes, competition and assortativity exert a synergistic effect on speciation. We also show that competition acts as a stabilizing force, preventing random sampling effects from driving one of the newborn populations to extinction. Finally, the variance of the frequency distribution is plotted as a function of competition and assortativity, yielding a surface with a sharp transition from a very low (single-species) level to a very high (multiple-species) level, thus resembling a phase-transition diagram. Examination of the contour plots of the phase diagram graphically highlights the synergistic effect.

## Block-Wise Markov Models

### Stochastic Image Processing: 125-148, January 01, 2004

The multiscale Markov models introduced in Chapter 4 can be used to extract large-scale image features by parameterizing inter-scale and intra-scale interactions in an image. Since there are multiple scales with different parameter values at each scale, parameter estimation in the multiscale approach becomes a major computational burden. One way to alleviate this complexity is to adopt a block-wise image modeling paradigm: dividing the image space into non-overlapping blocks, a random variable is assigned to each image block to represent its class label. Then a representative feature is extracted for each image block and treated as the observed image data. The collection of all block features constitutes the realization of the random field *Y*. Likewise, the class label assigned to each image block is a realization of the unobservable random field *X*. Since the representative feature values of image blocks normally have spatial continuity, the random field of block class labels can be modeled as an MRF.
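The block-feature extraction that produces the observed field *Y* can be sketched directly; using the block mean as the representative feature is an assumed (though common) choice, not necessarily the one made in the chapter:

```python
def block_features(image, block):
    """Divide `image` (a list of equal-length rows of intensities) into
    non-overlapping block x block tiles, and return the mean intensity
    of each tile as its representative feature.

    The returned grid is one realization of the observed random field Y;
    each tile would also carry an unobserved class label (the field X)."""
    h = len(image) - len(image) % block        # trim to whole blocks
    w = len(image[0]) - len(image[0]) % block
    features = []
    for r in range(0, h, block):
        row = []
        for c in range(0, w, block):
            tile = [image[r + i][c + j]
                    for i in range(block) for j in range(block)]
            row.append(sum(tile) / len(tile))  # block mean as feature
        features.append(row)
    return features
```

A 4x4 image with 2x2 blocks thus collapses to a 2x2 feature grid, which is why estimation over the block field is so much cheaper than over the full multiscale pixel model.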

## A Runtime Scheduling Approach with Respect to Job Parallelism for Computational Grid

### Grid and Cooperative Computing - GCC 2004, 3251: 261-268, January 01, 2004

Research has demonstrated that runtime scheduling with respect to parallelism can provide low-overhead load balancing with global load information. In this paper, we present a new runtime scheduling approach for computational grids that takes job parallelism into account. The approach can adjust the number of parallel jobs automatically through the grain size and other significant factors we specify. Since all parallelized jobs are divided by a given grain size before being scheduled, the approach can analyze and manage the affiliation of parallel jobs automatically. The approach is lightweight enough that scheduling can be adjusted in real time with little delay. It provides high-quality load balancing for parallel jobs and is general for computational grids, making every grid resource work efficiently, serving not only as a good scheduling scheme but also as a scheme for scheduling and managing job parallelism.

## Integration, Diffusion, and Merging in Information Management Discipline

### SOFSEM 2004: Theory and Practice of Computer Science, 2932: 22-40, January 01, 2004

We observe that information is the life force through which we interact with our environment. The dynamic state of the world is maintained by information management. These observations motivated us to develop the concept of the *fully connected information space*, which we introduce in this paper. We discuss its structure and properties and present our research work and contributions toward its maintenance. We also speculate on the future of this information space and our mode of interaction with it.

## On a Special Class of Dempster-Shafer Theories

### Artificial Intelligence and Soft Computing - ICAISC 2004, 3070: 885-890, January 01, 2004

In this paper we draw the reader's attention to the impact of measuring separately the features (attributes) from which we want to make inferences. It turns out that the fact of separate measurement implies algorithmic simplifications for many forms of reasoning in Dempster-Shafer theory (DST). Basic theorems and algorithms exploiting this fact are given.
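For background, the core operation of DST is combining independent bodies of evidence with Dempster's rule; a minimal implementation of the standard rule (not the paper's simplified algorithms) looks like this:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions, each a
    dict mapping frozenset focal elements to masses summing to 1.

    Intersecting focal elements reinforce each other; mass on empty
    intersections is the conflict, renormalized away at the end."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}
```

The nested loop is the source of the cost the paper targets: combining mass functions is exponential in the frame size in general, which is why structural assumptions such as separately measured attributes can yield large algorithmic simplifications.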

## Framework for Simulating the Human Behavior for Intelligent Virtual Agents. Part I: Framework Architecture

### Computational Science - ICCS 2004, 3039: 229-236, January 01, 2004

This paper is the first in a series of two (both included in this volume) describing a new framework for simulating human behavior for intelligent virtual agents. This first paper focuses on the framework architecture and implementation issues. First, we describe some requirements for such a framework to simulate human behavior realistically. Then the framework architecture is discussed. Finally, some strategies concerning the implementation of our framework in single-CPU and distributed environments are presented.

## On the Degree of Independence of a Contingency Matrix

### Rough Sets and Current Trends in Computing, 3066: 219-228, January 01, 2004


A contingency table summarizes the conditional frequencies of two attributes and shows how these two attributes depend on each other. Thus, this table is a fundamental tool for pattern discovery with conditional probabilities, such as rule discovery. In this paper, a contingency table is interpreted from the viewpoint of statistical independence and granular computing. The first important observation is that a contingency table compares two attributes with respect to the number of equivalence classes. For example, an *n* × *n* table compares two attributes with the same granularity, while an *m* × *n* (*m* ≥ *n*) table compares two attributes with different granularities. The second important observation is that matrix algebra is key to the analysis of this table. In particular, the rank of the matrix plays a very important role in evaluating the degree of statistical independence. Relations between rank and the degree of dependence are also investigated.
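The link between rank and independence can be checked concretely: a contingency table of two statistically independent attributes has rank 1, because every row is proportional to the column-marginal distribution. A minimal sketch (plain Gaussian elimination, used here only to illustrate the observation):

```python
def matrix_rank(rows, eps=1e-9):
    """Rank of a matrix (list of equal-length rows) by Gaussian
    elimination. For a contingency table, rank 1 means the two
    attributes are statistically independent; higher rank indicates
    increasing degrees of dependence."""
    m = [list(map(float, r)) for r in rows]
    rank, n_rows, n_cols = 0, len(rows), len(rows[0])
    for col in range(n_cols):
        # Find a pivot row for this column among the unreduced rows.
        pivot = next((r for r in range(rank, n_rows)
                      if abs(m[r][col]) > eps), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        # Eliminate this column from every other row.
        for r in range(n_rows):
            if r != rank and abs(m[r][col]) > eps:
                f = m[r][col] / m[rank][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[rank])]
        rank += 1
        if rank == n_rows:
            break
    return rank
```

For instance, the 2 × 2 table [[10, 20], [20, 40]] has proportional rows and rank 1 (independent attributes), while the diagonal table [[10, 0], [0, 10]] has rank 2 (fully dependent attributes).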