## SEARCH

#### Institution

##### (see all 1502)

- Carnegie Mellon University [x] 3863
- University of Pittsburgh 72
- University of Maryland 48
- Microsoft Research 47
- University of California 40

#### Author

##### (see all 6494)

- Faloutsos, Christos 88
- Kanade, Takeo 84
- Koedinger, Kenneth R. 64
- Veloso, Manuela 59
- Aleven, Vincent 58

#### Publication

##### (see all 1243)

- Intelligent Tutoring Systems 124
- International Journal of Computer Vision 80
- Machine Learning 72
- Computer Aided Verification 69
- Artificial Intelligence in Education 58

#### Subject

##### (see all 245)

- Computer Science [x] 3863
- Artificial Intelligence (incl. Robotics) 1864
- Computer Communication Networks 754
- Software Engineering 696
- Information Systems Applications (incl. Internet) 674

## CURRENTLY DISPLAYING

Showing 1 to 10 of 3863 matching Articles

## Efficient Implementation of a Synchronous Parallel Push-Relabel Algorithm

### Algorithms - ESA 2015 (2015-01-01) 9294, January 01, 2015

Motivated by the observation that FIFO-based push-relabel algorithms are able to outperform highest label-based variants on modern, large maximum flow problem instances, we introduce an efficient implementation of the algorithm that uses coarse-grained parallelism to avoid the problems of existing parallel approaches. We demonstrate good relative and absolute speedups of our algorithm on a set of large graph instances taken from real-world applications. On a modern 40-core machine, our parallel implementation outperforms existing sequential implementations by up to a factor of 12 and other parallel implementations by factors of up to 3.
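The paper's coarse-grained parallel implementation is not reproduced in the abstract; as a point of reference, a minimal *sequential* FIFO push-relabel sketch on a dense adjacency-matrix graph might look like the following (all names and parameters here are illustrative assumptions, not the authors' code):

```python
from collections import deque

def max_flow(capacity, source, sink):
    """FIFO push-relabel: active (excess-carrying) vertices are
    discharged in first-in first-out order."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n
    excess = [0] * n
    height[source] = n

    # Saturate every edge leaving the source.
    active = deque()
    for v in range(n):
        if capacity[source][v] > 0:
            flow[source][v] = capacity[source][v]
            flow[v][source] = -capacity[source][v]
            excess[v] = capacity[source][v]
            if v != sink:
                active.append(v)

    def push(u, v):
        delta = min(excess[u], capacity[u][v] - flow[u][v])
        flow[u][v] += delta
        flow[v][u] -= delta
        excess[u] -= delta
        excess[v] += delta

    def relabel(u):
        # Lift u just above its lowest residual neighbour.
        height[u] = 1 + min(height[v] for v in range(n)
                            if capacity[u][v] - flow[u][v] > 0)

    while active:
        u = active.popleft()
        while excess[u] > 0:           # discharge u completely
            pushed = False
            for v in range(n):
                if capacity[u][v] - flow[u][v] > 0 and height[u] == height[v] + 1:
                    had_excess = excess[v] > 0
                    push(u, v)
                    if v not in (source, sink) and not had_excess:
                        active.append(v)  # v became active
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:
                relabel(u)
    return sum(flow[source][v] for v in range(n))
```

A FIFO implementation like this runs in O(V^3); the parallel variant in the paper partitions the discharge work across cores in synchronous rounds.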

## Analysis and Generation

### Computation of Language (1989-01-01): 99-118 , January 01, 1989

This chapter analyzes the functionality of LA-grammar in analysis and generation. Section 5.1 illustrates the process of incremental pragmatic interpretation during analysis, and explains why constituent-structure analysis seems to have such a strong intuitive basis. Section 5.2 discusses the relation between the syntactic generation of strings and the notion of pragmatico-semantic generation, defined as a mapping from utterance meanings to surfaces. Section 5.3 describes the left-associative approach to analysis and generation based on the Linear Path Hypothesis. Section 5.4 explains three basic problems of generation, and illustrates possible ways to solve the Extraction Problem and the Connection Problem. Section 5.5 addresses the Choice Problem of generation.

## Computing Policy Options

### Computational Analysis of Terrorist Groups: Lashkar-e-Taiba (2013-01-01): 149-156 , January 01, 2013

This chapter describes the methodology and the algorithm used to automatically compute policy options. It provides a mathematical definition of a policy against LeT and then proves the LeT Violence Non-Eliminability Theorem that shows there is no policy that will stop all of LeT’s terrorist actions. The reason for this is that attacks on holidays are carried out in situations that are inconsistent with situations when LeT carries out other types of attacks. The chapter presents an algorithm to compute all policies (in accordance with the mathematical definition of policy) that have good potential to significantly reduce all types of attacks carried out by LeT (except for attacks on holidays). Readers who do not wish to wade through the technical details can skip directly to Sect. 10.5 which summarizes the results of this chapter.

## Dynamic Discovery, Invocation and Composition of Semantic Web Services

### Methods and Applications of Artificial Intelligence (2004-01-01) 3025: 3-12 , January 01, 2004

While the Web has emerged as a World Wide repository of digitized information, by and large, this information is not available for automated inference. Two recent efforts, the *Semantic Web* [1] and *Web Services* hold great promise of making the Web a machine understandable infrastructure where software agents can perform distributed transactions. The Semantic Web transforms the Web into a repository of computer readable data, while Web services provide the tools for the automatic use of that data. To date there are very few points of contact between Web services and the Semantic Web: research on the Semantic Web focuses mostly on markup languages to allow annotation of Web pages and the inferential power needed to derive consequences, utilizing the Web as a formal knowledge base. Web services concentrate on proposals for interoperability standards and protocols to perform B2B transactions.

## Nash Equilibria for Weakest Target Security Games with Heterogeneous Agents

### Game Theory for Networks (2012-01-01) 75: 444-458 , January 01, 2012

Motivated attackers cannot always be blocked or deterred. In the physical-world security context, examples include suicide bombers and sexual predators. In computer networks, zero-day exploits unpredictably threaten the information economy and end users. In this paper, we study the conflicting incentives of individuals to act in the light of such threats.

More specifically, in the weakest target game an attacker will *always* be able to compromise the agent (or agents) with the lowest protection level, but will leave all others unscathed. We find that the game exhibits a number of complex phenomena: it does not admit pure Nash equilibria, and when players are heterogeneous it in some cases does not even admit mixed-strategy equilibria.

Most outcomes from the weakest-target game are far from ideal. In fact, payoffs for most players in any Nash equilibrium are far worse than in the game’s social optimum. However, under the rule of a social planner, average security investments are extremely low. The game thus leads to a conflict between pure economic interests, and common social norms that imply that higher levels of security are always desirable.
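The non-existence of pure equilibria is easy to observe numerically. The sketch below uses hypothetical parameters (two homogeneous agents, protection levels 0-2, unit protection cost 1, compromise loss 3, with ties meaning all lowest agents are compromised) and enumerates every pure-strategy profile; for this toy instance none survives the unilateral-deviation check:

```python
from itertools import product

def payoff(profile, i, unit_cost=1, loss=3):
    """Agent i pays its protection level; the agent(s) with the
    lowest level are always compromised and additionally pay `loss`."""
    u = -unit_cost * profile[i]
    if profile[i] == min(profile):
        u -= loss
    return u

def pure_nash(levels, n_agents=2):
    """Return all pure-strategy profiles where no agent can gain by
    unilaterally switching its protection level."""
    equilibria = []
    for prof in product(levels, repeat=n_agents):
        stable = all(
            payoff(prof, i) >= payoff(prof[:i] + (dev,) + prof[i + 1:], i)
            for i in range(n_agents) for dev in levels
        )
        if stable:
            equilibria.append(prof)
    return equilibria

print(pure_nash(levels=(0, 1, 2)))  # → [] : no pure Nash equilibrium
```

Best responses cycle here (against level 0 invest 1, against 1 invest 2, against 2 free-ride at 0), which is exactly the instability the abstract describes.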

## On the String Consensus Problem and the Manhattan Sequence Consensus Problem

### String Processing and Information Retrieval (2014-01-01) 8799: 244-255 , January 01, 2014

In the *Manhattan Sequence Consensus* problem (MSC problem) we are given *k* integer sequences, each of length ℓ, and we are to find an integer sequence *x* of length ℓ (called a consensus sequence) such that the maximum Manhattan distance of *x* from the input sequences is minimized. For binary sequences Manhattan distance coincides with Hamming distance, so in this case the string consensus problem (also called the string center or closest string problem) is a special case of MSC. Our main result is a practically efficient $\mathcal{O}(\ell)$-time algorithm solving MSC for *k* ≤ 5 sequences; its practicality has been verified experimentally. It improves upon the quadratic algorithm by Amir et al. (SPIRE 2012) for the string consensus problem for *k* = 5 binary strings. As in Amir's algorithm, we use a column-based framework, but we replace the implied general integer linear programming by easy special cases that follow from combinatorial properties of MSC for *k* ≤ 5. We also show that for a general parameter *k* any instance can be reduced in linear time to a kernel of size *k*!, so the problem is fixed-parameter tractable. Nevertheless, for *k* ≥ 4 this kernel is still too large for any naive solution to be feasible in practice.

## The CMUnited-99 Small-Size Robot Team

### RoboCup-99: Robot Soccer World Cup III (2000-01-01) 1856: 661-662 , January 01, 2000

One of the necessary steps in fielding a small-size RoboCup team is the actual construction of the robots. We have successfully built robots for RoboCup-97 and RoboCup-98, leading to two champion teams, namely CMUnited-97 [2] and CMUnited-98 [1].

## Dataset Issues in Object Recognition

### Toward Category-Level Object Recognition (2006-01-01) 4170: 29-48 , January 01, 2006

Appropriate datasets are required at all stages of object recognition research, including learning visual models of object and scene categories, detecting and localizing instances of these models in images, and evaluating the performance of recognition algorithms. Current datasets are lacking in several respects, and this paper discusses some of the lessons learned from existing efforts, as well as innovative ways to obtain very large and diverse annotated datasets. It also suggests a few criteria for gathering future datasets.

## An Optimization-Based Sampling Scheme for Phylogenetic Trees

### Research in Computational Molecular Biology (2011-01-01) 6577: 252-266 , January 01, 2011

Much modern work in phylogenetics depends on statistical sampling approaches to phylogeny construction to estimate probability distributions of possible trees for any given input data set. Our theoretical understanding of sampling approaches to phylogenetics remains far less developed than that for optimization approaches, however, particularly with regard to the number of sampling steps needed to produce accurate samples of tree partition functions. Despite the many advantages in principle of being able to sample trees from sophisticated probabilistic models, we have little theoretical basis for concluding that the prevailing sampling approaches do in fact yield accurate samples from those models within realistic numbers of steps. We propose a novel approach to phylogenetic sampling intended to be both efficient in practice and more amenable to theoretical analysis than the prevailing methods. The method depends on replacing the standard tree rearrangement moves with an alternative Markov model in which one solves a theoretically hard but practically tractable optimization problem on each step of sampling. The resulting method can be applied to a broad range of standard probability models, yielding practical algorithms for efficient sampling and rigorous proofs of accurate sampling for some important special cases. We demonstrate the efficiency and versatility of the method in an analysis of uncertainty in tree inference over varying input sizes. In addition to providing a new practical method for phylogenetic sampling, the technique is likely to prove applicable to many similar problems involving sampling over combinatorial objects weighted by a likelihood model.

## Locality preserving and global discriminant projection with prior information

### Machine Vision and Applications (2010-06-01) 21: 577-585 , June 01, 2010

Existing supervised and semi-supervised dimensionality reduction methods utilize training data in which only class labels are associated with the data samples for classification. In this paper, we present a new algorithm, locality preserving and global discriminant projection with prior information (LPGDP), for dimensionality reduction and classification that considers both the manifold structure and prior information, where the prior information includes not only the class label but also the misclassification of marginal samples. In the LPGDP algorithm, the overlap among the class-specific manifolds is discriminated by a global class graph, and a locality preserving criterion is employed to obtain the projections that best preserve the within-class local structures. The feasibility of the LPGDP algorithm has been evaluated in face recognition, object categorization and handwritten Chinese character recognition experiments. Experimental results show superior data modeling and classification performance compared with other techniques, such as linear discriminant analysis, locality preserving projection, discriminant locality preserving projection and marginal Fisher analysis.
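The LPGDP method itself is not specified in the abstract. To make the comparison concrete, here is a minimal numpy sketch of one of the listed baselines, plain locality preserving projection (LPP); the 0/1 k-nearest-neighbour affinity graph and the small regularizer are my assumptions, not details from the paper:

```python
import numpy as np

def lpp(X, n_components=2, n_neighbors=3, reg=1e-6):
    """Locality Preserving Projection (a baseline the paper compares
    against, not LPGDP itself).  X: (n_samples, n_features).
    Returns a projection matrix A of shape (n_features, n_components);
    embed points as X @ A."""
    n = X.shape[0]
    # Symmetric k-nearest-neighbour affinity graph with 0/1 weights.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:n_neighbors + 1]:  # skip self
            W[i, j] = W[j, i] = 1.0
    D = np.diag(W.sum(1))
    L = D - W  # graph Laplacian
    # Generalized eigenproblem  X^T L X a = lam X^T D X a ; the
    # projections are the eigenvectors with the smallest eigenvalues.
    A_mat = X.T @ L @ X
    B_mat = X.T @ D @ X + reg * np.eye(X.shape[1])
    vals, vecs = np.linalg.eig(np.linalg.solve(B_mat, A_mat))
    order = np.argsort(vals.real)
    return vecs[:, order[:n_components]].real
```

Methods like LPGDP, DLPP and MFA modify the graph construction step, injecting label (and here, misclassification) information into the affinities rather than using purely geometric neighbourhoods.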