Results 11–20 of 125,808
BLEU: a Method for Automatic Evaluation of Machine Translation
2002
Abstract
Cited by 2107 (4 self)
Human evaluations of machine translation are extensive but expensive. Human evaluations can take months to finish and involve human labor that cannot be reused.
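The core of the method the title names is easy to state: score a candidate translation by its modified n-gram precision against references, damped by a brevity penalty. Below is a minimal sentence-level, single-reference sketch with ad-hoc smoothing; the metric in the paper is corpus-level and supports multiple references, and the helper names here are my own.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Illustrative sentence-level BLEU against a single reference."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # "Modified" precision: clip each n-gram's count by its reference count.
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(max(overlap, 1e-9) / max(sum(cand.values()), 1))
    # Brevity penalty: candidates shorter than the reference are discounted.
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    # Geometric mean of the n-gram precisions.
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(bleu("the cat sat on the mat".split(), "the cat is on the mat".split()))
```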
Nonlinear component analysis as a kernel eigenvalue problem

1996
Abstract
Cited by 1554 (85 self)
We describe a new method for performing a nonlinear form of Principal Component Analysis. By the use of integral operator kernel functions, we can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance the space of all possible 5-pixel products in 16×16 images. We give the derivation of the method, along with a discussion of other techniques which can be made nonlinear with the kernel approach; and present first experimental results on nonlinear feature extraction for pattern recognition.
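As a concrete illustration of the eigenvalue problem the abstract describes, here is a minimal NumPy sketch: build a kernel matrix, center it in feature space, and read nonlinear principal components off its eigenvectors. The Gaussian kernel, the gamma value, and the toy data are assumptions for the example, not choices from the paper.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    n = X.shape[0]
    # Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Center the kernel matrix, i.e. center the mapped data in feature space.
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one
    # Eigendecompose; dividing each eigenvector by sqrt(eigenvalue) makes the
    # corresponding feature-space direction unit length.
    vals, vecs = np.linalg.eigh(Kc)
    top = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, top] / np.sqrt(vals[top])
    return Kc @ alphas  # nonlinear principal components of the training set

X = np.random.default_rng(0).normal(size=(100, 3))
print(kernel_pca(X).shape)  # (100, 2)
```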
Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories
 In CVPR
"... This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine subregions and computing histograms of local features found inside each subregion. The resulting “spatial pyrami ..."
Abstract

Cited by 1878 (52 self)
 Add to MetaCart
This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine subregions and computing histograms of local features found inside each subregion. The resulting “spatial pyramid” is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s “gist” and Lowe’s SIFT descriptors.
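The representation is simple enough to sketch directly: quantized local features are histogrammed over a 1×1, 2×2, 4×4, ... grid, and the per-level histograms are concatenated with the pyramid-match level weights. Feature extraction and the codebook are assumed given (in the paper they come from dense SIFT quantized by k-means); everything else below is illustrative.

```python
import numpy as np

def spatial_pyramid(points, codewords, n_words, levels=2):
    """points: (N, 2) feature locations in [0, 1)^2; codewords: (N,) codebook ids."""
    parts = []
    for level in range(levels + 1):
        cells = 2 ** level
        # Pyramid-match weights: 1/2^L for level 0, 1/2^(L-l+1) for level l >= 1,
        # so finer (more discriminative) levels count more.
        weight = 1.0 / 2 ** levels if level == 0 else 1.0 / 2 ** (levels - level + 1)
        cell_of = np.floor(points * cells).astype(int).clip(0, cells - 1)
        flat = cell_of[:, 0] * cells + cell_of[:, 1]
        for cell in range(cells * cells):
            hist = np.bincount(codewords[flat == cell], minlength=n_words)
            parts.append(weight * hist)
    vec = np.concatenate(parts).astype(float)
    return vec / max(vec.sum(), 1.0)  # normalize so images are comparable

rng = np.random.default_rng(0)
pts, ids = rng.random((500, 2)), rng.integers(0, 200, 500)
print(spatial_pyramid(pts, ids, n_words=200).shape)  # (4200,) = 200 * (1 + 4 + 16)
```

Two images are then compared with the histogram intersection kernel, sum(min(x, y)), over these weighted vectors.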
Estimating the Support of a High-Dimensional Distribution
1999
"... Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified between 0 and 1. We propo ..."
Abstract

Cited by 766 (29 self)
 Add to MetaCart
Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified ν between 0 and 1. We propose a method to approach this problem by trying to estimate a function f which is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a preliminary theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabelled data.
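This algorithm is what scikit-learn ships as OneClassSVM, so a minimal usage sketch shows the interface: nu upper-bounds the fraction of training points that end up outside the estimated region, and the learned f is exposed as decision_function. The data and kernel parameters below are illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))           # samples drawn from P
X_test = np.array([[0.0, 0.0], [4.0, 4.0]])   # a likely inlier and an outlier

clf = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X_train)
print(clf.predict(X_test))            # +1 inside the estimated support S, -1 outside
print(clf.decision_function(X_test))  # the function f: positive on S, negative off it
```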
Sparse Bayesian Learning and the Relevance Vector Machine
2001
Abstract
Cited by 958 (5 self)
This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the `relevance vector machine' (RVM), a model of identical functional form to the popular and state-of-the-art `support vector machine' (SVM). We demonstrate that by exploiting a probabilistic Bayesian learning framework, we can derive accurate prediction models which typically utilise dramatically fewer basis functions than a comparable SVM while offering a number of additional advantages. These include the benefits of probabilistic predictions, automatic estimation of `nuisance' parameters, and the facility to utilise arbitrary basis functions (e.g. non-`Mercer' kernels).
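For concreteness, here is a compact sketch of the evidence-maximization loop at the heart of the RVM for regression: each weight gets its own precision hyperparameter, and re-estimating those precisions drives most of them toward infinity, pruning the corresponding basis functions. The pruning threshold, basis functions, and toy data are my assumptions, not the paper's exact recipe.

```python
import numpy as np

def rvm_regression(Phi, t, n_iter=200, prune=1e6):
    """Phi: (N, M) design matrix; t: (N,) targets."""
    N = len(t)
    keep = np.arange(Phi.shape[1])   # indices of surviving basis functions
    alpha = np.ones(Phi.shape[1])    # per-weight prior precisions
    beta = 1.0                       # noise precision
    for _ in range(n_iter):
        # Posterior over weights given the current hyperparameters.
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        # Type-II maximum likelihood re-estimates (Tipping's update rules).
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / (mu ** 2 + 1e-12)
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)
        # Basis functions whose precision diverges contribute nothing: prune them.
        mask = alpha < prune
        alpha, Phi, keep = alpha[mask], Phi[:, mask], keep[mask]
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    return keep, beta * Sigma @ Phi.T @ t, beta

# Fit y = sin(x) with one Gaussian basis function centred on each input.
x = np.linspace(-5, 5, 50)
Phi = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
keep, w, beta = rvm_regression(Phi, np.sin(x))
print(f"{len(keep)} of 50 basis functions retained")  # typically a small handful
```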
Abduction in Logic Programming
"... Abduction in Logic Programming started in the late 80s, early 90s, in an attempt to extend logic programming into a framework suitable for a variety of problems in Artificial Intelligence and other areas of Computer Science. This paper aims to chart out the main developments of the field over th ..."
Abstract

Cited by 616 (76 self)
 Add to MetaCart
Abduction in Logic Programming started in the late 80s, early 90s, in an attempt to extend logic programming into a framework suitable for a variety of problems in Artificial Intelligence and other areas of Computer Science. This paper aims to chart out the main developments of the field over the last ten years and to take a critical view of these developments from several perspectives: logical, epistemological, computational and suitability to application. The paper attempts to expose some of the challenges and prospects for the further development of the field.
The Berkeley FrameNet Project
In Proceedings of COLING-ACL, 1998
"... FrameNet is a threeyear NSFsupported project in corpusbased computational lexicography, now in its second year #NSF IRI9618838, #Tools for Lexicon Building"#. The project's key features are #a# a commitment to corpus evidence for semantic and syntactic generalizations, and #b# the repr ..."
Abstract

Cited by 624 (3 self)
 Add to MetaCart
FrameNet is a three-year NSF-supported project in corpus-based computational lexicography, now in its second year (NSF IRI-9618838, "Tools for Lexicon Building"). The project's key features are (a) a commitment to corpus evidence for semantic and syntactic generalizations, and (b) the representation of the valences of its target words (mostly nouns, adjectives, and verbs) in which the semantic portion makes use of frame semantics. The resulting database will contain (a) descriptions of the semantic frames underlying the meanings of the words described, and (b) the valence representation (semantic and syntactic) of several thousand words and phrases, each accompanied by (c) a representative collection of annotated corpus attestations, which jointly exemplify the observed linkings between "frame elements" and their syntactic realizations (e.g. grammatical function, phrase type, and other syntactic traits). This report will present the project's goals and workflow, and information about the computational tools that have been adapted or created in-house for this work.
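To make the described database contents concrete, here is a hypothetical sketch of the three-part structure the abstract lists: frame descriptions, valence-bearing lexical units, and annotated attestations linking frame elements to their syntactic realizations. All class and field names are illustrative, not FrameNet's actual schema; the frame itself is the classic commercial-transaction example from frame semantics.

```python
from dataclasses import dataclass, field

@dataclass
class Attestation:
    sentence: str
    # frame element -> (grammatical function, phrase type) observed in context
    realizations: dict[str, tuple[str, str]]

@dataclass
class LexicalUnit:
    lemma: str
    pos: str                       # mostly nouns, adjectives, and verbs
    attestations: list[Attestation] = field(default_factory=list)

@dataclass
class Frame:
    name: str
    definition: str
    frame_elements: list[str]
    lexical_units: list[LexicalUnit] = field(default_factory=list)

commerce = Frame(
    name="Commercial_transaction",
    definition="A Buyer exchanges Money with a Seller for Goods.",
    frame_elements=["Buyer", "Seller", "Goods", "Money"],
    lexical_units=[LexicalUnit("buy", "V", [Attestation(
        "Kim bought a book from Lee.",
        {"Buyer": ("Subject", "NP"), "Goods": ("Object", "NP"),
         "Seller": ("Complement", "PP[from]")})])],
)
print(len(commerce.frame_elements))  # 4
```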
A Comparative Analysis of Methodologies for Database Schema Integration
ACM Computing Surveys, 1986
"... One of the fundamental principles of the database approach is that a database allows a nonredundant, unified representation of all data managed in an organization. This is achieved only when methodologies are available to support integration across organizational and application boundaries.
Metho ..."
Abstract

Cited by 642 (10 self)
 Add to MetaCart
One of the fundamental principles of the database approach is that a database allows a nonredundant, unified representation of all data managed in an organization. This is achieved only when methodologies are available to support integration across organizational and application boundaries.
Methodologies for database design usually perform the design activity by separately producing several schemas, representing parts of the application, which are subsequently merged. Database schema integration is the activity of integrating the schemas of existing or proposed databases into a global, unified schema. The aim of the paper is to provide first a unifying framework for the problem of schema integration, then a comparative review of the work done thus far in this area. Such a framework, with the associated analysis of the existing approaches, provides a basis for identifying strengths and weaknesses of individual methodologies, as well as general guidelines for future improvements and extensions.