Results 1 - 10 of 8,721
Quality classification of tandem mass spectrometry data
- BIOINFORMATICS
"... Motivation: Peptide identification by tandem mass spectrometry is an important tool in proteomic research. Powerful identification programs exist, such as SEQUEST, ProICAT and Mascot, which can relate experimental spectra to the theoretical ones derived from protein databases, thus removing much of ..."
Cited by 3 (0 self)
identification, and in this way to reduce the labour from the validation phase. Results: We propose a prefiltering scheme for evaluating the quality of spectra before the database search. The spectra are classified into two classes: spectra which contain valuable information for peptide identification
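The prefiltering idea lends itself to a short illustration. Below is a minimal sketch, assuming a spectrum is given as an array of (m/z, intensity) peaks and that a few hand-picked features (peak count, total ion current, base-peak dominance, m/z range) feed a linear score; the feature set, weights, and threshold are illustrative stand-ins, not the paper's classifier.

```python
import numpy as np

def spectrum_features(peaks):
    """Four simple features of one MS/MS spectrum given as (m/z, intensity) rows."""
    mz, intensity = peaks[:, 0], peaks[:, 1]
    return np.array([
        len(mz),                            # number of peaks
        intensity.sum(),                    # total ion current
        intensity.max() / intensity.sum(),  # dominance of the base peak
        mz.max() - mz.min(),                # m/z range covered
    ])

def quality_prefilter(spectra, weights, bias):
    """Two-class quality prefilter applied before the database search:
    keep a spectrum (True) when a linear score on its features exceeds zero."""
    return np.array([weights @ spectrum_features(p) + bias > 0 for p in spectra])

# toy usage with made-up spectra and hand-set weights (stand-ins for values
# that would be learned from manually validated spectra)
rng = np.random.default_rng(1)
toy_spectra = [rng.uniform([100, 0], [2000, 1e4], size=(rng.integers(20, 200), 2))
               for _ in range(5)]
print(quality_prefilter(toy_spectra, weights=np.array([0.02, 1e-5, -2.0, 0.001]), bias=-3.0))
```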
Rendering of Surfaces from Volume Data
- IEEE COMPUTER GRAPHICS AND APPLICATIONS
, 1988
"... The application of volume rendering techniques to the display of surfaces from sampled scalar functions of three spatial dimensions is explored. Fitting of geometric primitives to the sampled data is not required. Images are formed by directly shading each sample and projecting it onto the picture ..."
Cited by 875 (12 self)
the picture plane. Surface shading calculations are performed at every voxel with local gradient vectors serving as surface normals. In a separate step, surface classification operators are applied to obtain a partial opacity for every voxel. Operators that detect isovalue contour surfaces and region
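A compact sketch of the per-voxel pipeline described above: central-difference gradients act as surface normals for shading, and a separate classification step assigns each voxel a partial opacity that peaks near a chosen isovalue. The diffuse-only shading and the simple opacity ramp here are simplifications for illustration, not the paper's exact operators.

```python
import numpy as np

def gradients(volume):
    """Central-difference gradient at every voxel; its direction is used as
    the local surface normal and its magnitude feeds the classifier below."""
    gx, gy, gz = np.gradient(volume.astype(float))
    g = np.stack([gx, gy, gz], axis=-1)
    mag = np.linalg.norm(g, axis=-1)
    normals = g / np.maximum(mag[..., None], 1e-8)
    return normals, mag

def shade(normals, light_dir):
    """Diffuse (Lambertian) shading of every voxel from its gradient normal."""
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    return np.clip(np.abs(normals @ l), 0.0, 1.0)

def classify_isovalue(volume, grad_mag, isovalue, width=5.0):
    """Partial opacity per voxel: largest where the scalar value is near the
    isovalue, scaled by gradient magnitude so homogeneous regions stay clear."""
    closeness = np.clip(1.0 - np.abs(volume - isovalue) / width, 0.0, 1.0)
    return closeness * grad_mag / max(grad_mag.max(), 1e-8)

# usage: per-voxel colour = shade(...) * classify_isovalue(...), then composite
# front-to-back along each viewing ray (alpha blending) to form the image.
vol = np.fromfunction(lambda x, y, z: (x - 8) ** 2 + (y - 8) ** 2 + (z - 8) ** 2,
                      (16, 16, 16))
normals, mag = gradients(vol)
opacity = classify_isovalue(vol, mag, isovalue=36.0)
colour = shade(normals, light_dir=[1.0, 1.0, 0.5]) * opacity
```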
The click modular router
, 2001
"... Click is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements. Individual elements implement simple router functions like packet classification, queueing, scheduling, and interfacing with network devic ..."
Cited by 1167 (28 self)
Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods
- ADVANCES IN LARGE MARGIN CLASSIFIERS
, 1999
"... The output of a classifier should be a calibrated posterior probability to enable post-processing. Standard SVMs do not provide such probabilities. One method to create probabilities is to directly train a kernel classifier with a logit link function and a regularized maximum likelihood score. Howev ..."
Cited by 1051 (0 self)
However, training with a maximum likelihood score will produce non-sparse kernel machines. Instead, we train an SVM, then train the parameters of an additional sigmoid function to map the SVM outputs into probabilities. This chapter compares classification error rate and likelihood scores for an SVM plus
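The two-step recipe in the snippet (now usually called Platt scaling) is easy to sketch: train an SVM, then fit the parameters A, B of a sigmoid p(y=1|f) = 1/(1 + exp(A·f + B)) to the SVM's decision values by maximum likelihood on held-out data. The scikit-learn and SciPy calls below are one convenient way to do this, not the chapter's original code, and the toy data is made up for the example.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy binary labels
X_train, y_train = X[:100], y[:100]
X_cal, y_cal = X[100:], y[100:]                  # held-out calibration split

def fit_sigmoid(f, y):
    """Fit A, B of p(y=1|f) = 1/(1 + exp(A*f + B)) by minimising the negative
    log-likelihood of the calibration labels given the SVM outputs f."""
    def nll(params):
        A, B = params
        p = np.clip(1.0 / (1.0 + np.exp(A * f + B)), 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return minimize(nll, x0=[-1.0, 0.0], method="Nelder-Mead").x

svm = SVC(kernel="rbf").fit(X_train, y_train)            # step 1: train the SVM
A, B = fit_sigmoid(svm.decision_function(X_cal), y_cal)  # step 2: fit the sigmoid
probs = 1.0 / (1.0 + np.exp(A * svm.decision_function(X_cal) + B))
print(probs[:5])
```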
TABU SEARCH
"... Tabu Search is a metaheuristic that guides a local heuristic search procedure to explore the solution space beyond local optimality. One of the main components of tabu search is its use of adaptive memory, which creates a more flexible search behavior. Memory based strategies are therefore the hallm ..."
Cited by 822 (48 self)
. These networks have been widely used for both prediction and classification in many different areas. Although the most popular method for training these networks is backpropagation, other optimization methods such as tabu search have been applied to solve this problem. This chapter describes two training
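Since the snippet only names tabu search, here is a generic skeleton of the method to make the adaptive-memory idea concrete: single-bit-flip moves over bit strings, a tabu tenure on recently flipped positions, and an aspiration rule that overrides the tabu status when a move beats the best solution found so far. Training a neural network, as in the chapter, would replace the toy objective and move structure.

```python
import random

def tabu_search(objective, n_bits, iters=200, tenure=7):
    """Maximise objective(bits) over {0,1}^n_bits with single-bit-flip moves.
    Recently flipped positions stay 'tabu' for `tenure` iterations; this
    short-term memory pushes the search past local optima."""
    current = [random.randint(0, 1) for _ in range(n_bits)]
    best, best_val = current[:], objective(current)
    tabu_until = [0] * n_bits               # iteration each position is tabu until

    for it in range(1, iters + 1):
        candidates = []
        for i in range(n_bits):
            neighbour = current[:]
            neighbour[i] ^= 1
            val = objective(neighbour)
            # aspiration: a tabu move is allowed if it beats the best so far
            if tabu_until[i] <= it or val > best_val:
                candidates.append((val, i, neighbour))
        val, i, current = max(candidates)    # best admissible neighbour
        tabu_until[i] = it + tenure
        if val > best_val:
            best, best_val = current[:], val
    return best, best_val

# toy objective: count of ones (optimum is the all-ones string)
print(tabu_search(lambda b: sum(b), n_bits=20))
```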
Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm
- Machine Learning
, 1988
"... learning Boolean functions, linear-threshold algorithms Abstract. Valiant (1984) and others have studied the problem of learning various classes of Boolean functions from examples. Here we discuss incremental learning of these functions. We consider a setting in which the learner responds to each ex ..."
Cited by 773 (5 self)
example according to a current hypothesis. Then the learner updates the hypothesis, if necessary, based on the correct classification of the example. One natural measure of the quality of learning in this setting is the number of mistakes the learner makes. For suitable classes of functions, learning
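The mistake-bound protocol is easy to state in code. The sketch below uses a multiplicative linear-threshold update in the spirit of the paper's Winnow algorithm (the promotion factor, threshold, and toy target are illustrative choices): the learner predicts with its current hypothesis and changes the weights only when it errs, so the mistake count is exactly the quality measure mentioned above.

```python
def winnow(examples, n, alpha=2.0):
    """Mistake-driven linear-threshold learner over n Boolean attributes.

    examples: iterable of (x, y) with x a 0/1 list of length n and y in {0, 1}.
    Predict 1 iff w.x >= n/2; on a mistake, multiplicatively promote or demote
    the weights of the attributes that were on. Returns weights and mistakes.
    """
    w = [1.0] * n
    theta = n / 2.0
    mistakes = 0
    for x, y in examples:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if pred != y:
            mistakes += 1
            factor = alpha if y == 1 else 1.0 / alpha   # promote or demote
            w = [wi * factor if xi else wi for wi, xi in zip(w, x)]
    return w, mistakes

# toy target: x[0] OR x[2]; the remaining attributes are irrelevant
data = [([i & 1, (i >> 1) & 1, (i >> 2) & 1, 0, 1, 0, 1],
         1 if (i & 1) or ((i >> 2) & 1) else 0) for i in range(8)] * 5
print(winnow(data, n=7))
```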
Mining the Peanut Gallery: Opinion Extraction and Semantic Classification of Product Reviews
, 2003
"... The web contains a wealth of product reviews, but sifting through them is a daunting task. Ideally, an opinion mining tool would process a set of search results for a given item, generating a list of product attributes (quality, features, etc.) and aggregating opinions about each of them (poor, mixe ..."
Cited by 453 (0 self)
Enhanced hypertext categorization using hyperlinks
, 1998
"... A major challenge in indexing unstructured hypertext databases is to automatically extract meta-data that enables structured search using topic taxonomies, circumvents keyword ambiguity, and improves the quality of search and profile-based routing and filtering. Therefore, an accurate classifier is ..."
Cited by 453 (8 self)
is an essential component of a hypertext database. Hyperlinks pose new problems not addressed in the extensive text classification literature. Links clearly contain high-quality semantic clues that are lost upon a purely term-based classifier, but exploiting link information is non-trivial because it is noisy
Support Vector Machines for Classification and Regression
- UNIVERSITY OF SOUTHAMPTON, TECHNICAL REPORT
, 1998
"... The problem of empirical data modelling is germane to many engineering applications.
In empirical data modelling a process of induction is used to build up a model of the
system, from which it is hoped to deduce responses of the system that have yet to be observed.
Ultimately the quantity and qualit ..."
Cited by 357 (5 self)
and quality of the observations govern the performance of this empirical model. By its observational nature data obtained is finite and sampled; typically this sampling is non-uniform and due to the high dimensional nature of the problem the data will form only a sparse distribution in the input space
On Projection Algorithms for Solving Convex Feasibility Problems
, 1996
"... Due to their extraordinary utility and broad applicability in many areas of classical mathematics and modern physical sciences (most notably, computerized tomography), algorithms for solving convex feasibility problems continue to receive great attention. To unify, generalize, and review some of the ..."
Cited by 331 (43 self)
of these algorithms, a very broad and flexible framework is investigated. Several crucial new concepts which allow a systematic discussion of questions on behaviour in general Hilbert spaces and on the quality of convergence are brought out. Numerous examples are given. 1991 M.R. Subject Classification. Primary 47H
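As a concrete instance of the projection algorithms being surveyed, here is a small alternating-projections sketch for a two-set feasibility problem; the halfspace and the ball are arbitrary example sets, and cycling the two Euclidean projections drives the iterate into their intersection when it is nonempty.

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto the ball ||x - center|| <= radius."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfspace(x, a, b):
    """Euclidean projection onto the halfspace {x : a.x <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def alternating_projections(x0, iters=100):
    """Cyclic projections for the feasibility problem 'find x in C1 ∩ C2'."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        x = project_halfspace(x, a=np.array([1.0, 1.0]), b=1.0)  # C1
        x = project_ball(x, center=np.zeros(2), radius=2.0)       # C2
    return x

print(alternating_projections([5.0, 5.0]))   # converges to a feasible point
```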