Results 1–10 of 19
Prediction of protein stability changes for single-site mutations, Proteins, 2006
Cited by 74 (2 self)
Abstract: Accurate prediction of protein stability changes resulting from single amino acid mutations is important for understanding protein structures and designing new proteins. We use support vector machines to predict protein stability changes for single amino acid mutations, leveraging both sequence and structural information. We evaluate our approach using cross-validation methods on a large dataset of single amino acid mutations. When only the sign of the stability change is considered, the predictive method achieves 84% accuracy, a significant improvement over previously published results. Moreover, the experimental results show that the prediction accuracy obtained using sequence alone is close to the accuracy obtained using tertiary structure information. Because our method can accurately predict protein stability changes using primary sequence information only, it is applicable to many situations where the tertiary structure is unknown, overcoming a major limitation of previous methods that require tertiary information. The web server for predictions of protein stability changes upon mutations (MUpro), software, and datasets are available at www.igb.uci.edu/servers/servers.html.
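The sign-of-change classification described above can be illustrated with a minimal sketch. The feature encoding here is a stand-in (random vectors and a synthetic labeling rule in place of the paper's real sequence/structure encodings of mutations), so this shows only the shape of the approach, not the published method.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in data: in the paper, each single-site mutation is encoded from
# sequence (and optionally structure) context; here we use random features
# and a synthetic linear rule to produce +1/-1 stability-change signs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))         # hypothetical mutation feature vectors
y = np.sign(X @ rng.normal(size=40))   # hypothetical stability-change signs

clf = SVC(kernel="linear", C=10.0).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

Swapping the stand-in features for a real mutation encoding leaves the classifier call unchanged, which is the point of treating sign prediction as ordinary binary classification.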
A greedy algorithm for optimizing the kernel alignment and the performance of kernel machines, in Proc. EUSIPCO ’06
Cited by 3 (1 self)
Kernel-target alignment has recently been proposed as a criterion for measuring the degree of agreement between a reproducing kernel and a learning task. It makes it possible to find a powerful kernel for a given classification problem without designing any classifier. In this paper, we present an alternating optimization strategy, based on a greedy algorithm for maximizing the alignment over linear combinations of kernels, and a gradient descent to adjust the free parameters of each kernel. Experimental results show an improvement in the classification performance of support vector machines, and a drastic reduction in the training time.
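The alignment criterion and the greedy selection step can be sketched as follows. `greedy_combination` is a simplified illustration (unweighted sums of base Gram matrices, no gradient step on kernel parameters), not the paper's full alternating procedure.

```python
import numpy as np

def alignment(K, y):
    """Kernel-target alignment A(K, yy^T) = <K, yy^T>_F / (||K||_F ||yy^T||_F)."""
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

def greedy_combination(kernels, y, steps=10):
    """Greedily accumulate the base Gram matrix that most improves alignment."""
    K = max(kernels, key=lambda Kb: alignment(Kb, y)).copy()
    for _ in range(steps):
        cand = max(kernels, key=lambda Kb: alignment(K + Kb, y))
        if alignment(K + cand, y) <= alignment(K, y):
            break                      # no candidate improves the alignment
        K = K + cand
    return K
```

With the ideal kernel yy^T among the candidates, the greedy pass selects it immediately, since its alignment with the labels is exactly 1.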
Vulnerability of machine learning models to adversarial examples
Abstract: We propose a genetic algorithm for generating adversarial examples for machine learning models. This approach is able to find adversarial examples without access to the model's parameters. Different models are tested, including both deep and shallow neural network architectures. We show that RBF networks and SVMs with Gaussian kernels tend to be rather robust and not prone to misclassification of adversarial examples.
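A black-box attack of this flavor can be sketched with a toy genetic algorithm. The linear `model` below is a stand-in for the classifiers tested in the paper, and the GA operators (truncation selection, uniform crossover, Gaussian mutation) are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_adversarial(model, x, eps=0.3, pop=30, gens=50):
    """Evolve an L-inf bounded perturbation that flips the model's decision,
    using only the model's output score (no access to parameters)."""
    P = rng.uniform(-eps, eps, size=(pop, x.size))    # population of perturbations
    orig = model(x) > 0                               # original predicted class
    for _ in range(gens):
        fit = np.array([model(x + p) for p in P])
        fit = -fit if orig else fit                   # push toward the other class
        P = P[np.argsort(fit)[::-1]]                  # best individuals first
        if (model(x + P[0]) > 0) != orig:
            return x + P[0]                           # misclassified: done
        children = []
        for _ in range(pop - 5):                      # elitism: keep top 5
            a, b = P[rng.integers(0, 10, size=2)]     # parents from top 10
            mask = rng.random(x.size) < 0.5           # uniform crossover
            c = np.where(mask, a, b) + rng.normal(0, 0.05, x.size)
            children.append(np.clip(c, -eps, eps))    # stay inside the L-inf ball
        P = np.vstack([P[:5], children])
    return None
```

The fitness function queries only the model's output, matching the no-parameter-access setting the abstract emphasizes.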
Kernel-Based Implicit Regularization of Structured Objects, 2010 International Conference on Pattern Recognition
Abstract—Weighted graph regularization provides a rich framework for regularizing functions defined over the vertices of a weighted graph. Until now, such a framework has only been defined for real- or multi-valued functions, thereby restricting the regularization framework to numerical data. On the other hand, several kernels have been defined so far on structured objects such as strings or graphs. Using positive definite kernels, each original object is associated, by the “kernel trick”, with one element of a Hilbert space. As a consequence, this paper proposes to extend the weighted graph regularization framework to objects implicitly defined by their kernel, thereby performing the regularization within the Hilbert space associated with the kernel. This work opens the door to the regularization of structured objects. Keywords: kernel; graph-based regularization; total variation; classification; discrete structures.
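For real-valued functions, the setting this abstract generalizes, the framework reduces to Tikhonov smoothing with the graph Laplacian. A minimal numeric sketch (the kernelized extension would apply the same smoothing to the RKHS coordinates of the structured objects rather than to scalars):

```python
import numpy as np

def graph_laplacian(W):
    """Combinatorial Laplacian L = D - W of a symmetric weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def regularize(W, f0, lam=1.0):
    """argmin_f ||f - f0||^2 + lam * f^T L f, solved via (I + lam*L) f = f0."""
    L = graph_laplacian(W)
    return np.linalg.solve(np.eye(len(f0)) + lam * L, f0)
```

Because the Laplacian annihilates constants, the smoothed signal keeps the mean of the input while shrinking its variation along graph edges.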
Supervised and Localized Dimensionality Reduction from Multiple Feature Representations or Kernels
Abstract: We propose a supervised and localized dimensionality reduction method that combines multiple feature representations or kernels. Each feature representation or kernel is used where it is suitable, through a parametric gating model, in a supervised manner for efficient dimensionality reduction and classification, and local projection matrices are learned for each feature representation or kernel. The kernel machine parameters, the local projection matrices, and the gating model parameters are optimized using an alternating optimization procedure composed of kernel machine training and gradient-descent updates. Empirical results on benchmark data sets validate the method in terms of classification accuracy, smoothness of the solution, and ease of visualization.
ONLINE LEARNING WITH KERNELS: A NEW APPROACH FOR SPARSITY CONTROL BASED ON A COHERENCE CRITERION
Kernel methods are well-known standard tools for solving function approximation and pattern classification problems. In this paper, we consider online learning in a reproducing kernel Hilbert space. We develop a simple and computationally efficient algorithm for sparse solutions. The approach is based on sequential projection learning and the coherence criterion, a fundamental parameter for characterizing dictionaries of functions in sparse approximation problems. Experimental results show the effectiveness of our approach.
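The coherence-based sparsification rule can be sketched in a few lines. The Gaussian kernel and the threshold `mu0` here are illustrative choices, and the projection-learning update on the retained dictionary is omitted.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def update_dictionary(D, x, mu0=0.5, sigma=1.0):
    """Admit x into dictionary D only if its coherence with every atom stays
    below mu0; for a unit-norm Gaussian kernel the coherence of x with atom d
    is simply |k(x, d)|, so small coherence keeps atoms nearly orthogonal in
    the RKHS and the expansion sparse."""
    if all(abs(gaussian_kernel(x, d, sigma)) <= mu0 for d in D):
        D.append(x)
    return D
```

Samples too similar to an existing atom are discarded, so the dictionary size is bounded by the geometry of the input space rather than by the length of the stream.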
KERNELIZED FUZZY C-MEANS METHOD IN SEGMENTATION OF
Keywords: fuzzy c-means, kernel functions, segmentation, Multiple Sclerosis
In the current study, an alternative approach to fuzzy clustering in a kernel space has been tested. First, a “kernel trick” is applied to the fuzzy c-means (FCM) algorithm. Later, the modified method is employed in an automated segmentation of demyelination plaques in Multiple Sclerosis.
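The kernel trick mentioned here replaces the Euclidean distances in FCM with feature-space distances. A minimal sketch of the resulting kernelized FCM (Gaussian kernel, naive initialization from the first samples; the segmentation-specific preprocessing is omitted):

```python
import numpy as np

def gk(X, V, sigma):
    """Gaussian kernel between broadcast rows of X and V."""
    return np.exp(-np.sum((X - V) ** 2, axis=-1) / (sigma ** 2))

def kfcm(X, c=2, m=2.0, sigma=1.0, iters=30):
    """Kernelized fuzzy c-means: the kernel trick turns the squared distance
    ||phi(x) - phi(v)||^2 into k(x,x) - 2 k(x,v) + k(v,v), which equals
    2 (1 - k(x,v)) for a Gaussian kernel."""
    V = X[:c].astype(float).copy()                    # naive init: first c samples
    for _ in range(iters):
        K = gk(X[:, None, :], V[None, :, :], sigma)   # (n, c) kernel values
        d2 = 2.0 * (1.0 - K) + 1e-12                  # feature-space distances
        U = (1.0 / d2) ** (1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)             # fuzzy memberships
        W = (U ** m) * K                              # weights for prototype update
        V = (W[:, :, None] * X[:, None, :]).sum(axis=0) / W.sum(axis=0)[:, None]
    return U, V
```

Prototypes remain in the input space while distances are measured in the kernel-induced feature space, which is what makes the clustering more tolerant of non-spherical intensity distributions in segmentation tasks.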