Results 1–10 of 63
An introduction to kernel-based learning algorithms
IEEE Transactions on Neural Networks, 2001
"... This paper provides an introduction to support vector machines (SVMs), kernel Fisher discriminant analysis, and ..."
Cited by 598 (55 self)
Approximation algorithms for metric facility location and k-median problems using the . . .
A survey of outlier detection methodologies
Artificial Intelligence Review, 2004
Cited by 312 (3 self)
Abstract. Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise due to mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences. It can identify errors and remove their contaminating effect on the data set and as such to purify the data for processing. The original outlier detection methods were arbitrary but now, principled and systematic techniques are used, drawn from the full gamut of Computer Science and Statistics. In this paper, we introduce a survey of contemporary techniques for outlier detection. We identify their respective motivations and distinguish their advantages and disadvantages in a comparative review.
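The statistical end of the gamut the survey covers can be illustrated with the oldest technique of all: flag observations that lie more than a few standard deviations from the sample mean. This is a generic sketch of that classic z-score test, not code from the survey; the function name and default threshold are our own.

```python
# Minimal z-score outlier detector: flag points more than `threshold`
# sample standard deviations from the mean. A generic illustration of
# the statistical family of methods, not an algorithm from the survey.
from statistics import mean, stdev

def zscore_outliers(data, threshold=3.0):
    """Return the values in `data` whose absolute z-score exceeds `threshold`."""
    mu = mean(data)
    sigma = stdev(data)
    if sigma == 0:
        return []  # constant data: no point stands out
    return [x for x in data if abs(x - mu) / sigma > threshold]
```

On a small sample with one gross error, e.g. `[10, 11, 9, 10, 12, 100]`, a threshold of 2 flags only the value 100; note that the error itself inflates the standard deviation, which is exactly the contaminating effect the survey mentions.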
Robust support vector method for hyperspectral data classification and knowledge discovery
IEEE Transactions on Geoscience and Remote Sensing, 2004
Cited by 36 (7 self)
Abstract — In this paper, we propose the use of Support Vector Machines (SVM) for automatic hyperspectral data classification and knowledge discovery. In the first stage of the study, we use SVMs for crop classification and analyze their performance in terms of efficiency and robustness, as compared to extensively used neural and fuzzy methods. Efficiency is assessed by evaluating accuracy and statistical differences in several scenes. Robustness is analyzed in terms of (a) suitability to working conditions when a feature selection stage is not possible, and (b) performance when different levels of Gaussian noise are introduced at their inputs. In the second stage of this work, we analyze the distribution of the support vectors (SV) and perform sensitivity analysis on the best classifier in order to analyze the significance of the input spectral bands. For classification purposes, six hyperspectral images acquired with the 128-band HyMAP spectrometer during the DAISEX-1999 campaign are used. Six crop classes were labelled for each image. A reduced set of labelled samples is used to train the models and the entire images are used to assess their performance. Several conclusions are drawn: (1) SVMs yield better outcomes than neural networks regarding accuracy, simplicity and robustness; (2) training neural and neuro-fuzzy models is unfeasible when working with high-dimensional input spaces and great amounts of training data; (3) SVMs perform similarly for different training subsets with varying input dimension, which indicates that noisy bands are successfully detected; and (4) a valuable ranking of bands through sensitivity analysis is achieved. Index Terms — Hyperspectral imagery, crop classification, knowledge discovery, Support Vector Machines, neural networks.
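The robustness experiments in point (b) hinge on corrupting the input spectra with Gaussian noise at controlled levels. A minimal sketch of such a perturbation, assuming a signal-to-noise-ratio-in-decibels convention; the function name, the seed handling, and the SNR convention are our assumptions, not the paper's.

```python
# Sketch: perturb a spectral vector with zero-mean Gaussian noise whose
# variance is set by a requested signal-to-noise ratio in dB. Our own
# illustration of the kind of robustness test the abstract describes.
import random

def add_gaussian_noise(spectrum, snr_db, seed=0):
    """Return `spectrum` with additive Gaussian noise at the given SNR (dB)."""
    rng = random.Random(seed)  # seeded for reproducible experiments
    signal_power = sum(v * v for v in spectrum) / len(spectrum)
    noise_power = signal_power / (10 ** (snr_db / 10))
    sigma = noise_power ** 0.5
    return [v + rng.gauss(0, sigma) for v in spectrum]
```

Evaluating a trained classifier on `add_gaussian_noise(x, 20)` versus `add_gaussian_noise(x, 5)` gives a crude robustness curve of the sort compared across SVM, neural, and fuzzy models.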
Approximating k-median via pseudo-approximation
In Proceedings of the Forty-fifth Annual ACM Symposium on Theory of Computing (STOC ’13), 2013
Cited by 31 (2 self)
We present a novel approximation algorithm for k-median that achieves an approximation guarantee of 1 + √3 + ε, improving upon the decade-old ratio of 3 + ε. Our approach is based on two components, each of which, we believe, is of independent interest. First, we show that in order to give an α-approximation algorithm for k-median, it is sufficient to give a pseudo-approximation algorithm that finds an α-approximate solution by opening k + O(1) facilities. This is a rather surprising result, as there exist instances for which opening k + 1 facilities may lead to a significantly smaller cost than if only k facilities were opened. Second, we give such a pseudo-approximation algorithm with α = 1 + √3 + ε. Prior to our work, it was not even known whether opening k + o(k) facilities would help improve the approximation ratio.
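The "surprising" gap between opening k and k + 1 facilities is easy to reproduce on a toy line metric with a brute-force solver. This is our own illustration of the phenomenon, not the paper's algorithm, which is far more involved.

```python
# Sketch: exact k-median by exhaustive search, feasible only for tiny
# instances. Facilities may open at client locations; the cost is the
# sum of client-to-nearest-facility distances on the line.
from itertools import combinations

def kmedian_cost(points, k):
    """Optimal k-median cost of 1-D `points` by brute force."""
    return min(
        sum(min(abs(p - f) for f in facilities) for p in points)
        for facilities in combinations(points, k)
    )
```

On three well-separated pairs, `[0, 1, 100, 101, 200, 201]`, the optimal cost drops from 200 with k = 2 (one pair must be served from far away) to 3 with k = 3: a single extra facility shrinks the optimum by two orders of magnitude.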
Statistical Analysis of Financial Networks
2005
Cited by 23 (1 self)
Massive datasets arise in a broad spectrum of scientific, engineering and commercial applications. In many practically important cases, a massive dataset can be represented as a very large graph with certain attributes associated with its vertices and edges. Studying the structure of this graph is essential for understanding the structural properties of the application it represents. Well-known examples of applying this approach are the Internet graph, the Web graph, and the Call graph. It turns out that the degree distributions of all these graphs can be described by the power-law model. Here we consider another important application: a network representation of the stock market. Stock markets generate huge amounts of data, which can be used for constructing the market graph reflecting the market behavior. We conduct a statistical analysis of this graph and show that it also follows the power-law model. Moreover, we detect cliques and independent sets in this graph. These special formations have a clear practical interpretation, and their analysis allows one to apply a new data mining technique of classifying financial instruments based on stock price data, which provides a deeper insight into the internal structure of the stock market.
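The market graph the authors study is built by thresholding pairwise correlations of instrument returns. A minimal sketch of that construction; the ticker names, toy data, and the 0.5 threshold below are our placeholders, and the power-law fit and clique detection the abstract describes are beyond a sketch.

```python
# Sketch: connect two instruments by an edge when the Pearson correlation
# of their return series exceeds a threshold. The threshold and data are
# illustrative, not values from the paper.
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def market_graph(returns, threshold=0.5):
    """Edge list over `returns`, a dict mapping ticker -> list of returns."""
    tickers = sorted(returns)
    edges = []
    for i, a in enumerate(tickers):
        for b in tickers[i + 1:]:
            if pearson(returns[a], returns[b]) > threshold:
                edges.append((a, b))
    return edges
```

The degree distribution of the resulting graph is what the statistical analysis in the paper fits against the power-law model.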
A MINSAT approach for learning in logic domains
INFORMS Journal on Computing, 2002
Cited by 23 (16 self)
This paper describes a method for learning logic relationships that correctly classify a given data set. The method derives from given logic data certain minimum-cost satisfiability problems, solves these problems, and deduces from the solutions the desired logic relationships. Uses of the method include data mining, learning logic in expert systems, and identification of critical characteristics for recognition systems. Computational tests have proved that the method is fast and effective.
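The MINSAT formulation itself requires a satisfiability solver, but the end-to-end task — find a small logic formula consistent with labeled boolean data — can be sketched with a brute-force search over short conjunctions. This is our stand-in for the flavor of the problem, not the paper's satisfiability machinery.

```python
# Sketch: search for a conjunction of at most `max_literals` literals
# that is true on every positive example and false on every negative one.
# Examples are tuples of 0/1 feature values; a literal is (index, polarity).
from itertools import combinations, product

def learn_conjunction(pos, neg, max_literals=2):
    """Return a consistent conjunction as [(feature_index, polarity), ...],
    or None if no conjunction of the allowed size separates the data."""
    n = len(pos[0])
    for size in range(1, max_literals + 1):
        for idxs in combinations(range(n), size):
            for signs in product([1, 0], repeat=size):
                lits = list(zip(idxs, signs))
                def holds(x):
                    return all(x[i] == s for i, s in lits)
                if all(holds(x) for x in pos) and not any(holds(x) for x in neg):
                    return lits
    return None
```

For instance, with positives `(1,0,1)` and `(1,1,1)` against negatives `(0,0,1)` and `(1,0,0)`, the search recovers the rule "feature 0 AND feature 2".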
Alexander Tuzhilin, On the Use of Optimization for Data Mining: Theoretical Interactions and eCRM Opportunities
Management Science, 2003
Cited by 20 (1 self)
Previous work on the solution to analytical electronic customer relationship management (eCRM) problems has used either data-mining (DM) or optimization methods, but has not combined the two approaches. By leveraging the strengths of both approaches, the eCRM problems of customer analysis, customer interactions, and the optimization of performance metrics (such as the lifetime value of a customer on the Web) can be better analyzed. In particular, many eCRM problems have been traditionally addressed using DM methods. There are opportunities for optimization to improve these methods, and this paper describes these opportunities. Further, an online appendix (mansci.pubs.informs.org/ecompanion.html) describes how DM methods can help optimization-based approaches. More generally, this paper argues that the reformulation of eCRM problems within this new framework of analysis can result in more powerful analytical approaches.
Approximating K-means-type clustering via semidefinite programming
2005
Cited by 13 (2 self)
One of the fundamental clustering problems is to assign n points into k clusters based on the minimal sum-of-squares criterion (MSSC), which is known to be NP-hard. In this paper, by using matrix arguments, we first model MSSC as a so-called 0-1 semidefinite program (SDP). We show that our 0-1 SDP model provides a unified framework for several clustering approaches such as normalized k-cut and spectral clustering. Moreover, the 0-1 SDP model allows us to solve the underlying problem approximately via relaxed linear and semidefinite programming. Secondly, we consider the issue of how to extract a feasible solution of the original MSSC model from the approximate solution of the relaxed SDP problem. By using principal component analysis, we develop a rounding procedure to construct a feasible partitioning from a solution of the relaxed problem. In our rounding procedure, we need to solve a k-means clustering problem in ℜ^{k−1}, which can be solved in O(n^{k²(k−1)}) time. In the case of biclustering, the running time of our rounding procedure can be reduced to O(n log n). We show that our algorithm can provide a 2-approximate solution to the original problem. Promising numerical results based on our new method are reported.
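The biclustering case at the end is the easiest to make concrete: for k = 2 in one dimension, the optimal clusters are contiguous once the points are sorted, so an exact solution only has to try each split point — which is where an O(n log n)-style bound comes from. A sketch of that sub-problem (ours, not the paper's SDP machinery):

```python
# Sketch: exact 2-means in one dimension. After sorting, optimal clusters
# are contiguous intervals, so testing every split point suffices.
def two_means_1d(values):
    """Partition `values` into the two clusters minimizing within-cluster
    sum of squared deviations; returns the two sorted clusters."""
    xs = sorted(values)

    def sse(seg):
        # Sum of squared deviations from the segment mean.
        if not seg:
            return 0.0
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    best = min(range(1, len(xs)), key=lambda i: sse(xs[:i]) + sse(xs[i:]))
    return xs[:best], xs[best:]
```

The naive version above is quadratic; maintaining running sums of the prefix and suffix brings the scan itself to linear time, leaving the sort as the bottleneck.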
Genetic programming in classifying large-scale data: an ensemble method
Information Sciences, 2004
Cited by 10 (0 self)
This study demonstrates the potential of genetic programming (GP) as a base-classifier algorithm in building ensembles in the context of large-scale data classification. An ensemble built upon base classifiers that were trained with GP was found to significantly outperform its counterparts built upon base classifiers that were trained with decision trees and logistic regression. The superiority of GP ensembles is attributed to the higher diversity among the base classifiers, both in the functional form of the models and in the variables defining them.
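The ensemble mechanism itself is independent of how the base classifiers were trained; with the GP-evolved models replaced by arbitrary callables, a majority-vote combiner is a few lines. The names below are our placeholders, not the paper's implementation.

```python
# Sketch: majority-vote ensemble. Each base classifier is any callable
# mapping an input to a class label; GP-trained models would slot in here.
from collections import Counter

def ensemble_predict(classifiers, x):
    """Return the label predicted by the most base classifiers for `x`."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]
```

The diversity argument in the abstract is visible even here: the combiner only helps when the base classifiers disagree on some inputs while being individually better than chance.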