Results 1 - 10 of 31
Increasing and Decreasing Returns and Losses in Mutual Information Feature Subset Selection
- ENTROPY, 2010
Discriminative least squares regression for multiclass classification and feature selection
- IEEE TNNLS, 2012
Cited by 2 (1 self)
Abstract — This paper presents a framework of discriminative least squares regression (LSR) for multiclass classification and feature selection. The core idea is to enlarge the distance between different classes under the conceptual framework of LSR. First, a technique called ε-dragging is introduced to force the regression targets of different classes to move in opposite directions, so that the distances between classes are enlarged. The ε-draggings are then integrated into the LSR model for multiclass classification. Our learning framework, referred to as discriminative LSR, has a compact model form: there is no need to train mutually independent two-class machines. Owing to this compact form, the model extends naturally to feature selection, achieved via the L2,1 norm of the matrix, which yields a sparse learning model. Both the multiclass classification model and its feature-selection extension are solved efficiently. Experimental evaluation over a range of benchmark datasets confirms the validity of our method. Index Terms — Feature selection, least squares regression, multiclass classification, sparse learning.
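The ε-dragging idea can be illustrated with a minimal numerical sketch. This is a simplification of the paper's method: here ε is a single fixed slack applied uniformly to all target entries, whereas the paper learns a nonnegative dragging matrix jointly with the regression; the function names and the ridge term `lam` are illustrative assumptions.

```python
import numpy as np

def epsilon_dragging_targets(y, n_classes, eps):
    """Build dragged regression targets: the +1 entry of each one-vs-all
    target row is pushed up by eps and the -1 entries pushed down,
    enlarging the margin between classes (simplified: uniform eps,
    whereas the paper learns a per-entry dragging matrix)."""
    n = len(y)
    T = -np.ones((n, n_classes))
    T[np.arange(n), y] = 1.0
    B = np.where(T > 0, 1.0, -1.0)   # dragging directions
    return T + eps * B

def lsr_fit(X, T, lam=1e-2):
    """Ridge least squares regression: W = (X'X + lam*I)^-1 X'T."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)

def lsr_predict(X, W):
    """Assign each sample to the class with the largest regression score."""
    return np.argmax(X @ W, axis=1)
```

Dragged targets feed into an ordinary ridge solve, which is what gives the model its compact, multiclass-in-one form.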
Binary particle swarm optimisation for feature selection: A filter based approach
- In: IEEE Congress on Evolutionary Computation, CEC 2012
Cited by 2 (1 self)
Abstract — Based on binary particle swarm optimisation (BPSO) and information theory, this paper proposes two new filter feature selection methods for classification problems. The first algorithm uses BPSO and the mutual information of each pair of features to determine the relevance and redundancy of the selected feature subset. The second algorithm uses BPSO and the entropy of each group of features to evaluate the relevance and redundancy of the selected feature subset. Different weights for relevance and redundancy in the fitness functions of the two algorithms are used to further improve their performance in terms of the number of features and the classification accuracy. In the experiments, a decision tree (DT) is employed to evaluate the classification accuracy of the selected feature subsets on the test sets of four datasets. The results show that, with proper weights, the two proposed algorithms can significantly reduce the number of features and achieve similar or even higher classification accuracy in almost all cases. The first algorithm usually selects a smaller feature subset, while the second achieves higher classification accuracy.
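The first algorithm's pairwise-MI fitness can be sketched as below. This is a minimal illustration under stated assumptions: a plug-in discrete MI estimator, a single weight `alpha` trading relevance against redundancy, and a plain sum over selected pairs; the BPSO search loop itself is omitted.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of MI between two discrete sequences (in nats)."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum((c / n) * np.log(n * c / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def fitness(mask, X, y, alpha=0.8):
    """Weighted relevance-redundancy fitness of a binary feature mask:
    alpha * sum_i I(f_i; y) - (1 - alpha) * sum_{i<j} I(f_i; f_j)."""
    sel = [i for i, bit in enumerate(mask) if bit]
    if not sel:
        return 0.0
    rel = sum(mutual_information(X[:, i], y) for i in sel)
    red = sum(mutual_information(X[:, i], X[:, j])
              for i in sel for j in sel if i < j)
    return alpha * rel - (1 - alpha) * red
```

In a BPSO wrapper, each particle's binary position plays the role of `mask`, and this value serves as the particle's fitness.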
Mutual Information Refinement for Flash-no-Flash Image Alignment
Cited by 1 (0 self)
Abstract. Flash-no-flash imaging aims to combine ambient light images with the details available in flash images. Flash can alter color intensities radically, changing gradient directions and strengths; natural shadows may be removed and new ones created. This makes flash-no-flash image pair alignment a challenging problem. In this paper, we present a new image registration method that uses mutual information to refine point-matching accuracy. For a phase correlation based method, the accuracy improvement through the suggested point refinement was over 40%. The new method also outperformed the reference methods SIFT and SURF in alignment accuracy, by 3.0% and 9.1% respectively. Visual inspection confirmed that in several cases the proposed method succeeded in registering flash-no-flash image pairs where the tested reference methods failed.
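The mutual-information score at the heart of such registration can be sketched as follows: a generic histogram-based MI between two grayscale patches. The paper's phase-correlation stage and point-refinement loop are not reproduced here, and the bin count is an arbitrary choice.

```python
import numpy as np

def image_mi(a, b, bins=16):
    """Mutual information between two equally-sized grayscale patches,
    estimated from a joint intensity histogram. Higher values indicate
    better statistical alignment, even when intensities differ
    (e.g. between flash and no-flash exposures)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A refinement loop would perturb a candidate point match and keep the offset that maximizes this score between the corresponding patches.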
CMIM-2: An Enhanced Conditional Mutual Information Maximization Criterion for Feature Selection
A new greedy feature selection criterion is proposed as an enhancement of the conditional mutual information maximization (CMIM) criterion. The new criterion, called CMIM-2, detects relevant features that are complementary for class prediction better than the original criterion does. In addition, we present a methodology that approximates the conditional mutual information in spaces of three variables, avoiding its estimation in high-dimensional spaces. Experimental results on artificial and UCI benchmark datasets show that the proposed criterion outperforms the original CMIM criterion.
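The original CMIM criterion that CMIM-2 enhances can be sketched as a greedy loop: each step adds the feature whose worst-case conditional relevance, min over already-selected features s of I(f; y | s), is largest. This uses a plug-in discrete estimator; the CMIM-2 enhancement and the three-variable approximation are not reproduced here.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of MI between two discrete sequences (in nats)."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum((c / n) * np.log(n * c / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def conditional_mi(f, y, s):
    """I(f; y | s): MI of f and y within each stratum of s, weighted by p(s)."""
    n = len(s)
    total = 0.0
    for v in set(s.tolist()):
        idx = np.flatnonzero(s == v)
        total += len(idx) / n * mutual_information(f[idx], y[idx])
    return total

def cmim_select(X, y, k):
    """Greedy CMIM: repeatedly add the feature f maximizing
    min_{s in selected} I(f; y | s) (plain MI for the first pick)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        def score(f):
            if not selected:
                return mutual_information(X[:, f], y)
            return min(conditional_mi(X[:, f], y, X[:, s]) for s in selected)
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

The min makes the criterion reject features that any selected feature already renders redundant, while rewarding complementary ones, such as the second input of an XOR-style label.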
Feature Extraction by Mutual Information Based on Minimal-Redundancy-Maximal-Relevance Criterion and Its Application to Classifying EEG Signal for Brain-Computer Interfaces
Selecting Negative Samples for PPI Prediction Using Hierarchical Clustering Methodology
Protein-protein interactions (PPIs) play a crucial role in cellular processes. In the present work, a new approach is proposed to construct a PPI predictor by training a support vector machine model through a mutual information filter-wrapper parallel feature selection algorithm and an iterative, hierarchical clustering to select a relevant negative training set. By means of the selected suboptimal set of features, the constructed support vector machine model is able to classify PPIs with high accuracy on any positive and negative datasets.
MIFS-ND: A Mutual Information-based Feature Selection Method
Abstract — Feature selection is used to choose a subset of relevant features for effective classification of data. In high-dimensional data classification, the performance of a classifier often depends on the feature subset used for classification. In this paper, we introduce a greedy feature selection method using mutual information. The method combines feature-feature mutual information and feature-class mutual information to find an optimal subset of features that minimizes redundancy and maximizes relevance among features. The effectiveness of the selected feature subset is evaluated using multiple classifiers on multiple datasets. Compared with several competing feature selection techniques, our method performs significantly well, in terms of both classification accuracy and execution time, on twelve real-life datasets of varied dimensionality and numbers of instances.
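A simplified greedy variant of the relevance-redundancy trade-off described above can be sketched as follows. This is an assumption-laden illustration: it scores each candidate by its feature-class MI minus its average feature-feature MI with the already-selected set, whereas MIFS-ND itself ranks candidates with a domination-count scheme borrowed from multi-objective optimization.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in estimate of MI between two discrete sequences (in nats)."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum((c / n) * np.log(n * c / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def greedy_mi_select(X, y, k):
    """Greedy relevance-minus-redundancy selection (simplified stand-in
    for MIFS-ND's domination-count ranking): add the feature maximizing
    I(f; y) - mean_{s in selected} I(f; s)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        def score(f):
            rel = mutual_information(X[:, f], y)
            if not selected:
                return rel
            red = np.mean([mutual_information(X[:, f], X[:, s])
                           for s in selected])
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

The redundancy penalty is what steers the search away from near-duplicate features: an exact copy of an already-selected feature scores its own relevance minus the full MI with that feature, which is strongly negative.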