Results 1-6 of 6

### Relevance Analysis based on graph theory and spectral analysis

2012

Abstract

We present a new method for relevance analysis based on spectral information, approached from a graph-theory point of view. The method uses Gaussian kernels instead of conventional quadratic forms, thereby avoiding the need for a linear-combination-based representation. To this end, we implement an extended approach to relevance analysis using alternative kernels, in this case exponential ones. To assess the performance of the proposed method, we apply a clustering algorithm commonly used and recommended in the literature: normalized-cuts-based clustering. Experimental results are obtained from well-known image and toy databases and are comparable with those reported in the literature.
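The pipeline this abstract describes, a Gaussian-kernel affinity followed by normalized-cuts clustering, can be sketched in plain NumPy. This is a minimal illustration of standard two-way normalized cuts, not the paper's relevance-analysis method itself; the bandwidth `sigma` and the two-blob toy data are assumptions.

```python
import numpy as np

def gaussian_affinity(X, sigma=1.0):
    """Gaussian (RBF) kernel affinity matrix between all point pairs."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def normalized_cut_bipartition(X, sigma=1.0):
    """Two-way normalized cut via the symmetric normalized Laplacian."""
    W = gaussian_affinity(X, sigma)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, vecs = np.linalg.eigh(L_sym)       # eigenvalues in ascending order
    fiedler = D_inv_sqrt @ vecs[:, 1]     # relaxed cluster indicator vector
    return (fiedler > 0).astype(int)

# Toy example: two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(5.0, 0.3, (20, 2))])
labels = normalized_cut_bipartition(X, sigma=1.0)
```

On separable data like this, the sign of the second Laplacian eigenvector recovers the two groups exactly.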

### Normalized Cuts Clustering with Prior Knowledge and a Pre-clustering Stage

2013

Abstract

Clustering is of interest when data are insufficiently labeled and a prior training stage is infeasible. In particular, spectral clustering based on graph partitioning is attractive for problems with highly non-linearly separable classes. However, spectral methods, such as the well-known normalized cuts, involve the computation of eigenvectors, which is a highly time-consuming task for large data. In this work, we propose an alternative way to solve the normalized cuts problem for clustering, achieving the same results as conventional spectral methods while spending less processing time. Our method consists of a heuristic search for the best binary cluster indicator matrix: pairs of nodes with the highest similarity values are grouped first, and the remaining nodes are clustered by a heuristic search over the similarity-based representation space. The proposed method is tested on a public-domain image data set. Results show that our method reaches comparable quality at a lower computational cost.
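An eigenvector-free, greedy approach of the kind this abstract outlines can be sketched as single-linkage agglomeration on the affinity matrix: the most similar node pairs are merged first until the desired number of clusters remains. This is a hedged stand-in under that assumption, not the authors' exact heuristic search or indicator-matrix construction.

```python
import numpy as np

def greedy_similarity_clustering(W, k):
    """Merge the most similar pairs first (union-find) until k clusters remain."""
    n = W.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # All off-diagonal pairs, sorted by decreasing similarity
    pairs = sorted(((W[i, j], i, j) for i in range(n)
                    for j in range(i + 1, n)), reverse=True)
    clusters = n
    for _, i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            clusters -= 1
            if clusters == k:
                break
    roots = [find(i) for i in range(n)]
    ids = {r: c for c, r in enumerate(dict.fromkeys(roots))}
    return [ids[r] for r in roots]

# Toy usage: Gaussian affinity of two separated blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (15, 2)),
               rng.normal(5.0, 0.3, (15, 2))])
sq = (X**2).sum(axis=1)
W = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T) / 2.0)
labels = greedy_similarity_clustering(W, k=2)
```

Sorting all pairs costs O(n^2 log n), which avoids the eigendecomposition entirely; whether it matches spectral quality on harder data depends on the heuristic, as the paper investigates.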

### ISOTROPY CRITERIA AND ALGORITHMS FOR DATA CLUSTERING

2011

Abstract

Given a set of points, the goal of data clustering is to group them into clusters such that the internal homogeneity of points within each cluster contrasts with inter-cluster heterogeneity. Over the last fifty years, many methods for data clustering have been developed in diverse scientific communities. However, many of these methods suffer from several shortcomings and are unable to handle the rich diversity of cluster structures usually present in data. We develop an unsupervised, nonparametric approach to data clustering that addresses these shortcomings. Our goal is to build on the strengths of existing methods while offering innovative solutions to their limitations. In our cluster model, clusters are seen as groups of points with overlapping neighborhoods that have similar spatial structures contrasting with their surroundings. We use the isotropy of a point distribution to characterize spatial structure, and we argue that identifying the isotropic density neighborhoods of a point helps in detecting a diversity of cluster structures that are challenging for many other methods. We develop three different criteria for identifying neighborhoods with isotropic density. The first criterion is based on examining properties of one-dimensional projections in a hyperspherical neighborhood with uniform point distribution. The second and third criteria are based on the analysis of the force
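The first criterion, examining one-dimensional projections of a neighborhood, can be illustrated with a simple score: project the centered points onto random unit directions and compare the variances of the resulting 1-D projections, which are nearly equal for an isotropic cloud. The score, the number of directions, and the thresholds below are illustrative assumptions, not the paper's formal criterion.

```python
import numpy as np

def isotropy_score(points, n_dirs=50, seed=0):
    """Ratio of min to max projection variance over random directions.

    Close to 1 for an isotropic point cloud, close to 0 for an
    elongated (anisotropic) one. Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    centered = points - points.mean(axis=0)
    dirs = rng.normal(size=(n_dirs, points.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    variances = np.var(centered @ dirs.T, axis=0)
    return variances.min() / variances.max()

rng = np.random.default_rng(1)
ball = rng.normal(size=(500, 2))                                  # isotropic cloud
line = np.c_[rng.normal(size=500), 0.05 * rng.normal(size=500)]   # elongated cloud
ball_score = isotropy_score(ball)
line_score = isotropy_score(line)
```

An isotropic neighborhood scores near 1 while an elongated one scores near 0, so thresholding such a score could flag neighborhoods whose density is directionally uniform.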

### Constrained affinity matrix for spectral clustering: A basic semi-supervised extension

### transform and unsupervised techniques on thermal imaging

Abstract
Most applications in the analysis of thermal images aim to detect changes in patterns [1]. Thermal patterns provide sufficient information about the structure of a particular device, giving the opportunity to raise alerts and perform preventive actions in the process. The wavelet transform is useful for characterizing this kind of change because its localized scale information reflects the nature of the representation: relevant details are preserved when the scale of the representation changes, while details that provide no information about regions of interest are excluded [2]. The patterns obtained in the characterization phase can then be clustered using unsupervised techniques. Patterns are compared with an adequate metric to determine the principal cluster of the dataset, and additional criteria can be employed to initialize the cluster centroids and obtain a better outcome. These criteria are motivated by the non-homogeneous representation of the clusters, which follows from the nature of the thermal distribution: it changes smoothly across the image, making automatic detection of those regions difficult. A change of representation space is therefore necessary to ease the clustering process. Changes in the initial groups can then be detected when the clusters are first identified under normal operating conditions of the device and the image is represented in the wavelet domain. A pattern change is defined as a change in the number of clusters or in the size of some cluster; it represents a fault candidate, making it possible to reach a decision about the actual state of the device. The state of the art proposes several metrics that can be used, such as the Minkowski metric, the Mahalanobis distance, and correlation, among others [3]. In this context, other parameters can be introduced to improve the results of the technique: initialization methods,
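The pipeline sketched above, wavelet characterization followed by unsupervised clustering, might look as follows using a single-level Haar transform and a basic k-means, both written in plain NumPy. The block size, the subband-energy features, and the k-means initialization are assumptions for illustration, not the authors' exact choices.

```python
import numpy as np

def haar2d(block):
    """Single-level 2-D Haar transform of a square block (even sides)."""
    a = (block[0::2, :] + block[1::2, :]) / 2.0
    d = (block[0::2, :] - block[1::2, :]) / 2.0
    aa = (a[:, 0::2] + a[:, 1::2]) / 2.0   # approximation
    ad = (a[:, 0::2] - a[:, 1::2]) / 2.0   # horizontal detail
    da = (d[:, 0::2] + d[:, 1::2]) / 2.0   # vertical detail
    dd = (d[:, 0::2] - d[:, 1::2]) / 2.0   # diagonal detail
    return aa, ad, da, dd

def wavelet_features(image, block=8):
    """Energy of each Haar subband for every non-overlapping block."""
    feats = []
    for i in range(0, image.shape[0] - block + 1, block):
        for j in range(0, image.shape[1] - block + 1, block):
            subbands = haar2d(image[i:i + block, j:j + block])
            feats.append([np.sum(s**2) for s in subbands])
    return np.array(feats)

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's k-means with random-sample initialization."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centroids[None])**2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels

# Synthetic stand-in for a thermal image: smooth ramp left, noisy texture right
img = np.zeros((32, 64))
img[:, :32] = np.linspace(0.0, 1.0, 32)[None, :]
rng = np.random.default_rng(2)
img[:, 32:] = rng.normal(size=(32, 32))
F = wavelet_features(img, block=8)
labels = kmeans(F, 2)
```

Blocks over the smooth region carry almost no detail-subband energy, while blocks over the textured region carry much more, so the feature space separates the two kinds of regions before any clustering is applied.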

### Optimal Data Projection for Kernel Spectral Clustering

Abstract
Spectral clustering has taken an important place in pattern recognition, being a good alternative for solving problems with non-linearly separable groups. Because of their unsupervised nature, clustering methods are often parametric, requiring some initial parameters, so clustering performance depends greatly on the selection of those parameters. Furthermore, tuning such parameters is not an easy task when the initial data representation is inadequate. Here, we propose a new projection of the input data to improve cluster identification within a kernel spectral clustering framework. The proposed projection is derived from a feature extraction formulation in which a generalized distance involving the kernel matrix is used. The data projection proves useful for improving the performance of kernel spectral clustering.
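Since the abstract does not specify the generalized distance, a kernel-PCA-style projection onto the leading components of the centered kernel matrix can serve as a hedged sketch of what "projecting data through the kernel matrix before clustering" can look like. The RBF kernel, its bandwidth, and the toy data are all assumptions.

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

def kernel_pca_projection(X, n_components=2, sigma=1.0):
    """Project data onto leading components of the centered kernel matrix."""
    K = rbf_kernel(X, sigma)
    n = len(X)
    ones = np.ones((n, n)) / n
    Kc = K - ones @ K - K @ ones + ones @ K @ ones   # double-centering
    vals, vecs = np.linalg.eigh(Kc)                  # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]      # pick the largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy example: two separated blobs become linearly separable in the projection
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(5.0, 0.3, (20, 2))])
Z = kernel_pca_projection(X, n_components=2, sigma=1.0)
```

After such a projection, even a simple clustering step on `Z` can identify the groups, which is the general motivation the abstract gives for projecting before kernel spectral clustering.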