
## Nonlinear Dimensionality Reduction via Path-Based Isometric Mapping

*IEEE Transactions on Pattern Analysis and Machine Intelligence*

### Citations

4699 | Self-organizing maps
- Kohonen
- 1997
Citation Context: ...Analysis (PCA) and MultiDimensional Scaling (MDS) [8] in harnessing nonlinear data structures [9], [10]. However, their high time and memory complexity impose severe limitations on their scalability [11]. To overcome this drawback, we set out to develop a method with lower computational costs yet the same performance. Throughout the paper it is assumed that data samples lie on a smooth low-dimensiona...

2452 | A global geometric framework for nonlinear dimensionality reduction. Science 290: 2319--2323
- Tenenbaum, Silva
- 2000
Citation Context: ...The need to analyze and visualize multivariate data has yielded a surge of interest in dimensionality reduction research [1], [2], [3], [4]. In particular, manifold learning techniques such as Isomap [5], Locally Linear Embedding (LLE) [6], and Laplacian Eigenmaps [7] have outperformed classical methods like Principal Component Analysis (PCA) and MultiDimensional Scaling (MDS) [8] in harnessing nonli...

2408 | Nonlinear Dimensionality Reduction by Locally Linear Embedding. Science 290
- Roweis, Saul
Citation Context: ...ltivariate data has yielded a surge of interest in dimensionality reduction research [1], [2], [3], [4]. In particular, manifold learning techniques such as Isomap [5], Locally Linear Embedding (LLE) [6], and Laplacian Eigenmaps [7] have outperformed classical methods like Principal Component Analysis (PCA) and MultiDimensional Scaling (MDS) [8] in harnessing nonlinear data structures [9], [10]. Howe...

1226 | Laplacian eigenmaps for dimensionality reduction and data representation
- Belkin, Niyogi
- 2003
Citation Context: ...surge of interest in dimensionality reduction research [1], [2], [3], [4]. In particular, manifold learning techniques such as Isomap [5], Locally Linear Embedding (LLE) [6], and Laplacian Eigenmaps [7] have outperformed classical methods like Principal Component Analysis (PCA) and MultiDimensional Scaling (MDS) [8] in harnessing nonlinear data structures [9], [10]. However, their high time and memo...

720 | Modern multidimensional scaling: Theory and applications
- Borg, Groenen
- 1997
Citation Context: ...ques such as Isomap [5], Locally Linear Embedding (LLE) [6], and Laplacian Eigenmaps [7] have outperformed classical methods like Principal Component Analysis (PCA) and MultiDimensional Scaling (MDS) [8] in harnessing nonlinear data structures [9], [10]. However, their high time and memory complexity impose severe limitations on their scalability [11]. To overcome this drawback, we set out to develop...

403 | Graph Algorithms
- Even
- 1979
Citation Context: ...it is not relevant to the number of samples [5], [6], [7]. Computation of shortest paths in the SSPC algorithm requires O(PN log N) = O(N^(1+γ) log N) computations for applying Dijkstra's algorithm [20] O(N^γ) times. The Singular Value Decomposition (SVD) used in the optimization problem requires O(P^3) = O(N^(3γ)) multiplications [21]. So the total time complexity of the algorithm is O(N^(1+γ) l...

274 | Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data
- Donoho, Grimes
- 2003
Citation Context: ...ion is to discover compact representations of high-dimensional data. The need to analyze and visualize multivariate data has yielded a surge of interest in dimensionality reduction research [1], [2], [3], [4]. In particular, manifold learning techniques such as Isomap [5], Locally Linear Embedding (LLE) [6], and Laplacian Eigenmaps [7] have outperformed classical methods like Principal Component Anal...

211 | Curvilinear component analysis: A self-organizing neural network for nonlinear mapping of data sets
- Demartines, Herault
- 1997
Citation Context: ...n recognition is to discover compact representations of high-dimensional data. The need to analyze and visualize multivariate data has yielded a surge of interest in dimensionality reduction research [1], [2], [3], [4]. In particular, manifold learning techniques such as Isomap [5], Locally Linear Embedding (LLE) [6], and Laplacian Eigenmaps [7] have outperformed classical methods like Principal Comp...

205 | Optimal multistep k-nearest neighbor search
- Seidl, Kriegel
- 1998
Citation Context: ...ted as G = (V, E) in which V = {x_1, x_2, ..., x_N} denotes the set of nodes, and E is the set of edges connecting neighboring samples. Two ways of determining the neighbors of a point are K-nearest neighbors [16], or all points within a fixed range ε. In this paper we utilize the former method. For neighboring nodes x_i and x_j the weight is taken to be w_{i,j} = ||x_i − x_j||_2. If we take x_i⇝x_j to be the shortest ...
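The snippet above describes constructing the neighborhood graph G = (V, E) with K-nearest neighbors and Euclidean edge weights w_{i,j} = ||x_i − x_j||_2. A minimal sketch of that construction, assuming a plain NumPy implementation (the function name, adjacency layout, and default K are illustrative, not from the paper):

```python
# Hypothetical sketch: K-nearest-neighbor graph with Euclidean edge weights.
# Names (build_knn_graph, K) and the dict-of-dicts layout are illustrative.
import numpy as np

def build_knn_graph(X, K=5):
    """Return adjacency as {i: {j: weight}}; edges are made symmetric."""
    N = X.shape[0]
    # Brute-force pairwise Euclidean distances, O(N^2) for clarity;
    # tree- or hash-based neighbor search (as in [16], [18]) is faster.
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    graph = {i: {} for i in range(N)}
    for i in range(N):
        # Skip index 0 of the sort (the point itself) and keep the K closest.
        for j in np.argsort(dists[i])[1:K + 1]:
            w = float(dists[i, j])
            graph[i][int(j)] = w
            graph[int(j)][i] = w  # undirected graph
    return graph
```

Making each edge symmetric means a node can end up with more than K neighbors, which matches the usual "mutual K-NN union" convention for building G.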

169 | Bonnesen-style isoperimetric inequalities
- Osserman
- 1979
Citation Context: ...rding the assumption about n, needs O(N). The third and most important factor is the memory needed to save the shortest paths. In the limit that N goes to infinity, based on the isoperimetric inequality [22], the average length of these paths would be lower than O(N^(1/K)). Given the number of paths O(N^γ), the memory complexity of this component is O(N^(1/K+γ)). So the total memory complexity is O(...

143 | Random projections of smooth manifolds
- Baraniuk, Wakin
- 2006
Citation Context: ...me this drawback, we set out to develop a method with lower computational costs yet the same performance. Throughout the paper it is assumed that data samples lie on a smooth low-dimensional manifold [12], [13]. In the first stage, these data samples are covered by a set of geodesic paths, resulting in a network of intersecting routes. The main point is that data samples that belong to a geodesic path...

124 | Graph Approximations to Geodesics on Embedded Manifolds
- Bernstein, Silva, et al.
- 2000
Citation Context: ...of geodesic paths, resulting in a network of intersecting routes. The main point is that data samples that belong to a geodesic path approximately lie on a straight line in the compact representation [14]. Thus a mapping scheme is developed to compute the lines in the destination space. The scheme is formulated as an optimization problem that attempts to preserve topology of the network of paths inste...

110 | Dimensionality reduction: A comparative review
- van der Maaten, Postma, van den Herik
- 2009
Citation Context: ...icient methods have been proposed for construction of the neighborhood graph G [18], [16]. However, since the procedure is shared among all state-of-the-art methods, it is not taken into consideration [19]. Moreover, it is assumed that the number of neighbors, denoted by n, in the K-nearest algorithm is O(1), since in practice it is not relevant to the number of samples [5], [6], [7]. Computation of shortest p...

94 | The isomap algorithm and topological stability
- Balasubramanian, Schwartz
- 2002
Citation Context: ...resentation by estimating pairwise geodesic distances. For sufficiently close pairs, referred to as neighboring points, the Euclidean distance provides a good approximation of geodesic distance [14], [15]. For faraway points, one needs to walk through these neighboring pairs in the shortest way possible to evaluate the geodesic distance. That can be achieved efficiently by applying a shortest path alg...
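The snippet above describes the standard Isomap-style trick: approximate the geodesic distance between faraway points by the shortest path through neighboring pairs on the graph. A minimal sketch using Dijkstra's algorithm over a dict-of-dicts adjacency `{i: {j: weight}}` (the function name and graph layout are illustrative assumptions):

```python
# Hypothetical sketch: single-source shortest paths on the neighborhood
# graph, used as an approximation of geodesic distances on the manifold.
import heapq

def geodesic_distances(graph, source):
    """Approximate geodesic distance from `source` to every node in `graph`."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]  # (distance, node) min-priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

With a binary heap this runs in O(|E| log |V|) per source, which is the O(N log N)-per-Dijkstra-call cost assumed in the complexity analysis quoted elsewhere on this page.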

93 | Nearest neighbor in high-dimensional spaces
- Indyk
- 2004
Citation Context: ...lay out time and memory complexity analysis of the Path-based Isomap and compare it to a number of existing methods. Efficient methods have been proposed for construction of the neighborhood graph G [18], [16]. However, since the procedure is shared among all state-of-the-art methods, it is not taken into consideration [19]. Moreover, it is assumed that the number of neighbors, denoted by n, in K-nearest ...

92 | Large-scale sparse singular value computations
- Berry
- 1992
Citation Context: ...parsity of this matrix for LLE and Laplacian-Eigenmaps the complexity will be reduced. Hence computations in the latter stage will be O(N^3) for Isomap, and O(N^2) for LLE and Laplacian-Eigenmaps [24]. For memory the only important component, that is the memory required by SVD, is O(N^2) [24]. Fig. 5: Applying Path-Based Is...

33 | Singular value decomposition: application to analysis of experimental data
- ER, Hofrichter
- 1992
Citation Context: ...O(N^(1+γ) log N) computations for applying Dijkstra's algorithm [20] O(N^γ) times. The Singular Value Decomposition (SVD) used in the optimization problem requires O(P^3) = O(N^(3γ)) multiplications [21]. So the total time complexity of the algorithm is O(N^(1+γ) log N + N^(3γ)). For memory analysis, there are three major components. First, SVD requires O(N^(2γ)) [21]. Second, the n neighborhood...

33 | Manifold alignment using procrustes analysis
- Wang, Mahadevan
- 2008
Citation Context: ...act that for pairs on opposite sides of the hole, the shortest path on G will no longer serve as the Euclidean distance in low-dimensional space. This might lead to the failure of the whole algorithm [25]. However, Path-Based Isomap demonstrates acceptable resiliency to such non-normality. This effect can be understood as a consequence of (16), in that good estimations in (16) will correct poor ones t...

21 | Simultaneous dimensionality reduction and human age estimation via kernel partial least squares regression
- Guo, Mu
- 2011
Citation Context: ...(LLE) [6], and Laplacian Eigenmaps [7] have outperformed classical methods like Principal Component Analysis (PCA) and MultiDimensional Scaling (MDS) [8] in harnessing nonlinear data structures [9], [10]. However, their high time and memory complexity impose severe limitations on their scalability [11]. To overcome this drawback, we set out to develop a method with lower computational costs yet the s...

16 | Shortest path methods: Complexity, interrelations and new propositions
- Pallottino
- 1984
Citation Context: ...paths O(N^γ), the memory complexity of this component is O(N^(1/K+γ)). So the total memory complexity is O(N + N^(2γ) + N^(1/K+γ)). Isomap requires O(N^2 ∼ N^3) for computing shortest paths [15], [23]. Isomap, LLE and Laplacian-Eigenmaps need the SVD of an N × N matrix in the final stage [5], [6], [7]. Due to the sparsity of this matrix for LLE and Laplacian-Eigenmaps the complexity will be reduced...

13 | Stochastic neighbor embedding. Advances in neural information processing systems
- Hinton, Roweis
- 2003
Citation Context: ...ognition is to discover compact representations of high-dimensional data. The need to analyze and visualize multivariate data has yielded a surge of interest in dimensionality reduction research [1], [2], [3], [4]. In particular, manifold learning techniques such as Isomap [5], Locally Linear Embedding (LLE) [6], and Laplacian Eigenmaps [7] have outperformed classical methods like Principal Component...

9 | Iterative nearest neighbors for classification and dimensionality reduction
- Timofte, Van Gool
Citation Context: ...dding (LLE) [6], and Laplacian Eigenmaps [7] have outperformed classical methods like Principal Component Analysis (PCA) and MultiDimensional Scaling (MDS) [8] in harnessing nonlinear data structures [9], [10]. However, their high time and memory complexity impose severe limitations on their scalability [11]. To overcome this drawback, we set out to develop a method with lower computational costs yet...

1 | Rankvisu: Mapping from the neighborhood network
- Lespinats, Fertil, et al.
- 2009
Citation Context: ...s to discover compact representations of high-dimensional data. The need to analyze and visualize multivariate data has yielded a surge of interest in dimensionality reduction research [1], [2], [3], [4]. In particular, manifold learning techniques such as Isomap [5], Locally Linear Embedding (LLE) [6], and Laplacian Eigenmaps [7] have outperformed classical methods like Principal Component Analysis ...

1 | Learning manifolds in the wild
- Hegde, Sankaranarayanan, et al.
- 2012
Citation Context: ...s drawback, we set out to develop a method with lower computational costs yet the same performance. Throughout the paper it is assumed that data samples lie on a smooth low-dimensional manifold [12], [13]. In the first stage, these data samples are covered by a set of geodesic paths, resulting in a network of intersecting routes. The main point is that data samples that belong to a geodesic path appro...

1 | Graph covering via shortest paths
- Boothe, Dvorák, et al.
- 2007
(Show Context)
Citation Context ...hastic algorithm for covering the graph with a set of shortest paths called ”Stochastic Shortest Path Covering (SSPC)”. The general problem of graph covering via shortest paths is known to be NP-hard =-=[17]-=-. Thus we set out to find a sub-optimal solution with a stochastic approach. In practice our sub-optimal algorithm yields substantial time and space savings. The general idea is to iteratively cover a... |