Results 1–10 of 25,866
Nonlinear dimensionality reduction by locally linear embedding
Science, 2000
"... Many areas of science ..."
Temporal nonlinear dimensionality reduction
In The 2011 International Joint Conference on Neural Networks (IJCNN), 2011
"... Existing nonlinear dimensionality reduction (NLDR) algorithms assume that distances between observations are uniformly scaled. Unfortunately, for many interesting systems this assumption does not hold. We present a new technique called Temporal NLDR (TNLDR), which is specif ..."
Cited by 4 (3 self)
Global Versus Local Methods in Nonlinear Dimensionality Reduction
2003
"... Recently proposed algorithms for nonlinear dimensionality reduction fall broadly into two categories with different advantages and disadvantages: global (Isomap [1]) and local (Locally Linear Embedding [2], Laplacian Eigenmaps [3]). We present two variants of Isomap which combine the adva ..."
Cited by 208 (6 self)
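The "global" method named above, Isomap, approximates geodesic distances with shortest paths over a nearest-neighbor graph and then applies classical MDS to those distances. A compact numpy sketch, assuming the neighborhood graph is connected (names and defaults are illustrative):

```python
import numpy as np

def isomap(X, n_neighbors=8, n_components=2):
    """Minimal Isomap sketch: kNN graph -> shortest paths -> classical MDS."""
    n = X.shape[0]
    d = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    # Build a symmetric kNN graph; non-edges start at infinity.
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    idx = np.argsort(d, axis=1)[:, 1:n_neighbors + 1]  # skip self at position 0
    for i in range(n):
        G[i, idx[i]] = d[i, idx[i]]
        G[idx[i], i] = d[i, idx[i]]
    # Geodesic distances via Floyd-Warshall (assumes a connected graph).
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS on the squared geodesic distance matrix.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
```

The O(n^3) Floyd-Warshall step is only for clarity; real implementations use Dijkstra on a sparse graph.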
Semisupervised nonlinear dimensionality reduction
In ICML, 2006
"... The problem of nonlinear dimensionality reduction is considered. We focus on problems where prior information is available, namely semisupervised dimensionality reduction. It is shown that basic nonlinear dimensionality reduction algorithms, such as Locally Linear Embedding (LLE), Isometric featur ..."
Cited by 32 (2 self)
Nonlinear Dimensionality Reduction for Regression
"... The task of dimensionality reduction for regression (DRR) is to find a low-dimensional representation z ∈ R^q of the input covariates x ∈ R^p, with q ≪ p, for regressing the output y ∈ R^d. DRR can be beneficial for visualization of high-dimensional data, efficient regressor design with a reduced in ..."
for Nonlinear Dimensionality Reduction
"... 23; right 36, 13, and 27); superior frontal gyrus (left −9, 31, and 45; right 17, 35, and 37). 17. Although the improvement in WM performance with cholinergic enhancement was a nonsignificant trend in the current study (P = 0.07), in a previous study (9) with a larger sample (n = 13) the effect was highly significant (P < 0.001). In the current study, we analyzed RT data for six of our seven subjects because the behavioral data for one subject were unavailable due to a computer failure. The difference in the significance of the two findings is simply a result of the difference in sample sizes. A power analysis shows that the size of the RT difference and variability in the current sample would yield a significant result (P < 0.01) with a sample size of 13. During the memory trials, mean RT was 1180 ms during placebo and 1119 ms during physostigmine. During the control trials, mean RT was 735 ms during placebo and 709 ms during physostigmine, a difference that did not approach significance (P = 0.24), suggesting that the effect of cholinergic enhancement on WM performance is not due to a nonspecific increase in arousal. 18. Matched-pair t tests (two-tailed) were used to test the significance of drug-related changes in the volume of regions of interest that showed significant response contrasts. ..."
Nonlinear Dimensionality Reduction as Information Retrieval
"... Nonlinear dimensionality reduction has so far been treated either as a data representation problem or as a search for a lower-dimensional manifold embedded in the data space. A main application for both is in information visualization, to make visible the neighborhood or proximity relationships in th ..."
Cited by 14 (6 self)
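The information-retrieval view this paper proposes amounts to scoring an embedding by how well low-dimensional neighborhoods retrieve the true high-dimensional neighbors. A toy sketch of such a precision-style score (a simple set overlap, not the paper's actual measure; names are illustrative):

```python
import numpy as np

def knn_sets(X, k):
    """Indices of each point's k nearest neighbors (self excluded)."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def neighborhood_precision(X_high, X_low, k=5):
    """Mean fraction of low-dim neighbors that are also high-dim neighbors."""
    hi, lo = knn_sets(X_high, k), knn_sets(X_low, k)
    return float(np.mean([len(set(hi[i]) & set(lo[i])) / k
                          for i in range(len(hi))]))
```

A perfect embedding scores 1.0; the complementary recall-style score would swap the roles of the two neighbor sets.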
Nonlinear Dimensionality Reduction using Approximate Nearest Neighbors
"... Nonlinear dimensionality reduction methods often rely on the nearest-neighbors graph to extract low-dimensional embeddings that reliably capture the underlying structure of high-dimensional data. Research, however, has shown that computing nearest neighbors of a point from a high-dimensional data set g ..."
Cited by 3 (0 self)
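One cheap way to approximate the nearest-neighbor graph these methods depend on is to search in a random low-dimensional projection of the data, in the spirit of Johnson–Lindenstrauss. This is an illustrative sketch, not the specific algorithm of the paper above:

```python
import numpy as np

def approx_knn(X, k=10, proj_dim=8, seed=0):
    """Approximate kNN: project to a random low-dim subspace, search there."""
    rng = np.random.default_rng(seed)
    # Random Gaussian projection roughly preserves pairwise distances.
    R = rng.standard_normal((X.shape[1], proj_dim)) / np.sqrt(proj_dim)
    Z = X @ R
    d2 = ((Z[:, None] - Z[None]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)       # exclude self-matches
    return np.argsort(d2, axis=1)[:, :k]
```

The exact kNN search now costs O(n^2 · proj_dim) instead of O(n^2 · p); production systems would instead use tree- or hash-based indexes.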
Learning a kernel matrix for nonlinear dimensionality reduction
In Proceedings of the Twenty-First International Conference on Machine Learning (ICML-04), 2004
"... We investigate how to learn a kernel matrix for high-dimensional data that lies on or near a low-dimensional manifold. Noting that the kernel matrix implicitly maps the data into a nonlinear feature space, we show how to discover a mapping that “unfolds” the underlying manifold from which the data ..."
Cited by 152 (9 self)
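Once a kernel matrix K has been learned (the paper does this with semidefinite programming, omitted here), the embedding is read off from K's top eigenvectors, as in kernel PCA. A minimal numpy sketch of that final extraction step (names are illustrative):

```python
import numpy as np

def embed_from_kernel(K, n_components=2):
    """Embedding from a learned PSD kernel matrix: top scaled eigenvectors."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                   # double-center the kernel
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]    # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))
```

With a linear kernel K = X X^T this reduces to ordinary PCA; the point of the paper is that a learned K can "unfold" a curved manifold before this step.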