Results 1 - 10 of 3,887
Why does unsupervised pre-training help deep learning?
2010
"... Much recent research has been devoted to learning algorithms for deep architectures such as Deep Belief Networks and stacks of autoencoder variants with impressive results being obtained in several areas, mostly on vision and language datasets. The best results obtained on supervised learning tasks often involve an unsupervised learning component, usually in an unsupervised pre-training phase. The main question investigated here is the following: why does unsupervised pre-training work so well? Through extensive experimentation, we explore several possible explanations discussed in the literature ..."
Cited by 155 (20 self)
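The abstract above refers to the two-phase regime of unsupervised pre-training followed by supervised fine-tuning. As a rough, non-authoritative illustration of that regime (not this paper's experimental setup), a minimal greedy layer-wise sketch in PyTorch might look like the following; the layer sizes, the sigmoid/MSE autoencoder criterion, and the random stand-in data are all assumptions made purely for illustration:

```python
# Minimal sketch: greedy layer-wise unsupervised pre-training, then supervised
# fine-tuning. All sizes, criteria, and data below are illustrative placeholders.
import torch
import torch.nn as nn

def pretrain_layer(encoder, data, epochs=10, lr=1e-3):
    """Train one encoder layer as an autoencoder on unlabeled data."""
    decoder = nn.Linear(encoder.out_features, encoder.in_features)
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=lr)
    for _ in range(epochs):
        recon = decoder(torch.sigmoid(encoder(data)))
        loss = nn.functional.mse_loss(recon, data)
        opt.zero_grad(); loss.backward(); opt.step()
    return encoder

# Greedily pre-train a stack of encoders on unlabeled inputs.
X = torch.rand(1000, 784)                      # stand-in for unlabeled data
sizes = [784, 256, 64]
encoders, h = [], X
for d_in, d_out in zip(sizes[:-1], sizes[1:]):
    enc = pretrain_layer(nn.Linear(d_in, d_out), h)
    encoders.append(enc)
    h = torch.sigmoid(enc(h)).detach()         # features feed the next layer

# Supervised fine-tuning: stack the pre-trained encoders and add a classifier head.
y = torch.randint(0, 10, (1000,))              # stand-in labels
model = nn.Sequential(*[nn.Sequential(e, nn.Sigmoid()) for e in encoders],
                      nn.Linear(sizes[-1], 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(10):
    loss = nn.functional.cross_entropy(model(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
```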
Supervised and unsupervised discretization of continuous features
In A. Prieditis & S. Russell, eds, Machine Learning: Proceedings of the Twelfth International Conference, 1995
"... Many supervised machine learning algorithms require a discrete feature space. In this paper, we review previous work on continuous feature discretization, identify defining characteristics of the methods, and conduct an empirical evaluation of several methods. We compare binning, an unsupervised discretization ..."
Cited by 540 (11 self)
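As a concrete illustration of the unsupervised binning baseline the excerpt mentions, a minimal equal-width discretizer could be sketched as follows (NumPy assumed; the bin count and data are placeholders, not choices from the paper):

```python
# Equal-width binning: a minimal unsupervised discretization baseline of the kind
# compared in this paper. The bin count is an arbitrary illustrative choice.
import numpy as np

def equal_width_bins(values, n_bins=10):
    """Map continuous values to integer bin indices in 0..n_bins-1."""
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    # Interior edges give indices 0..n_bins-1; the maximum value falls in the last bin.
    return np.digitize(values, edges[1:-1])

x = np.random.randn(1000)          # one continuous feature
x_discrete = equal_width_bins(x)   # integer codes usable by learners that need discrete features
```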
A Bayesian hierarchical model for learning natural scene categories
In CVPR, 2005
"... We propose a novel approach to learn and recognize natural scene categories. Unlike previous work [9, 17], it does not require experts to annotate the training set. We represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning. Each region ..."
Cited by 948 (15 self)
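The excerpt describes representing an image as a collection of codewords obtained by unsupervised learning. A simplified bag-of-codewords sketch in that spirit, assuming scikit-learn k-means over raw gray-level patches (the paper's actual region detection and hierarchical model are not reproduced here), might be:

```python
# Rough sketch of "codewords by unsupervised learning": cluster local patch
# descriptors with k-means and describe each image as a histogram over the
# resulting codewords. Patch extraction and vocabulary size are simplified assumptions.
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image, size=8, stride=8):
    """Flatten non-overlapping gray-level patches into descriptor vectors."""
    h, w = image.shape
    return np.array([image[i:i+size, j:j+size].ravel()
                     for i in range(0, h - size + 1, stride)
                     for j in range(0, w - size + 1, stride)])

images = [np.random.rand(64, 64) for _ in range(20)]         # stand-in images
all_patches = np.vstack([extract_patches(im) for im in images])

codebook = KMeans(n_clusters=50, n_init=10).fit(all_patches)  # unsupervised codewords

def bag_of_codewords(image):
    """Histogram of codeword assignments for one image."""
    labels = codebook.predict(extract_patches(image))
    return np.bincount(labels, minlength=codebook.n_clusters)

hist = bag_of_codewords(images[0])   # feature vector for a downstream classifier
```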
The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training
"... Whereas theoretical work suggests that deep architectures might be more efficient at representing highly-varying functions, training deep architectures was unsuccessful until the recent advent of algorithms based on unsupervised pretraining. Even though these new algorithms have enabled training de ..."
Abstract
- Add to MetaCart
Whereas theoretical work suggests that deep architectures might be more efficient at representing highly-varying functions, training deep architectures was unsuccessful until the recent advent of algorithms based on unsupervised pretraining. Even though these new algorithms have enabled training
Unsupervised learning of models for recognition
In ECCV, 2000
"... We present a method to learn object class models from unlabeled and unsegmented cluttered scenes for the purpose of visual object recognition. We focus on a particular type of model where objects are represented as flexible constellations of rigid parts (features). The variability within a class is represented by a joint probability density function (pdf) on the shape of the constellation and the output of part detectors. In a first stage, the method automatically identifies distinctive parts in the training set by applying a clustering algorithm to patterns selected by an interest ..."
Cited by 356 (30 self)
Extracting product features and opinions from reviews
2005
"... Consumers are often forced to wade through many on-line reviews in order to make an informed product choice. This paper introduces OPINE, an unsupervised information-extraction system which mines reviews in order to build a model of important product features, their evaluation by reviewers, and their ..."
Cited by 401 (4 self)
Unsupervised Segmentation of Color-Texture Regions in Images and Video
2001
"... A new method for unsupervised segmentation of color-texture regions in images and video is presented. This method, which we refer to as JSEG, consists of two independent steps: color quantization and spatial segmentation. In the first step, colors in the image are quantized to several representative ..."
Cited by 318 (3 self)
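To illustrate the first of the two JSEG steps named in the excerpt, the sketch below quantizes an image to a few representative colors and produces the per-pixel class map that would feed spatial segmentation. JSEG's own quantization algorithm differs; k-means is used here only as a stand-in:

```python
# Illustrative stand-in for the first JSEG step (color quantization): reduce an
# image to a few representative colors plus a per-pixel class map. JSEG's actual
# quantization algorithm is different; k-means only demonstrates the idea.
import numpy as np
from sklearn.cluster import KMeans

def quantize_colors(image_rgb, n_colors=8):
    """Return a per-pixel class map and the representative colors."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=10).fit(pixels)
    class_map = km.labels_.reshape(h, w)        # input to the spatial segmentation step
    return class_map, km.cluster_centers_

image = np.random.randint(0, 256, (120, 160, 3))   # stand-in RGB image
class_map, palette = quantize_colors(image)
```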
Unsupervised Learning of Image Manifolds by Semidefinite Programming
2004
"... Can we detect low dimensional structure in high dimensional data sets of images and video? The problem of dimensionality reduction arises often in computer vision and pattern recognition. In this paper, we propose a new solution to this problem based on semidefinite programming. Our algorithm can be used to analyze high dimensional data that lies on or near a low dimensional manifold. It overcomes certain limitations of previous work in manifold learning, such as Isomap and locally linear embedding. We illustrate the algorithm on easily visualized examples of curves and surfaces, as well ..."
Cited by 270 (9 self)
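The abstract proposes a semidefinite program for manifold learning. A minimal sketch of the kind of SDP used in this line of work (maximum variance unfolding: preserve local distances, maximize the variance of the embedding) is shown below, assuming cvxpy and scikit-learn; the neighborhood size and toy data are illustrative, and the paper's full algorithm includes further machinery:

```python
# Sketch of a maximum-variance-unfolding-style SDP: maximize the variance of an
# embedding whose Gram matrix preserves local pairwise distances. Toy data and
# neighborhood size are illustrative assumptions.
import numpy as np
import cvxpy as cp
from sklearn.neighbors import kneighbors_graph

X = np.random.rand(30, 5)                               # stand-in high-dimensional points
n = X.shape[0]
neighbors = kneighbors_graph(X, n_neighbors=4).toarray()

K = cp.Variable((n, n), PSD=True)                       # Gram matrix of the embedding
constraints = [cp.sum(K) == 0]                          # center the embedding
for i in range(n):
    for j in range(n):
        if neighbors[i, j]:
            d2 = np.sum((X[i] - X[j]) ** 2)
            constraints.append(K[i, i] - 2 * K[i, j] + K[j, j] == d2)

prob = cp.Problem(cp.Maximize(cp.trace(K)), constraints)
prob.solve()

# Low-dimensional coordinates come from the top eigenvectors of the learned Gram matrix.
vals, vecs = np.linalg.eigh(K.value)
embedding_2d = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))
```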
A HMM-based Pre-training Approach for Sequential Data
"... Abstract. Much recent research highlighted the critical role of unsuper-vised pre-training to improve the performance of neural network models. However, extensions of those architectures to the temporal domain intro-duce additional issues, which often prevent to obtain good performance in a reasonab ..."
Abstract
- Add to MetaCart
Abstract. Much recent research highlighted the critical role of unsuper-vised pre-training to improve the performance of neural network models. However, extensions of those architectures to the temporal domain intro-duce additional issues, which often prevent to obtain good performance in a
Transfer learning by supervised pre-training for audio-based music classification
"... Very few large-scale music research datasets are publicly available. There is an increasing need for such datasets, be-cause the shift from physical to digital distribution in the music industry has given the listener access to a large body of music, which needs to be cataloged efficiently and be ea ..."
Abstract
- Add to MetaCart
, the Million Song Dataset (MSD), for classifica-tion tasks on other datasets, by reusing models trained on the MSD for feature extraction. This transfer learning ap-proach, which we refer to as supervised pre-training, was previously shown to be very effective for computer vision problems. We show
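The excerpt describes supervised pre-training as transfer learning: models trained on a large source dataset are reused as feature extractors for other tasks. A hedged sketch of that general pattern, with a placeholder PyTorch network and scikit-learn classifier standing in for the paper's MSD-trained models, could look like:

```python
# Sketch of supervised pre-training as transfer learning: reuse a network trained
# on a large labeled source task as a fixed feature extractor for a smaller target
# task. Architecture and data are placeholders, not the paper's MSD setup.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

source_model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(),
                             nn.Linear(256, 64), nn.ReLU(),
                             nn.Linear(64, 500))          # head for the source task's classes
# ... assume source_model has been trained on the large source dataset ...

feature_extractor = nn.Sequential(*list(source_model.children())[:-1])  # drop the head

X_target = torch.rand(200, 128)                  # stand-in target-task inputs
y_target = torch.randint(0, 10, (200,)).numpy()  # stand-in target labels
with torch.no_grad():
    feats = feature_extractor(X_target).numpy()  # transferred representations

clf = LogisticRegression(max_iter=1000).fit(feats, y_target)  # train the target classifier
```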