Results 1–10 of 35
Histograms Analysis for Image Retrieval
, 2001
"...  This paper analyzes the use of histograms of low level image features, such as color and luminance, as descriptors for image retrieval purposes. A novel denition of histogram capacity curve taking into account the density distribution of histograms in the corresponding spaces is proposed and used ..."
Abstract

Cited by 32 (0 self)
This paper analyzes the use of histograms of low-level image features, such as color and luminance, as descriptors for image retrieval purposes. A novel definition of histogram capacity curve taking into account the density distribution of histograms in the corresponding spaces is proposed and used to quantify the effectiveness of image descriptors and histogram dissimilarities in image retrieval applications. The results permit the design of scalable image retrieval systems which make optimal use of computational and storage resources. Keywords: image retrieval, histograms, density estimation, distribution comparison. 1. Introduction A currently active line of research and development in the Computer Vision community is the design and development of efficient tools for accessing multimedia material, such as video and still images, using their media-specific features. In particular, several research papers and tools have been presented for image retrieval based on low-level visual featu...
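As a minimal illustration of histogram-based retrieval, the sketch below builds normalized luminance histograms and compares them with an L1 dissimilarity. The bin count and the L1 measure are illustrative choices only, not the paper's capacity-curve construction.

```python
import numpy as np

def luminance_histogram(pixels, bins=32):
    """Quantize luminance values in [0, 255] into a normalized histogram."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0, 255))
    return hist / hist.sum()

def l1_dissimilarity(h1, h2):
    """L1 (city-block) distance, one common histogram dissimilarity."""
    return np.abs(h1 - h2).sum()

# Two toy "images": uniform mid-grey vs. high-contrast black/white.
img_a = np.full(1000, 128)
img_b = np.concatenate([np.zeros(500), np.full(500, 255)])

h_a = luminance_histogram(img_a)
h_b = luminance_histogram(img_b)
print(l1_dissimilarity(h_a, h_b))  # 2.0: the histograms have disjoint support
```

For normalized histograms the L1 distance ranges from 0 (identical) to 2 (disjoint support), which is why the toy pair above attains the maximum.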
A Model of Facial Behaviour
 In IEEE International Conference on Automatic Face and Gesture Recognition, Seoul, Korea, May 17
, 2002
"... We consider the problem of learning how a person’s face behaves in a long video sequence, with the aim of synthesising convincing sequences demonstrating the same behaviours. We describe a novel approach to segment a sequence into short sections, each representing a distinct action (or a part of an ..."
Abstract

Cited by 29 (1 self)
We consider the problem of learning how a person’s face behaves in a long video sequence, with the aim of synthesising convincing sequences demonstrating the same behaviours. We describe a novel approach to segment a sequence into short sections, each representing a distinct action (or a part of an action). These sections are grouped and a model of the variability of each action is learnt. A variable-length Markov model is trained on the sequence of such actions to learn the temporal relationships. The result is a system that can generate realistic sequences of an individual face.
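A hedged sketch of the temporal-modelling step: a plain first-order Markov chain over discrete action labels stands in for the variable-length Markov model described above. The action alphabet and training sequence here are invented for illustration.

```python
import numpy as np
from collections import defaultdict

def fit_transitions(seq):
    """Estimate first-order transition probabilities from a label sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def sample(trans, start, n, rng):
    """Generate a new action sequence by walking the learnt chain."""
    out = [start]
    for _ in range(n - 1):
        labels, probs = zip(*trans[out[-1]].items())
        out.append(rng.choice(labels, p=probs))
    return out

# Made-up action labels, e.g. A = talk, B = blink, S = smile, T = head turn.
actions = list("AABBSSAABBTTAABB")
trans = fit_transitions(actions)
rng = np.random.default_rng(0)
print(sample(trans, "A", 10, rng))
```

A variable-length model would condition each transition on a context of varying depth rather than only the previous label; the first-order chain keeps the sketch short.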
O-Cluster: scalable clustering of large high-dimensional data sets
 In Proceedings of the IEEE International Conference on Data Mining
, 2002
"... Clustering large data sets of high dimensionality has always been a serious challenge for clustering algorithms. Many recently developed clustering algorithms have attempted to address either handling data sets with very large number of records or data sets with very high number of dimensions. This ..."
Abstract

Cited by 16 (2 self)
Clustering large data sets of high dimensionality has always been a serious challenge for clustering algorithms. Many recently developed clustering algorithms have attempted to address either handling data sets with a very large number of records or data sets with a very high number of dimensions. This paper provides a discussion of the advantages and limitations of existing algorithms when they operate on very large multidimensional data sets. To simultaneously overcome both the “curse of dimensionality” and the scalability problems associated with large amounts of data, we propose a new clustering algorithm called O-Cluster. This new clustering method combines a novel active sampling technique with an axis-parallel partitioning strategy to identify continuous areas of high density in the input space. The method operates on a limited memory buffer and requires at most a single scan through the data. We demonstrate the high quality of the obtained clustering solutions, their robustness to noise, and O-Cluster’s excellent scalability.
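A toy sketch of the axis-parallel idea: recursively cut the data wherever a one-dimensional histogram shows an empty valley between populated regions. This omits O-Cluster's active sampling, memory buffer, and statistical validation of cutting planes; it only illustrates the partitioning strategy.

```python
import numpy as np

def find_valley_split(points, bins=10):
    """Return (axis, cut) at the first empty interior histogram bin
    with populated bins on both sides, or None if no valley exists."""
    for axis in range(points.shape[1]):
        hist, edges = np.histogram(points[:, axis], bins=bins)
        for i in range(1, bins - 1):
            if hist[i] == 0 and hist[:i].sum() > 0 and hist[i + 1:].sum() > 0:
                return axis, (edges[i] + edges[i + 1]) / 2
    return None

def partition(points, depth=0, max_depth=3):
    """Top-down axis-parallel partitioning into dense regions."""
    split = find_valley_split(points)
    if split is None or depth >= max_depth:
        return [points]
    axis, cut = split
    left = points[points[:, axis] <= cut]
    right = points[points[:, axis] > cut]
    return (partition(left, depth + 1, max_depth)
            + partition(right, depth + 1, max_depth))

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (200, 2)),   # dense blob near (0, 0)
                  rng.normal(5, 0.3, (200, 2))])  # dense blob near (5, 5)
clusters = partition(data)
print(len(clusters))  # at least 2: the valley between the blobs is found
```

Because a cut is only placed in an empty bin with mass on both sides, each split leaves both halves non-empty, so the recursion is well behaved.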
Bin Width Selection in Multivariate Histograms By the Combinatorial Method
"... We present several multivariate histogram density estimates that are universally L 1 optimal to within a constant factor and an additive term O( log n=n). The bin widths are chosen by the combinatorial method developed by the authors in Combinatorial Methods in Density Estimation (SpringerVerla ..."
Abstract

Cited by 7 (0 self)
We present several multivariate histogram density estimates that are universally L1 optimal to within a constant factor and an additive term O(log n/n). The bin widths are chosen by the combinatorial method developed by the authors in Combinatorial Methods in Density Estimation (Springer-Verlag, 2001). The present paper solves a problem left open in that book.
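A plausible reading of the stated guarantee, with f the target density, f_n the selected histogram estimate, C an unspecified universal constant, and F the class of candidate histograms (the notation is assumed, not taken from the paper):

```latex
\mathbb{E}\!\int |f_n - f|
  \;\le\; C \,\inf_{g \in \mathcal{F}} \mathbb{E}\!\int |g - f|
  \;+\; O\!\left(\frac{\log n}{n}\right)
```

That is, the combinatorial bin-width choice is within a constant factor of the best histogram in the class in expected L1 error, up to a vanishing additive term.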
A comparison of automatic histogram constructions
, 2008
"... Abstract. Even for a welltrained statistician the construction of a histogram for a given realvalued data set is a difficult problem. It is even more difficult to construct a fully automatic procedure which specifies the number and widths of the bins in a satisfactory manner for a wide range of da ..."
Abstract

Cited by 6 (2 self)
Even for a well-trained statistician the construction of a histogram for a given real-valued data set is a difficult problem. It is even more difficult to construct a fully automatic procedure which specifies the number and widths of the bins in a satisfactory manner for a wide range of data sets. In this paper we compare several histogram construction procedures by means of a simulation study. The study includes plug-in methods, cross-validation, penalized maximum likelihood and the taut string procedure. Their performance on different test beds is measured by their ability to identify the peaks of an underlying density as well as by Hellinger distance.
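A small taste of why such comparisons matter: NumPy ships several classical plug-in rules, and they already disagree on the same data. These three rules (Sturges, Scott, Freedman–Diaconis) are this sketch's choices; the paper's candidate set also includes cross-validation, penalized likelihood, and the taut string method, which NumPy does not provide.

```python
import numpy as np

rng = np.random.default_rng(42)
# Bimodal test bed: a mixture of two normals, a standard case where
# automatic rules can over- or under-smooth.
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])

for rule in ["sturges", "scott", "fd"]:
    edges = np.histogram_bin_edges(data, bins=rule)
    print(f"{rule:8s} -> {len(edges) - 1} bins")
```

Whether any of these bin counts resolves both modes without spurious peaks is exactly the kind of question the simulation study addresses.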
Clustering Large Databases with Numeric and Nominal Values Using Orthogonal Projections
"... Clustering large highdimensional databases has emerged as a challenging research area. A number of recently developed clustering algorithms have focused on overcoming either the “curse of dimensionality ” or the scalability problems associated with large amounts of data. The majority of these algor ..."
Abstract

Cited by 4 (0 self)
Clustering large high-dimensional databases has emerged as a challenging research area. A number of recently developed clustering algorithms have focused on overcoming either the “curse of dimensionality” or the scalability problems associated with large amounts of data. The majority of these algorithms operate only on numeric data, a few handle nominal data, and very few can deal with both numeric and nominal values. Orthogonal partitioning Clustering (O-Cluster) was originally introduced as a fast, scalable solution for large multidimensional databases with numeric values. Here, we extend O-Cluster to domains with nominal and mixed values. O-Cluster uses a top-down partitioning strategy based on orthogonal projections to identify areas of high density in the input data space. The algorithm employs an active sampling mechanism and requires at most a single scan through the data. We demonstrate the high quality of the obtained clustering solutions, their explanatory power, and O-Cluster’s good scalability.
Constructing a regular histogram – a comparison of methods
, 2007
"... Even for a welltrained statistician the construction of a histogram for a given realvalued data set is a difficult problem. It is even more difficult to construct a fully automatic procedure which specifies the number and widths of the bins in a satisfactory manner for a wide range of data sets. I ..."
Abstract

Cited by 2 (0 self)
Even for a well-trained statistician the construction of a histogram for a given real-valued data set is a difficult problem. It is even more difficult to construct a fully automatic procedure which specifies the number and widths of the bins in a satisfactory manner for a wide range of data sets. In this paper we compare several histogram construction methods by means of a simulation study. The study includes plug-in methods, cross-validation, penalized maximum likelihood and the taut string procedure. Their performance on different test beds is measured by the Hellinger distance and the ability to identify the modes of the underlying density.
Variations on the histogram
, 2007
"... It is usual to choose to make the bins in a histogram all have the same width. One could also choose to make them all have the same area. These two options have complementary strengths and weaknesses; the equalwidth histogram oversmooths in regions of high density, and is poor at identifying sharp ..."
Abstract

Cited by 1 (0 self)
It is usual to choose to make the bins in a histogram all have the same width. One could also choose to make them all have the same area. These two options have complementary strengths and weaknesses; the equal-width histogram oversmooths in regions of high density, and is poor at identifying sharp peaks; the equal-area histogram oversmooths in regions of low density, and so does not identify outliers. We describe a compromise approach which avoids both of these defects. We argue that relying on asymptotics of the Integrated Mean Square Error leads to inappropriate recommendations.
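The two options are easy to contrast directly. The sketch below builds both kinds of bin edges for a standard normal sample; the quantile construction of the equal-area histogram is one natural implementation choice, and the bin count is arbitrary.

```python
import numpy as np

def equal_width_edges(x, k):
    """k bins of equal width spanning the data range."""
    return np.linspace(x.min(), x.max(), k + 1)

def equal_area_edges(x, k):
    """k bins with (roughly) equal probability mass: quantile-based edges."""
    return np.quantile(x, np.linspace(0, 1, k + 1))

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)

counts_w, _ = np.histogram(x, bins=equal_width_edges(x, 10))
counts_a, _ = np.histogram(x, bins=equal_area_edges(x, 10))
print(counts_w)  # very uneven: tall centre bins, near-empty tail bins
print(counts_a)  # ~1000 points per bin by construction
```

The near-empty tail bins of the equal-width histogram and the flat counts of the equal-area histogram show the complementary defects the abstract describes: the former wastes resolution in the tails, the latter cannot flag outliers.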
Scene Image is Non-Mutually Exclusive – A Fuzzy Qualitative Scene Understanding
"... Abstract—Ambiguity or uncertainty is a pervasive element of many real world decision making processes. Variation in decisions is a norm in this situation when the same problem is posed to different subjects. Psychological and metaphysical research had proven that decision making by human is subjecti ..."
Abstract

Cited by 1 (1 self)
Abstract—Ambiguity or uncertainty is a pervasive element of many real-world decision making processes. Variation in decisions is the norm in such situations when the same problem is posed to different subjects. Psychological and metaphysical research has shown that human decision making is subjective. It is influenced by many factors such as experience, age, background, etc. Scene understanding is one of the computer vision problems that fall into this category. Conventional methods relax this problem by assuming scene images are mutually exclusive, and therefore focus on developing different approaches to perform binary classification tasks. In this paper, we show that scene images are non-mutually exclusive, and propose the Fuzzy Qualitative Rank Classifier (FQRC) to tackle the aforementioned problems. The proposed FQRC provides a ranking interpretation instead of a binary decision. Qualitative and quantitative evaluations on large and challenging public scene datasets have shown the effectiveness of our proposed method in modeling non-mutually exclusive scene images. Index Terms—Scene understanding, fuzzy qualitative reasoning, multi-label classification, computer vision, pattern recognition
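The contrast between a binary decision and a ranking interpretation can be shown in a few lines. The class names and membership values below are invented for illustration; the FQRC itself, which derives such memberships with fuzzy qualitative reasoning, is not reproduced here.

```python
import numpy as np

# Hypothetical fuzzy class memberships for one ambiguous scene image.
classes = ["coast", "open country", "mountain", "forest"]
membership = np.array([0.45, 0.40, 0.10, 0.05])  # assumed values

# Binary (mutually exclusive) decision: a single winner, discarding the
# nearly equal support for the runner-up.
print("binary  :", classes[int(np.argmax(membership))])

# Ranking interpretation: every plausible label, ordered by membership.
order = np.argsort(membership)[::-1]
ranking = [(classes[i], float(membership[i])) for i in order]
print("ranking :", ranking)
```

For an image whose top two memberships are 0.45 and 0.40, the ranking output makes the ambiguity visible, whereas the argmax decision silently discards it.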