Results 1–10 of 38
The Bayesian image retrieval system, PicHunter: Theory, implementation, and psychophysical experiments
IEEE Transactions on Image Processing, 2000
Cited by 222 (2 self)
This paper presents the theory, design principles, implementation, and performance results of PicHunter, a prototype content-based image retrieval (CBIR) system that has been developed over the past three years. In addition, this document presents the rationale, design, and results of psychophysical experiments that were conducted to address some key issues that arose during PicHunter’s development. The PicHunter project makes four primary contributions to research on content-based image retrieval. First, PicHunter represents a simple instance of a general Bayesian framework we describe for using relevance feedback to direct a search. With an explicit model of what users would do, given the target image they want, PicHunter uses Bayes’s rule to predict which target they want, given their actions. This is done via a probability distribution over possible image targets, rather than by refining a query. Second, an entropy-minimizing display algorithm is described that attempts to maximize the information obtained from a user at each iteration of the search. Third, PicHunter makes use of hidden annotation rather than a possibly inaccurate or inconsistent annotation structure that the user must learn and make queries in. Finally, PicHunter introduces two experimental paradigms to quantitatively evaluate the performance of the system, and psychophysical experiments are presented that support the theoretical claims.
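The Bayesian update at the heart of this approach can be sketched in a few lines. This is a minimal illustration, not PicHunter's actual code; the "pick proportional to similarity" user model below is a hypothetical stand-in for the paper's fitted model:

```python
def update_posterior(prior, displayed, chosen, similarity):
    """One round of Bayesian relevance feedback.

    prior: dict mapping candidate target -> probability.
    displayed: images shown this round; chosen: the user's pick.
    similarity(a, b): positive feature-space similarity (assumed model).
    """
    posterior = {}
    for target, p in prior.items():
        # Assumed user model: P(user picks `chosen` | this target) is
        # proportional to how similar `chosen` is to the target,
        # relative to everything else on the display.
        weights = sum(similarity(img, target) for img in displayed)
        posterior[target] = p * similarity(chosen, target) / weights
    z = sum(posterior.values())          # renormalize (Bayes' rule)
    return {t: p / z for t, p in posterior.items()}
```

After a few iterations the probability mass concentrates on images resembling the user's picks, with no explicit query ever being refined.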
Searching in the Plane
Information and Computation, 1991
Cited by 146 (0 self)
In this paper we initiate a new area of study dealing with the best way to search a possibly unbounded region for an object. The model for our search algorithms is that we must pay costs proportional to the distance of the next probe position relative to our current position. This model is meant to give a realistic cost measure for a robot moving in the plane. We also examine the effect of decreasing the amount of a priori information given to search problems. Problems of this type are very simple analogues of nontrivial problems on searching an unbounded region, processing digitized images, and robot navigation. We show that for some simple search problems, the relative information of knowing the general direction of the goal is much higher than knowing the distance to the goal.
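This cost model — pay the distance travelled to each probe — is well illustrated by the classic case of a goal at unknown distance on a line. The doubling strategy below is a minimal sketch in the spirit of the paper, not its exact algorithm; it reaches a goal at distance d at cost at most 9d:

```python
def doubling_search(goal, max_steps=64):
    """Reach integer `goal` (nonzero) on a line, starting at the origin
    and turning at positions +1, -2, +4, -8, ... (radius doubles each
    leg).  Returns the total distance travelled until the goal is hit."""
    pos, cost, radius, sign = 0, 0, 1, 1
    for _ in range(max_steps):
        turn = sign * radius
        # This leg runs from `pos` back through the origin out to `turn`;
        # it hits the goal iff the goal lies on that side, within radius.
        if (goal > 0) == (turn > 0) and abs(goal) <= abs(turn):
            return cost + abs(pos) + abs(goal)
        cost += abs(turn - pos)          # pay for the full leg
        pos, radius, sign = turn, 2 * radius, -sign
    raise RuntimeError("goal not reached within max_steps")
```

For a goal just past a turn point the travelled distance approaches 9 times the goal's distance, which is the doubling strategy's competitive ratio on the line.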
An Optimized Interaction Strategy for Bayesian Relevance Feedback
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR’98), 1998
Cited by 71 (1 self)
A new algorithm and systematic evaluation is presented for searching a database via relevance feedback. It represents a new image display strategy for the PicHunter system [2, 1]. The algorithm takes feedback in the form of relative judgments ("item A is more relevant than item B") as opposed to the stronger assumption of categorical relevance judgments ("item A is relevant but item B is not"). It also exploits a learned probabilistic model of human behavior to make better use of the feedback it obtains. The algorithm can be viewed as an extension of indexing schemes like the k-d tree to a stochastic setting, hence the name "stochastic-comparison search." In simulations, the amount of feedback required for the new algorithm scales like log₂ D, where D is the size of the database, while a simple query-by-example approach scales like D^a, where a < 1 depends on the structure of the database. This theoretical advantage is reflected by experiments with real users on a database of 1500 stock photographs.
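Relative judgments fold into the same Bayesian machinery as categorical feedback; the logistic choice model below is a hypothetical stand-in for the learned user model the paper describes:

```python
import math

def comparison_update(prior, a, b, similarity, noise=1.0):
    """Bayesian update for the relative judgment "a is more relevant
    than b".  Assumed user model (illustrative, not the paper's fitted
    one): P(prefer a over b | target t) is a logistic function of the
    similarity difference sim(a, t) - sim(b, t)."""
    post = {}
    for t, p in prior.items():
        gap = (similarity(a, t) - similarity(b, t)) / noise
        post[t] = p / (1.0 + math.exp(-gap))   # p * sigmoid(gap)
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}
```

Larger `noise` flattens the likelihood, modeling a less reliable user; as `noise` shrinks, each comparison approaches a deterministic half-space cut, recovering the k-d-tree-like behavior the abstract alludes to.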
Generalized binary search
In Proceedings of the 46th Allerton Conference on Communications, Control, and Computing, 2008
Cited by 58 (0 self)
This paper addresses the problem of noisy Generalized Binary Search (GBS). GBS is a well-known greedy algorithm for determining a binary-valued hypothesis through a sequence of strategically selected queries. At each step, a query is selected that most evenly splits the hypotheses under consideration into two disjoint subsets, a natural generalization of the idea underlying classic binary search. GBS is used in many applications, including fault testing, machine diagnostics, disease diagnosis, job scheduling, image processing, computer vision, and active learning. In most of these cases, the responses to queries can be noisy. Past work has provided a partial characterization of GBS, but existing noise-tolerant versions of GBS are suboptimal in terms of query complexity. This paper presents an optimal algorithm for noisy GBS and demonstrates its application to learning multidimensional threshold functions.
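The greedy splitting rule is easy to state concretely. Below is a noise-free sketch (threshold functions are just an illustrative hypothesis class, and the code assumes some query separates any two surviving hypotheses):

```python
def gbs(hypotheses, queries, evaluate, oracle):
    """Noise-free Generalized Binary Search.

    Repeatedly ask the query that most evenly splits the surviving
    hypotheses, then discard those inconsistent with the answer.
    evaluate(h, q): h's predicted answer to q (True/False).
    oracle(q): the true answer.
    """
    alive = set(hypotheses)
    while len(alive) > 1:
        # Choose the query whose True/False split over `alive` is
        # closest to half-and-half (the greedy GBS criterion).
        q = min(queries, key=lambda q: abs(
            sum(evaluate(h, q) for h in alive) - len(alive) / 2))
        ans = oracle(q)
        alive = {h for h in alive if evaluate(h, q) == ans}
    return alive.pop()
```

On the threshold-function example in the test, every query splits the surviving set exactly in half, so the number of queries matches classic binary search: log₂ of the number of hypotheses.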
Active learning using arbitrary binary valued queries
Machine Learning, 1993
Cited by 33 (1 self)
The original and most widely studied PAC model for learning assumes a passive learner in the sense that the learner plays no role in obtaining information about the unknown concept. That is, the samples are simply drawn independently from some probability distribution. Some work has been done on studying more powerful oracles and how they affect learnability. To find bounds on the improvement that can be expected from using oracles, we consider active learning in the sense that the learner has complete choice in the information received. Specifically, we allow the learner to ask arbitrary yes/no questions. We consider both active learning under a fixed distribution and distribution-free active learning. In the case of active learning, the underlying probability distribution is used only to measure distance between concepts. For learnability with respect to a fixed distribution, active learning does not enlarge the set of learnable concept classes, but can improve the sample complexity. For distribution-free learning, it is shown that a concept class is actively learnable iff it is finite, so that active learning is in fact less powerful than the usual passive learning model. We also consider a form of distribution-free learning in which the learner knows the distribution being used, so that 'distribution-free' refers only to the requirement that a bound on the number of queries can be obtained uniformly over all distributions. Even with the side information of the distribution being used, a concept class is actively learnable iff it has finite VC dimension, so that active learning with the side information still does not enlarge the set of learnable concept classes.
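The power of arbitrary yes/no questions over a finite class comes from plain bisection: since any subset can be asked about, ⌈log₂ N⌉ queries always identify the target among N concepts. A sketch, with a hypothetical membership oracle `is_target`:

```python
def identify(concepts, is_target):
    """Identify the target among finitely many concepts using arbitrary
    yes/no questions of the form "is the target in this subset?".
    is_target(subset) is a hypothetical oracle for illustration.
    Returns (target, number_of_queries)."""
    alive = list(concepts)
    queries = 0
    while len(alive) > 1:
        half = alive[: len(alive) // 2]   # bisect the surviving set
        queries += 1
        alive = half if is_target(set(half)) else alive[len(alive) // 2:]
    return alive[0], queries
```

This is exactly why fixed-distribution active learning cannot enlarge the set of learnable classes but can cut sample complexity: the questions only speed up the narrowing, they add no new information source.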
Sorting and Searching in the Presence of Memory Faults (without Redundancy)
In Proc. 36th ACM Symposium on Theory of Computing (STOC’04), 2004
Cited by 21 (4 self)
We investigate the design of algorithms resilient to memory faults, i.e., algorithms that, despite the corruption of some memory values during their execution, are able to produce a correct output on the set of uncorrupted values. In this framework, we consider two fundamental problems: sorting and searching. In particular, we prove that any O(n log n) comparison-based sorting algorithm can tolerate at most O(√(n log n)) memory faults. Furthermore, we present one comparison-based sorting algorithm with optimal space and running time that is resilient to O((n log n)^(1/3)) faults. We also prove polylogarithmic lower and upper bounds on fault-tolerant searching.
Optimal resilient sorting and searching in the presence of memory faults
In Proc. 33rd International Colloquium on Automata, Languages and Programming, volume 4051 of Lecture Notes in Computer Science, 2006
Cited by 17 (5 self)
We investigate the problem of reliable computation in the presence of faults that may arbitrarily corrupt memory locations. In this framework, we consider the problems of sorting and searching in optimal time while tolerating the largest possible number of memory faults. In particular, we design an O(n log n) time sorting algorithm that can optimally tolerate up to O(√(n log n)) memory faults. In the special case of integer sorting, we present an algorithm with linear expected running time that can tolerate O(√n) faults. We also present a randomized searching algorithm that can optimally tolerate up to O(log n) memory faults in O(log n) expected time, and an almost optimal deterministic searching algorithm that can tolerate O((log n)^(1−ε)) faults, for any small positive constant ε, in O(log n) worst-case time. All these results improve over previous bounds.
Resilient search trees
In Proceedings of the 18th ACM-SIAM SODA, 2007
Cited by 15 (5 self)
We investigate the problem of computing in a reliable fashion in the presence of faults that may arbitrarily corrupt memory locations. In this framework, we focus on the design of resilient data structures, i.e., data structures that, despite the corruption of some memory values during their lifetime, are nevertheless able to operate correctly (at least) on the set of uncorrupted values. In particular, we present resilient search trees which achieve optimal time and space bounds while tolerating up to O(√(log n)) memory faults, where n is the current number of items in the search tree. In more detail, our resilient search trees are able to insert, delete and search for a key in O(log n + δ²) amortized time, where δ is an upper bound on the total number of faults. The space required is O(n + δ).
Effective Search Problems
Mathematical Logic Quarterly, 1994
Cited by 12 (5 self)
The task of computing a function F with the help of an oracle X can be viewed as a search problem where the cost measure is the number of queries to X. We ask for the minimal number that can be achieved by a suitable choice of X and call this quantity the query complexity of F. This concept is suggested by earlier work of Beigel, Gasarch, Gill, and Owings on "Bounded query classes". We introduce a fault-tolerant version and relate it to Ulam's game. For many natural classes of functions F we obtain tight upper and lower bounds on the query complexity of F. Previous results like the Nonspeedup Theorem and the Cardinality Theorem appear in a wider perspective.
1991 Mathematics Subject Classification: Primary 03D20; Secondary 68Q15, 68R05
Keywords: Search problems, bounded queries, query complexity, recursive functions
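The fault-tolerant setting connects to Ulam's game: if at most e answers may be lies, one standard (query-inefficient) remedy is to repeat each question 2e+1 times and take the majority, which the adversary cannot flip. A sketch under that assumption, not the paper's construction:

```python
def majority_query(ask, e):
    """Get a reliable yes/no answer when at most `e` of the individual
    answers can be lies: ask 2e+1 times, take the majority."""
    votes = sum(ask() for _ in range(2 * e + 1))
    return votes > e

def search_with_lies(n, target, e):
    """Binary search over [0, n) against an adversary allowed at most
    `e` false answers in total, using majority_query at each step.
    (`target` stands in for the hidden value the oracle knows.)"""
    lies_left = [e]
    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        def ask():
            truth = target < mid
            if lies_left[0] > 0:        # adversary lies while it can
                lies_left[0] -= 1
                return not truth
            return truth
        if majority_query(ask, e):
            hi = mid
        else:
            lo = mid
    return lo
```

This costs a factor of 2e+1 in queries; the point of the Ulam-game literature, and of the fault-tolerant query complexity studied here, is that far fewer queries suffice with cleverer strategies.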