Results 1 - 10 of 51
Saliency Detection via Graph-Based Manifold Ranking
Cited by 46 (3 self)
Most existing bottom-up methods measure the foreground saliency of a pixel or region based on its contrast within a local context or the entire image, whereas a few methods focus on segmenting out background regions and thereby salient objects. Instead of considering the contrast between the salient objects and their surrounding regions, we consider both foreground and background cues in a different way. We rank the similarity of the image elements (pixels or regions) with foreground or background cues via graph-based manifold ranking. The saliency of the image elements is defined based on their relevance to the given seeds or queries. We represent the image as a closed-loop graph with superpixels as nodes. These nodes are ranked based on their similarity to background and foreground queries, using affinity matrices. Saliency detection is carried out in a two-stage scheme to extract background regions and foreground salient objects efficiently. Experimental results on two large benchmark databases demonstrate that the proposed method performs well against the state-of-the-art methods in terms of accuracy and speed. We also create a more difficult benchmark database containing 5,172 images to test the proposed saliency model and make this database publicly available with this paper for further studies in the saliency field.
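The ranking step this abstract describes has a simple closed form. Below is a minimal sketch on a toy graph, assuming the common unnormalised manifold-ranking formulation f* = (D - alpha*W)^(-1) y; the function and variable names are ours, not the authors'.

```python
import numpy as np

def manifold_rank(W, y, alpha=0.99):
    """Rank all graph nodes by relevance to the query nodes marked in y.

    Solves f* = (D - alpha * W)^{-1} y, where W is the node affinity
    matrix and D its degree matrix. Higher f* = more similar to the query.
    """
    D = np.diag(W.sum(axis=1))
    return np.linalg.solve(D - alpha * W, y)

# Tiny 4-node chain graph; node 0 is the query (e.g. a boundary superpixel).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = np.array([1.0, 0.0, 0.0, 0.0])   # indicator vector of query nodes
f = manifold_rank(W, y)              # relevance decays with graph distance
```

On the chain, the ranking score decays monotonically with distance from the query node, which is exactly the behaviour the two-stage scheme exploits when ranking against background and then foreground queries.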
What Makes a Patch Distinct?
Cited by 29 (1 self)
[Figure caption fragment: "... only color distinctness, hence, erroneously detect the red surface as salient. (c) [11] rely on shape priors and thus detect only the beard and arm. (d) [9] search for unique patches, hence detect mostly the outline of the statue. (e) [5] add an objectness measure to [9]. Their result is fuzzy due to the objects in the background (tree and clouds). (f) Our algorithm accurately detects the entire statue, excluding all background pixels, by considering both color and pattern distinctness."]

What makes an object salient? Most previous work asserts that distinctness is the dominating factor. The difference between the various algorithms is in the way they compute distinctness. Some focus on the patterns, others on the colors, and several add high-level cues and priors. We propose a simple, yet powerful, algorithm that integrates these three factors. Our key contribution is a novel and fast approach to compute pattern distinctness. We rely on the inner statistics of the patches in the image to identify unique patterns. We provide an extensive evaluation and show that our approach outperforms all state-of-the-art methods on the five most commonly-used datasets.
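The "inner statistics of the patches" idea can be sketched as scoring each patch by its L1 distance from the image's average patch, measured in the image's own PCA coordinate system. This is an illustrative approximation of pattern distinctness, not the authors' exact pipeline; all names below are ours.

```python
import numpy as np

def pattern_distinctness(patches):
    """Score each flattened patch by its L1 distance from the mean patch,
    measured along the principal components of the patch set itself."""
    X = patches - patches.mean(axis=0)                # centre on the average patch
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # principal components
    return np.abs(X @ Vt.T).sum(axis=1)               # L1 norm in PCA coordinates

rng = np.random.default_rng(0)
patches = rng.normal(0.0, 0.01, size=(100, 64))  # 100 near-average 8x8 patches
patches[0] += 1.0                                # one patch with a distinct pattern
scores = pattern_distinctness(patches)           # patch 0 scores highest
```

The intuition matches the abstract: patches that resemble the bulk of the image (the "inner statistics") score low, while a patch far from the average pattern scores high.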
Salient object detection: A discriminative regional feature integration approach
In CVPR, 2013
Cited by 27 (4 self)
Salient object detection has been attracting a lot of interest, and recently various heuristic computational models have been designed. In this paper, we regard saliency map computation as a regression problem. Our method, which is based on multi-level image segmentation, uses a supervised learning approach to map the regional feature vector to a saliency score, and finally fuses the saliency scores across multiple levels, yielding the saliency map. The contributions are two-fold. First, we show that our approach, which integrates the regional contrast, regional property, and regional backgroundness descriptors to form the master saliency map, produces superior saliency maps to existing algorithms, most of which heuristically combine saliency maps computed from different types of features. Second, we introduce a new regional feature vector, backgroundness, to characterize the background, which can be regarded as a counterpart of the objectness descriptor [2]. Performance evaluation on several popular benchmark data sets validates that our approach outperforms existing state-of-the-art methods.
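The regression view can be illustrated with the simplest possible regressor, linear least squares, mapping a per-region feature vector to a saliency score. The paper learns a more powerful model; the features and weights below are invented purely for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_feats = 200, 5
# Hypothetical regional descriptors (e.g. contrast, backgroundness, ...).
X = rng.normal(size=(n_regions, n_feats))
w_true = np.array([0.8, -0.5, 0.3, 0.0, 0.1])   # invented "ground truth" weights
y = X @ w_true                                   # saliency scores of training regions

# Fit the linear regressor and predict a saliency score per region.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w
```

In the actual method, scores predicted per region at multiple segmentation levels would then be fused into the final saliency map.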
Binarized normed gradients for objectness estimation at 300fps
In IEEE CVPR, 2014
Cited by 25 (6 self)
Training a generic objectness measure to produce a small set of candidate object windows has been shown to speed up the classical sliding-window object detection paradigm. We observe that generic objects with a well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows into a small fixed size. Based on this observation and for computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g., ADD, BITWISE SHIFT). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high-quality object windows, yielding a 96.2% object detection rate (DR) with 1,000 proposals. By increasing the number of proposals and color spaces for computing BING features, performance can be further improved to 99.5% DR.
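The 8 × 8 normed-gradients feature is easy to prototype. The sketch below shrinks a window via block averaging (an assumption for simplicity; BING resizes the image properly) and takes a clamped |gx| + |gy| gradient magnitude per cell.

```python
import numpy as np

def ng_feature(window):
    """64-D normed-gradients feature: shrink the window to 8x8 (block
    averaging stands in for image resizing) and take the clamped gradient
    magnitude |gx| + |gy| at each of the 64 cells."""
    h, w = window.shape
    small = window[:h - h % 8, :w - w % 8] \
        .reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
    gx = np.diff(small, axis=1, append=small[:, -1:])  # horizontal gradient
    gy = np.diff(small, axis=0, append=small[-1:, :])  # vertical gradient
    return np.minimum(np.abs(gx) + np.abs(gy), 255).ravel()

# A window containing a vertical edge responds; a flat window does not.
edge = np.zeros((32, 32))
edge[:, 16:] = 100.0
feat = ng_feature(edge)
```

Binarizing this 64D descriptor is what lets the full method score windows with a handful of bitwise operations.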
Efficient Salient Region Detection with Soft Image Abstraction
Cited by 22 (4 self)
Detecting visually salient regions in images is one of the fundamental problems in computer vision. We propose a novel method to decompose an image into large-scale perceptually homogeneous elements for efficient salient region detection, using a soft image abstraction representation. By considering both the appearance similarity and spatial distribution of image pixels, the proposed representation abstracts out unnecessary image details, allowing the assignment of comparable saliency values across similar regions and producing perceptually accurate salient region detection. We evaluate our salient region detection approach on the largest publicly available dataset with pixel-accurate annotations. The experimental results show that the proposed method outperforms 18 alternative methods, reducing the mean absolute error by 25.2% compared to the previous best result, while being computationally more efficient.
Saliency detection via absorbing Markov chain
In IEEE International Conference on Computer Vision, 2013
Cited by 14 (1 self)
In this paper, we formulate saliency detection via an absorbing Markov chain on an image graph model. We jointly consider the appearance divergence and spatial distribution of salient objects and the background. Virtual boundary nodes are chosen as the absorbing nodes of the Markov chain, and the absorbed time from each transient node to the boundary absorbing nodes is computed. The absorbed time of a transient node measures its global similarity with all absorbing nodes, and thus salient objects can be consistently separated from the background when the absorbed time is used as a metric. Since the time from a transient node to the absorbing nodes depends on the weights along the path and their spatial distance, a background region at the center of the image may appear salient. We further exploit the equilibrium distribution of an ergodic Markov chain to reduce the absorbed time in long-range smooth background regions. Extensive experiments on four benchmark datasets demonstrate the robustness and efficiency of the proposed method against state-of-the-art methods.
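The absorbed time used here as a saliency metric comes from the standard fundamental-matrix identity for absorbing chains, t = (I - Q)^(-1) 1, where Q is the transient-to-transient block of the transition matrix. A toy sketch (the node layout and transition values are ours):

```python
import numpy as np

def absorbed_time(Q):
    """Expected number of steps to absorption from each transient node,
    given the transient-to-transient transition block Q."""
    n = Q.shape[0]
    return np.linalg.solve(np.eye(n) - Q, np.ones(n))

# Toy chain: transient nodes 0, 1, 2 in a line; only node 2 leaks
# (with probability 0.5) to the absorbing boundary nodes, so the node
# farthest from the boundary takes longest to be absorbed.
Q = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
t = absorbed_time(Q)   # decreases toward the absorbing boundary
```

Nodes far from the boundary absorbing nodes accumulate large absorbed times, which is why interior (salient) regions stand out under this metric.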
The secrets of salient object segmentation
In IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Cited by 14 (0 self)
In this paper we provide an extensive evaluation of fixation prediction and salient object segmentation algorithms, as well as statistics of major datasets. Our analysis identifies a serious design flaw of existing salient object benchmarks, which we call dataset design bias: an over-emphasis on stereotypical concepts of saliency. This bias not only creates a discomforting disconnection between fixations and salient object segmentation, but also misleads algorithm design. Based on our analysis, we propose a new high-quality dataset that offers both fixation and salient object segmentation ground truth. With fixations and salient objects presented simultaneously, we are able to bridge the gap between fixations and salient objects, and propose a novel method for salient object segmentation. Finally, we report significant benchmark progress on three existing datasets for segmenting salient objects.
Saliency Detection via Dense and Sparse Reconstruction
Cited by 12 (1 self)
In this paper, we propose a visual saliency detection algorithm from the perspective of reconstruction errors. The image boundaries are first extracted via superpixels as likely cues for background templates, from which dense and sparse appearance models are constructed. For each image region, we first compute dense and sparse reconstruction errors. Second, the reconstruction errors are propagated based on the contexts obtained from K-means clustering. Third, pixel-level saliency is computed by an integration of multi-scale reconstruction errors and refined by an object-biased Gaussian model. We apply the Bayes formula to integrate saliency measures based on dense and sparse reconstruction errors. Experimental results show that the proposed algorithm performs favorably against seventeen state-of-the-art methods in terms of precision and recall. In addition, the proposed algorithm is demonstrated to be more effective in highlighting salient objects uniformly and more robust to background noise.
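The dense half of the reconstruction-error idea can be sketched as follows: build a PCA subspace from the boundary (background) templates, reconstruct every region's feature vector from it, and treat a large residual as evidence of saliency. This is an illustrative sketch only (the sparse half would use a sparse-coding solver instead); all names and the feature setup below are ours.

```python
import numpy as np

def dense_reconstruction_error(background, regions, k=2):
    """Reconstruct each region's feature vector from the top-k principal
    components of the background templates; large errors suggest the
    region is poorly explained by the background, i.e. salient."""
    mu = background.mean(axis=0)
    _, _, Vt = np.linalg.svd(background - mu, full_matrices=False)
    B = Vt[:k]                                # PCA basis of the background
    proj = (regions - mu) @ B.T @ B + mu      # dense reconstruction
    return np.linalg.norm(regions - proj, axis=1) ** 2

rng = np.random.default_rng(2)
bg = rng.normal(size=(50, 8))                 # boundary superpixel features
fg = bg.mean(axis=0) + 10 * np.ones(8)        # a feature far from the background
err = dense_reconstruction_error(bg, np.vstack([bg[:5], fg[None]]))
```

Background-like regions reconstruct almost perfectly, while the foreground-like feature leaves a large residual, mirroring the abstract's use of reconstruction error as a saliency measure.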
PISA: Pixelwise image saliency by aggregating complementary appearance contrast measures with spatial priors
In CVPR
Cited by 10 (2 self)
Driven by recent vision and graphics applications such as image segmentation and object recognition, assigning pixel-accurate saliency values to uniformly highlight foreground objects has become increasingly critical. Often, such fine-grained saliency detection is also desired to have a fast runtime. Motivated by these needs, we propose a generic and fast computational framework called PISA: Pixelwise Image Saliency Aggregating complementary saliency cues based on color and structure contrasts with spatial priors holistically. Overcoming the limitations of previous methods, which often use homogeneous superpixel-based and color-contrast-only treatment, our PISA approach directly performs saliency modeling for each individual pixel and makes use of densely overlapping, feature-adaptive observations for saliency measure computation. We further impose a spatial prior term on each of the two contrast measures, which constrains pixels rendered salient to be compact and centered in the image domain. By fusing complementary contrast measures in such a pixelwise adaptive manner, detection effectiveness is significantly boosted. Without requiring reliable region segmentation or post-relaxation, PISA exploits an efficient edge-aware image representation and filtering technique, and produces spatially coherent yet detail-preserving saliency maps. Extensive experiments on three public datasets demonstrate PISA's superior detection accuracy and competitive runtime speed over state-of-the-art approaches.
Saliency Aggregation: A Data-driven Approach
Cited by 9 (0 self)
A variety of methods have been developed for visual saliency analysis. These methods often complement each other. This paper addresses the problem of aggregating various saliency analysis methods such that the aggregation result outperforms each individual one. We have two major observations. First, different methods perform differently in saliency analysis. Second, the performance of a saliency analysis method varies across individual images. Our idea is to use data-driven approaches to saliency aggregation that appropriately consider the performance gaps among individual methods and the dependence of each method's performance on individual images. This paper discusses various data-driven approaches and finds that the image-dependent aggregation method works best. Specifically, our method uses a Conditional Random Field (CRF) framework for saliency aggregation that models not only the contribution from each individual saliency map but also the interaction between neighboring pixels. To account for the dependence of aggregation on an individual image, our approach selects a subset of images similar to the input image from a training data set and trains the CRF aggregation model using only this subset instead of the whole training set. Our experiments on public saliency benchmarks show that our aggregation method outperforms each individual saliency method and is robust with respect to the selection of aggregated methods.
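The simplest data-driven baseline this abstract improves upon is performance-weighted averaging: weight each method's saliency map by a validation score and normalise. The full paper uses an image-adaptive CRF instead; the sketch below only illustrates the weighted baseline, with invented scores.

```python
import numpy as np

def aggregate(maps, val_scores):
    """Fuse per-method saliency maps (stacked on axis 0) with weights
    proportional to each method's hypothetical validation accuracy."""
    w = np.asarray(val_scores, dtype=float)
    w /= w.sum()                          # normalise method weights
    return np.tensordot(w, maps, axes=1)  # weighted sum over methods

maps = np.stack([np.full((4, 4), 0.2),    # method A: weak response
                 np.full((4, 4), 0.8)])   # method B: strong response
fused = aggregate(maps, val_scores=[1.0, 3.0])  # B dominates the fusion
```

Because the weights here are fixed per method, this baseline ignores the paper's second observation, that performance varies per image, which is precisely what the image-dependent CRF aggregation addresses.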