Results 1 - 10 of 231
Geodesic Active Contours, 1997
"... A novel scheme for the detection of object boundaries is presented. The technique is based on active contours evolving in time according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and both in ..."
Abstract
-
Cited by 1425 (47 self)
A novel scheme for the detection of object boundaries is presented. The technique is based on active contours evolving in time according to intrinsic geometric measures of the image. The evolving contours naturally split and merge, allowing the simultaneous detection of several objects and of both interior and exterior boundaries. The proposed approach is based on the relation between active contours and the computation of geodesics, or minimal-distance curves. The minimal-distance curve lies in a Riemannian space whose metric is defined by the image content. This geodesic approach to object segmentation connects classical energy-minimizing “snakes” with geometric active contours based on the theory of curve evolution. Previous models of geometric active contours are improved, allowing stable boundary detection when the image gradients suffer from large variations, including gaps. Formal results concerning existence, uniqueness, stability, and correctness of the evolution are presented as well. The scheme was implemented using an efficient algorithm for curve evolution. Experimental results of applying the scheme to real images, including objects with holes and medical imagery, demonstrate its power. The results may be extended to 3D object segmentation as well.
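As a rough illustration of the level-set formulation of this flow, the sketch below runs an explicit geodesic-active-contour update in NumPy. The edge-stopping function, balloon term, time step, and all names are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def geodesic_active_contour(image, phi, n_iter=500, dt=0.2, balloon=1.0, sigma=2.0):
    """Explicit level-set sketch of a geodesic active contour flow.

    phi is a signed-distance-like embedding whose zero level set is the
    evolving contour; topology changes (splits and merges) come for free.
    """
    # Edge indicator g(I) = 1 / (1 + |grad(G_sigma * I)|^2): small on strong edges.
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    g = 1.0 / (1.0 + gx ** 2 + gy ** 2)
    g_y, g_x = np.gradient(g)

    eps = 1e-8
    for _ in range(n_iter):
        phi_y, phi_x = np.gradient(phi)
        grad_norm = np.sqrt(phi_x ** 2 + phi_y ** 2) + eps

        # Curvature kappa = div(grad(phi) / |grad(phi)|)
        nx, ny = phi_x / grad_norm, phi_y / grad_norm
        kappa = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)

        # phi_t = g |grad(phi)| (kappa + balloon) + grad(g) . grad(phi)
        phi = phi + dt * (g * grad_norm * (kappa + balloon)
                          + g_x * phi_x + g_y * phi_y)
    return phi
```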
Painterly Rendering with Curved Brush Strokes of Multiple Sizes, 1998
"... We present a new method for creating an image with a handpainted appearance from a photograph, and a new approach to designing styles of illustration. We "paint" an image with a series of spline brush strokes. Brush strokes are chosen to match colors in a source image. A painting is built ..."
Abstract
-
Cited by 238 (9 self)
We present a new method for creating an image with a hand-painted appearance from a photograph, and a new approach to designing styles of illustration. We "paint" an image with a series of spline brush strokes. Brush strokes are chosen to match colors in a source image. A painting is built up in a series of layers, starting with a rough sketch drawn with a large brush. The sketch is painted over with progressively smaller brushes, but only in areas where the sketch differs from the blurred source image. Thus, visual emphasis in the painting corresponds roughly to the spatial energy present in the source image. We demonstrate a technique for painting with long, curved brush strokes, aligned to normals of image gradients. Thus we begin to explore the expressive quality of complex brush strokes. Rather than process images with a single manner of painting, we present a framework for describing a wide range of visual styles. A style is described as an intuitive set of parameters to the pain...
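A compressed sketch of the layered, error-driven painting loop described above, simplified to circular dabs instead of the paper's long curved spline strokes; thresholds, grid spacing, and names are illustrative:

```python
import numpy as np
from scipy import ndimage

def paint_layers(source, brush_radii=(8, 4, 2), threshold=25.0, grid_factor=1.0):
    """Simplified coarse-to-fine painterly rendering sketch (circular dabs)."""
    h, w, _ = source.shape
    canvas = np.full_like(source, 255.0, dtype=float)   # start from a blank canvas
    yy, xx = np.mgrid[0:h, 0:w]

    for r in sorted(brush_radii, reverse=True):          # largest brush first
        # Reference for this layer: source blurred proportionally to brush size
        reference = ndimage.gaussian_filter(source.astype(float), (r, r, 0))
        diff = np.sqrt(((canvas - reference) ** 2).sum(axis=2))
        step = max(1, int(grid_factor * r))
        for y in range(0, h, step):
            for x in range(0, w, step):
                patch = diff[y:y + step, x:x + step]
                if patch.mean() > threshold:              # repaint only where the layer differs
                    py, px = np.unravel_index(patch.argmax(), patch.shape)
                    cy, cx = y + py, x + px
                    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
                    canvas[mask] = reference[cy, cx]      # dab of the reference colour
    return canvas
```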
Shape Priors for Level Set Representations - In ECCV, 2002
"... Level Set Representations, the pioneering framework introduced by Osher and Sethian [14] is the most common choice for the implementation of variational frameworks in Computer Vision since it is implicit, intrinsic, parameter and topology free. However, many Computer vision applications refer to ..."
Abstract
-
Cited by 202 (14 self)
Level Set Representations, the pioneering framework introduced by Osher and Sethian [14], is the most common choice for the implementation of variational frameworks in computer vision, since it is implicit, intrinsic, and parameter- and topology-free. However, many computer vision applications refer to entities with physical meaning that follow a shape form with a certain degree of variability. In this paper, we propose a novel energetic form to introduce shape constraints into level set representations. This formulation exploits all the advantages of these representations, resulting in a very elegant approach that can deal with a large number of parametric as well as continuous transformations. Furthermore, it can be combined with existing, well-known level set-based segmentation approaches, leading to paradigms that can deal with noisy, occluded, and missing or physically corrupted data. Encouraging experimental results are obtained using synthetic and real images.
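The abstract does not spell out the energy, so the following is only a toy sketch of the general idea: couple whatever data-driven level-set force is in use with a quadratic term that pulls the embedding toward a reference shape. It omits the pose and scale transformations the paper handles; all names and parameters are hypothetical.

```python
import numpy as np

def shape_constrained_step(phi, data_force, phi_prior, dt=0.2, alpha=0.5):
    """One explicit level-set update with a toy shape-prior term.

    data_force is whatever segmentation force drives the contour (e.g. a
    geodesic active contour term); the prior penalises deviation of the
    embedding phi from a reference embedding phi_prior.
    """
    # Gradient of the penalty 0.5 * alpha * (phi - phi_prior)^2 w.r.t. phi
    shape_force = -alpha * (phi - phi_prior)
    return phi + dt * (data_force + shape_force)
```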
A fast approximation of the bilateral filter using a signal processing approach - In Proceedings of the European Conference on Computer Vision, 2006
"... The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and fast versions have been proposed. Unfortunately, little is known about the accuracy of such ..."
Abstract
-
Cited by 179 (7 self)
The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and fast versions have been proposed. Unfortunately, little is known about the accuracy of such accelerations. In this paper, we propose a new signal-processing analysis of the bilateral filter which complements recent studies that analyzed it as a PDE or as a robust statistical estimator. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using downsampling in space and intensity. This affords a principled expression of accuracy in terms of bandwidth and sampling. The bilateral filter can be expressed as linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive criteria for downsampling the key operations and achieving significant acceleration of the bilateral filter. We show that, for the same running time, our method is more accurate than previous acceleration techniques. Typically, we are able to process a 2-megapixel image using our acceleration technique in less than a second, with the result visually similar to an exact computation that takes several tens of minutes. The acceleration is most effective with large spatial kernels. Furthermore, this approach extends naturally to color images and cross bilateral filtering.
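For a grayscale image the downsampling idea can be sketched as a splat-blur-slice pipeline in the joint space-intensity domain. Grid resolution, padding, and blur width below are illustrative, not the paper's settings:

```python
import numpy as np
from scipy import ndimage

def fast_bilateral(image, sigma_s=16.0, sigma_r=0.1):
    """Bilateral-grid style approximation for a grayscale image in [0, 1]."""
    img = image.astype(float)
    h, w = img.shape
    # Grid coordinates: downsample space by sigma_s and intensity by sigma_r
    gx = np.arange(w) / sigma_s
    gy = np.arange(h) / sigma_s
    gz = img / sigma_r
    X, Y = np.meshgrid(gx, gy)
    ix, iy, iz = np.rint(X).astype(int), np.rint(Y).astype(int), np.rint(gz).astype(int)

    shape = (iy.max() + 3, ix.max() + 3, iz.max() + 3)
    data = np.zeros(shape)
    weights = np.zeros(shape)
    np.add.at(data, (iy + 1, ix + 1, iz + 1), img)      # homogeneous splat: sum of values...
    np.add.at(weights, (iy + 1, ix + 1, iz + 1), 1.0)   # ...and sum of weights

    # One Gaussian blur in the coarse domain replaces the full-resolution bilateral kernel
    data = ndimage.gaussian_filter(data, 1.0)
    weights = ndimage.gaussian_filter(weights, 1.0)

    # Slice: read back at each pixel's grid position and normalise
    num = ndimage.map_coordinates(data, [Y + 1, X + 1, gz + 1], order=1)
    den = ndimage.map_coordinates(weights, [Y + 1, X + 1, gz + 1], order=1)
    return np.where(den > 1e-8, num / np.maximum(den, 1e-8), img)
```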
Navier-Stokes, fluid dynamics, and image and video inpainting - Proc. IEEE Computer Vision and Pattern Recognition (CVPR), 2001
"... Image inpainting involves filling in part of an image or video using information from the surrounding area. Applications include the restoration of damaged photographs and movies and the removal of selected objects. In this paper, we introduce a class of automated methods for digital inpainting. The ..."
Abstract
-
Cited by 167 (18 self)
Image inpainting involves filling in part of an image or video using information from the surrounding area. Applications include the restoration of damaged photographs and movies and the removal of selected objects. In this paper, we introduce a class of automated methods for digital inpainting. The approach uses ideas from classical fluid dynamics to propagate isophote lines continuously from the exterior into the region to be inpainted. The main idea is to think of the image intensity as a ‘stream function’ for a two-dimensional incompressible flow. The Laplacian of the image intensity plays the role of the vorticity of the fluid; it is transported into the region to be inpainted by a vector field defined by the stream function. The resulting algorithm is designed to continue isophotes while matching gradient vectors at the boundary of the inpainting region. The method is directly based on the Navier-Stokes equations for fluid dynamics, which has the immediate advantage of well-developed theoretical and numerical results. This is a new approach for introducing ideas from computational fluid dynamics into problems in computer vision and image analysis.
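A toy version of the transport step follows directly from this description: inside the hole, advect the Laplacian of the intensity along the isophote direction, interleaving a little smoothing for stability. The helper name, parameters, and the simplified explicit scheme are illustrative; the paper solves the full fluid analogue.

```python
import numpy as np
from scipy import ndimage

def fluid_inpaint(image, mask, n_iter=2000, dt=0.1, diffusion_every=15):
    """Toy stream-function style inpainting sketch for a grayscale image.

    mask is a boolean array marking the region to fill.
    """
    img = image.astype(float).copy()
    m = mask.astype(bool)
    for it in range(n_iter):
        lap = ndimage.laplace(img)            # 'vorticity' of the flow analogue
        lap_y, lap_x = np.gradient(lap)
        img_y, img_x = np.gradient(img)
        # Isophote direction is perpendicular to the gradient: (-I_y, I_x)
        transport = -img_y * lap_x + img_x * lap_y
        img[m] += dt * transport[m]
        if it % diffusion_every == 0:
            smoothed = ndimage.gaussian_filter(img, 0.5)
            img[m] = smoothed[m]              # small diffusion step inside the hole
    return img
```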
Coherence-Enhancing Diffusion Filtering, 1999
"... The completion of interrupted lines or the enhancement of flow-like structures is a challenging task in computer vision, human vision, and image processing. We address this problem by presenting a multiscale method in which a nonlinear diffusion filter is steered by the so-called interest operato ..."
Abstract
-
Cited by 137 (3 self)
The completion of interrupted lines or the enhancement of flow-like structures is a challenging task in computer vision, human vision, and image processing. We address this problem by presenting a multiscale method in which a nonlinear diffusion filter is steered by the so-called interest operator (second-moment matrix, structure tensor). An m-dimensional formulation of this method is analysed with respect to its well-posedness and scale-space properties. An efficient scheme is presented that is stabilized by a semi-implicit additive operator splitting (AOS), and the scale-space behaviour of this method is illustrated by applying it to both 2-D and 3-D images.
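One explicit time step of such a filter can be sketched as: estimate the structure tensor at noise scale sigma and integration scale rho, convert its eigen-structure into a diffusion tensor that smooths strongly along the coherent orientation and weakly across it, and apply divergence-form diffusion. The constants and the central-difference discretization below are illustrative; the paper itself uses a semi-implicit AOS scheme.

```python
import numpy as np
from scipy import ndimage

def ced_step(u, sigma=1.0, rho=4.0, alpha=0.001, C=1.0, dt=0.15):
    """One explicit step of a coherence-enhancing-diffusion-style filter."""
    u = u.astype(float)
    us = ndimage.gaussian_filter(u, sigma)
    uy, ux = np.gradient(us)
    # Structure tensor (second-moment matrix), averaged at integration scale rho
    J11 = ndimage.gaussian_filter(ux * ux, rho)
    J12 = ndimage.gaussian_filter(ux * uy, rho)
    J22 = ndimage.gaussian_filter(uy * uy, rho)

    # Eigen-structure of the 2x2 tensor: orientation and eigenvalue gap
    coherence = (J11 - J22) ** 2 + 4.0 * J12 ** 2          # (mu1 - mu2)^2
    theta = 0.5 * np.arctan2(2.0 * J12, J11 - J22)
    v1x, v1y = np.cos(theta), np.sin(theta)                # dominant gradient direction
    v2x, v2y = -v1y, v1x                                   # coherence (flow) direction

    # Diffusivities: small across structures, close to 1 along coherent flow
    lam1 = np.full_like(u, alpha)
    lam2 = alpha + (1.0 - alpha) * np.exp(-C / (coherence + 1e-12))

    # Diffusion tensor D = lam1 v1 v1^T + lam2 v2 v2^T, then div(D grad u)
    a = lam1 * v1x ** 2 + lam2 * v2x ** 2
    b = lam1 * v1x * v1y + lam2 * v2x * v2y
    c = lam1 * v1y ** 2 + lam2 * v2y ** 2
    uy, ux = np.gradient(u)
    jx, jy = a * ux + b * uy, b * ux + c * uy
    div = np.gradient(jx, axis=1) + np.gradient(jy, axis=0)
    return u + dt * div
```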
A topology preserving level set method for geometric deformable models - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003
"... Active contour and surface models, also known as deformable models, are powerful image segmentation techniques. Geometric deformable models implemented using level set methods have advantages over parametric models due to their intrinsic behavior, parameterization independence, and ease of implement ..."
Abstract
-
Cited by 117 (7 self)
Active contour and surface models, also known as deformable models, are powerful image segmentation techniques. Geometric deformable models implemented using level set methods have advantages over parametric models due to their intrinsic behavior, parameterization independence, and ease of implementation. However, a long-claimed advantage of geometric deformable models, the ability to handle topology changes automatically, turns out to be a liability in applications where the object to be segmented has a known topology that must be preserved. In this paper, we present a new class of geometric deformable models designed using a novel topology-preserving level set method, which achieves topology preservation by applying the simple point concept from digital topology. These new models maintain the other advantages of standard geometric deformable models, including subpixel accuracy and production of nonintersecting curves or surfaces. Moreover, since the topology-preserving constraint is enforced efficiently through local computations, the resulting algorithm incurs only nominal computational overhead over standard geometric deformable models. Several experiments on simulated and real data are provided to demonstrate the performance of this new deformable model algorithm.
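The mechanism can be illustrated in 2D: before a level-set update is allowed to flip the sign of the embedding at a grid point, test whether that point is a simple point of the current binary phase map and veto the flip otherwise. The sketch below uses the standard (8, 4) digital-topology test; the clamping value, scan order, and names are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

FOUR = ndimage.generate_binary_structure(2, 1)                 # 4-connectivity
EIGHT = np.ones((3, 3), dtype=bool)                            # 8-connectivity
FOUR_NBRS = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=bool)
EIGHT_NBRS = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=bool)

def topo_number(patch, structure, nbr_mask):
    """Connected components of a 3x3 patch (centre excluded) touching the centre."""
    p = patch.copy()
    p[1, 1] = False
    labels, _ = ndimage.label(p, structure=structure)
    return len(set(labels[nbr_mask]) - {0})

def is_simple_point(fg_patch):
    """True if flipping the centre pixel preserves digital topology
    (8-connected foreground, 4-connected background)."""
    fg = fg_patch.astype(bool)
    return (topo_number(fg, EIGHT, EIGHT_NBRS) == 1 and
            topo_number(~fg, FOUR, FOUR_NBRS) == 1)

def topology_preserving_update(phi, force, dt=0.2, eps=1e-3):
    """One level-set step that vetoes sign changes at non-simple points."""
    new_phi = phi + dt * force
    inside = phi < 0                                   # current binary phase map
    flips = np.argwhere((new_phi < 0) != inside)
    for y, x in flips:
        if 0 < y < phi.shape[0] - 1 and 0 < x < phi.shape[1] - 1:
            patch = inside[y - 1:y + 2, x - 1:x + 2]
            if is_simple_point(patch):
                inside[y, x] = new_phi[y, x] < 0       # accept the flip sequentially
            else:
                new_phi[y, x] = eps * np.sign(phi[y, x])   # clamp: keep the old phase
    return new_phi
```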
Why simple shrinkage is still relevant for redundant representations - IEEE Trans. Inf. Theory, 2006
"... General Description • Problem statement: Shrinkage is a well known and appealing denoising technique, introduced originally by Donoho and Johnstone in 1994. The use of shrinkage for denoising is known to be optimal for Gaussian white noise, provided that the sparsity on the signal's representa ..."
Abstract
-
Cited by 115 (12 self)
General Description
• Problem statement: Shrinkage is a well-known and appealing denoising technique, introduced originally by Donoho and Johnstone in 1994. The use of shrinkage for denoising is known to be optimal for Gaussian white noise, provided that sparsity of the signal's representation is enforced using a unitary transform. Still, shrinkage is also practiced with non-unitary, and even redundant, representations, typically leading to satisfactory results. In this paper we shed some light on this behavior.
• Originality of the work: The main argument in this paper is that such simple shrinkage can be interpreted as the first iteration of an algorithm that solves the basis pursuit denoising (BPDN) problem. While the desired solution of BPDN is hard to obtain in general, a simple iterative procedure that amounts to step-wise shrinkage can be employed with quite successful performance.
• New results: We demonstrate how simple shrinkage emerges as the first iteration of such an algorithm. Furthermore, we show how shrinkage can be iterated, turning it into an effective algorithm that minimizes the BPDN objective via simple shrinkage steps, in order to further strengthen the denoising effect. Lastly, the emerging algorithm stands between basis pursuit and matching pursuit as a novel and appealing pursuit technique for atom decomposition in the presence of noise.
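The argument maps onto the standard iterated soft-thresholding recursion for the BPDN objective min_x 0.5*||y - Dx||^2 + lambda*||x||_1: starting from x = 0, the first iteration reduces to a plain shrinkage of the scaled analysis coefficients D^T y / c. A minimal sketch, with the step size taken from the spectral norm and all names illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    """Soft shrinkage: sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def iterated_shrinkage(D, y, lam, n_iter=50):
    """Iterative shrinkage for min_x 0.5*||y - D x||^2 + lam*||x||_1
    with a (possibly redundant) dictionary D."""
    c = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the data-term gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)             # gradient of the quadratic data term
        x = soft_threshold(x - grad / c, lam / c)
    return x
```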
On the Equivalence of Soft Wavelet Shrinkage, Total Variation Diffusion, Total Variation Regularization, and SIDEs - SIAM J. Numer. Anal., 2004
"... Soft wavelet shrinkage, total variation (TV) diffusion, TV regularization, and a dynamical system called SIDEs are four useful techniques for discontinuity preserving denoising of signals and images. In this paper we investigate under which circumstances these methods are equivalent in the one-dimen ..."
Abstract
-
Cited by 89 (18 self)
Soft wavelet shrinkage, total variation (TV) diffusion, TV regularization, and a dynamical system called SIDEs are four useful techniques for discontinuity-preserving denoising of signals and images. In this paper we investigate under which circumstances these methods are equivalent in the one-dimensional case. First, we prove that Haar wavelet shrinkage on a single scale is equivalent to a single step of space-discrete TV diffusion or regularization of two-pixel pairs. In the translationally invariant case we show that applying cycle spinning to Haar wavelet shrinkage on a single scale can be regarded as an absolutely stable explicit discretization of TV diffusion. We prove that space-discrete TV diffusion and TV regularization are identical, and that they are also equivalent to the SIDEs system when a specific force function is chosen. Afterwards, we show that wavelet shrinkage on multiple scales can be regarded as a single step of diffusion filtering or regularization of the Laplacian pyramid of the signal. We analyze possibilities to avoid Gibbs-like artifacts in multiscale Haar wavelet shrinkage by scaling the thresholds. Finally, we present experiments in which hybrid methods are designed that combine the advantages of wavelets and PDE/variational approaches. These methods are based on iterated shift-invariant wavelet shrinkage at multiple scales with scaled thresholds.
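The single-scale, two-pixel-pair equivalence is easy to check numerically: soft shrinkage of the Haar detail coefficient moves both pixels of a pair toward their mean, but never past it, exactly as TV diffusion of the pair does once the threshold is rescaled into a diffusion time. A small sanity check under the orthonormal Haar normalisation used here (the exact constant depends on that normalisation):

```python
import numpy as np

def haar_shrink_pairs(u, tau):
    """Single-scale soft Haar shrinkage on non-overlapping pixel pairs."""
    a, b = u[0::2].astype(float), u[1::2].astype(float)
    m, d = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
    d = np.sign(d) * np.maximum(np.abs(d) - tau, 0.0)    # soft shrinkage of details
    return np.ravel(np.column_stack(((m + d) / np.sqrt(2), (m - d) / np.sqrt(2))))

def tv_diffuse_pairs(u, t):
    """Analytic TV diffusion of each two-pixel pair for time t:
    both pixels move toward their pair mean, but never past it."""
    a, b = u[0::2].astype(float), u[1::2].astype(float)
    step = np.minimum(t, np.abs(a - b) / 2.0)
    return np.ravel(np.column_stack((a - np.sign(a - b) * step,
                                     b + np.sign(a - b) * step)))

# Threshold tau corresponds to diffusion time tau / sqrt(2) in this normalisation
u = np.array([4.0, 1.0, 2.0, 2.5, -1.0, 3.0])
tau = 1.2
print(np.allclose(haar_shrink_pairs(u, tau), tv_diffuse_pairs(u, tau / np.sqrt(2))))
```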