Results 1 - 10 of 180
The Contourlet Transform: An Efficient Directional Multiresolution Image Representation
- IEEE TRANSACTIONS ON IMAGE PROCESSING
Abstract - Cited by 513 (20 self)
The limitations of commonly used separable extensions of one-dimensional transforms, such as the Fourier and wavelet transforms, in capturing the geometry of image edges are well known. In this paper, we pursue a “true” two-dimensional transform that can capture the intrinsic geometrical structure that is key in visual information. The main challenge in exploring geometry in images comes from the discrete nature of the data. Thus, unlike other approaches, such as curvelets, that first develop a transform in the continuous domain and then discretize for sampled data, our approach starts with a discrete-domain construction and then studies its convergence to an expansion in the continuous domain. Specifically, we construct a discrete-domain multiresolution and multidirection expansion using non-separable filter banks, in much the same way that wavelets were derived from filter banks. This construction results in a flexible multiresolution, local, and directional image expansion using contour segments, and thus it is named the contourlet transform. The discrete contourlet transform has a fast iterated filter bank algorithm that requires on the order of N operations for N-pixel images. Furthermore, we establish a precise link between the developed filter bank and the associated continuous-domain contourlet expansion via a directional multiresolution analysis framework. We show that with parabolic scaling and sufficient directional vanishing moments, contourlets achieve the optimal approximation rate for piecewise smooth functions with discontinuities along twice continuously differentiable curves. Finally, we show some numerical experiments demonstrating the potential of contourlets in several image processing applications.
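For readers skimming this listing, the approximation claim can be made concrete. The standard nonlinear-approximation benchmark for images that are smooth away from smooth edges is, up to constants, the following (rates recalled here from the surrounding literature as a reminder, not quoted from the paper itself):

```latex
% Best M-term approximation error for a C^2/C^2 piecewise-smooth image f:
% separable 2-D wavelets versus a contourlet-type frame with parabolic scaling
% and directional vanishing moments (log factor as usually stated).
\|f - \hat{f}^{\mathrm{wav}}_{M}\|_{2}^{2} = O\!\left(M^{-1}\right),
\qquad
\|f - \hat{f}^{\mathrm{contourlet}}_{M}\|_{2}^{2} = O\!\left((\log M)^{3}\, M^{-2}\right).
```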
The Dual-Tree Complex Wavelet Transform -- A coherent framework for multiscale signal and image processing
, 2005
Abstract - Cited by 270 (29 self)
The dual-tree complex wavelet transform (CWT) is a relatively recent enhancement to the discrete wavelet transform (DWT), with important additional properties: It is nearly shift invariant and directionally selective in two and higher dimensions. It achieves this with a redundancy factor of only 2^d for d-dimensional signals, which is substantially lower than the undecimated DWT. The multidimensional (M-D) dual-tree CWT is nonseparable but is based on a computationally efficient, separable filter bank (FB). This tutorial discusses the theory behind the dual-tree transform, shows how complex wavelets with good properties can be designed, and illustrates a range of applications in signal and image processing. We use the complex number symbol C in CWT to ...
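As a structural illustration of the "two parallel real DWT trees combined into complex coefficients" idea, here is a minimal 1-D sketch using PyWavelets. The stand-in wavelets 'db4' and 'db5' are an assumption for illustration only; the actual transform relies on specially designed filter pairs that form approximate Hilbert-transform pairs, so this sketch only shows the structure and the 2^d = 2 redundancy in 1-D, not the near shift-invariance of the real design.

```python
import numpy as np
import pywt

# Toy 1-D signal with a step edge.
x = np.zeros(256)
x[100:] = 1.0

# Two parallel, critically sampled real DWTs ("tree a" and "tree b").
# NOTE: db4/db5 are stand-ins; a real dual-tree CWT uses filter pairs designed
# so that the two wavelets are an approximate Hilbert-transform pair.
tree_a = pywt.wavedec(x, 'db4', level=4, mode='periodization')
tree_b = pywt.wavedec(x, 'db5', level=4, mode='periodization')

# Interpret tree a as the real part and tree b as the imaginary part.
complex_coeffs = [a + 1j * b for a, b in zip(tree_a, tree_b)]

n_input = x.size
n_real_numbers = sum(2 * c.size for c in complex_coeffs)  # real + imaginary parts
print('input samples          :', n_input)
print('stored real numbers    :', n_real_numbers)
print('redundancy factor (2^d):', n_real_numbers / n_input)  # -> 2.0 for d = 1
```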
A Tutorial on Modern Lossy Wavelet Image Compression: Foundations of JPEG 2000
, 2001
Abstract - Cited by 97 (0 self)
The JPEG committee has recently released its new image coding standard, JPEG 2000, which will serve as a supplement for the original JPEG standard introduced in 1992. Rather than incrementally improving on the original standard, JPEG 2000 implements an entirely new way of compressing images based on the wavelet transform, in contrast to the discrete cosine transform (DCT) used in the original JPEG standard. The significant change in coding methods between the two standards leads one to ask: What prompted the JPEG committee to adopt such a dramatic change? The answer to this question comes from considering the state of image coding at the time the original JPEG standard was being formed. At that time wavelet analysis and wavelet coding were still ...
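To make the DCT-versus-wavelet contrast concrete, here is a small, hedged sketch (not taken from the article): it applies an 8 × 8 block DCT and a global 2-D wavelet transform to a synthetic test image and reports how much energy the largest 5% of coefficients capture. The test image, the 'bior4.4' filter choice (PyWavelets' name for the 9/7 biorthogonal pair commonly identified with lossy JPEG 2000), and the 5% threshold are all assumptions made for illustration.

```python
import numpy as np
import pywt
from scipy.fft import dctn

# Hypothetical test image: a smooth ramp plus a sharp diagonal edge, standing in
# for real image content (no external test files assumed).
n = 256
yy, xx = np.mgrid[0:n, 0:n]
img = xx / n + (yy > xx).astype(float)

def energy_in_top(coeffs, frac=0.05):
    """Fraction of total energy captured by the largest `frac` of coefficients."""
    c = np.sort(np.abs(coeffs.ravel()))[::-1]
    k = int(frac * c.size)
    return (c[:k] ** 2).sum() / (c ** 2).sum()

# 8x8 block DCT, the transform behind the original JPEG standard.
blocks = img.reshape(n // 8, 8, n // 8, 8).swapaxes(1, 2)
dct_coeffs = dctn(blocks, axes=(-2, -1), norm='ortho')

# Global 2-D wavelet transform, the transform behind JPEG 2000.
wav_coeffs, _ = pywt.coeffs_to_array(
    pywt.wavedec2(img, 'bior4.4', mode='periodization', level=5))

print('block DCT :', energy_in_top(dct_coeffs))
print('wavelet   :', energy_in_top(wav_coeffs))
```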
Theoretical Foundations of Transform Coding
, 2001
Abstract - Cited by 80 (6 self)
This article explains the fundamental principles of transform coding; these principles apply equally well to images, audio, video, and various other types of data, so abstract formulations are given. Much of the material presented here is adapted from [14, Chap. 2, 4]. The details on wavelet transform-based image compression and the JPEG2000 image compression standard are given in the following two articles of this special issue [38], [37].
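As a pointer to the kind of result such a treatment builds up to, one standard high-rate statement (recalled here from textbook treatments of transform coding, not quoted from this article) is the coding gain of an orthonormal transform with optimal bit allocation over plain scalar quantization, where the sigma_i^2 are the variances of the N transform coefficients:

```latex
% High-rate transform coding gain of an orthonormal transform with optimal
% bit allocation over direct scalar quantization (PCM), and the resulting
% distortion-rate behaviour at R bits per sample.
G_{TC} \;=\; \frac{\tfrac{1}{N}\sum_{i=1}^{N}\sigma_i^{2}}
                  {\Big(\prod_{i=1}^{N}\sigma_i^{2}\Big)^{1/N}},
\qquad
D(R) \;\approx\; c\,2^{-2R}\Big(\prod_{i=1}^{N}\sigma_i^{2}\Big)^{1/N}.
```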
Wavelets, Approximation, and Compression
, 2001
Abstract - Cited by 68 (6 self)
The aim of this article is to look at recent wavelet advances from a signal processing perspective. In particular, approximation results are reviewed, and their implications for compression algorithms are discussed. New constructions and open problems are also addressed.
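The link between approximation and compression that the article reviews can be previewed with a tiny experiment: keep only the M largest wavelet coefficients of a piecewise-smooth signal and watch the error fall as M grows. The test signal, the 'db4' wavelet, and the values of M below are illustrative assumptions; PyWavelets is used for the transform.

```python
import numpy as np
import pywt

# Piecewise-smooth test signal: a smooth oscillation plus one jump
# (an illustrative stand-in for the signals discussed in the article).
n = 1024
t = np.linspace(0, 1, n)
x = np.sin(6 * np.pi * t) + 1.5 * (t > 0.37)

coeffs = pywt.wavedec(x, 'db4', level=6, mode='periodization')
arr, slices = pywt.coeffs_to_array(coeffs)

# Nonlinear approximation: keep only the M largest-magnitude coefficients.
for M in (16, 32, 64, 128):
    kept = np.zeros_like(arr)
    idx = np.argsort(np.abs(arr))[::-1][:M]
    kept[idx] = arr[idx]
    x_M = pywt.waverec(pywt.array_to_coeffs(kept, slices, output_format='wavedec'),
                       'db4', mode='periodization')
    rel_err = np.sum((x - x_M) ** 2) / np.sum(x ** 2)
    print(f'M = {M:4d}   relative squared error = {rel_err:.2e}')
```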
Directionlets: Anisotropic Multi-directional Representation with Separable Filtering
- IEEE TRANSACTIONS ON IMAGE PROCESSING
, 2004
Abstract - Cited by 58 (6 self)
In spite of the success of the standard wavelet transform (WT) in image processing in recent years, the efficiency of its representation is limited by the spatial isotropy of its basis functions built in the horizontal and vertical directions. One-dimensional (1-D) discontinuities in images (edges and contours), which are very important elements in visual perception, intersect too many wavelet basis functions and lead to a non-sparse representation. To efficiently capture these anisotropic geometrical structures, which are characterized by many more directions than just the horizontal and vertical, a more complex multi-directional (M-DIR) and anisotropic transform is required. We present a new lattice-based perfect-reconstruction and critically sampled anisotropic M-DIR WT. The transform retains the separable filtering and subsampling and the simplicity of computations and filter design from the standard two-dimensional (2-D) WT. The corresponding anisotropic basis functions (directionlets) have directional vanishing moments (DVM) along any two directions with rational slopes. Furthermore, we show that this novel transform provides an efficient tool for non-linear approximation (NLA) of images, achieving the approximation power O(N^-1.55), which is competitive with the rates achieved by other, oversampled transform constructions.
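The phrase "retains the separable filtering" is the key point, and it can be illustrated very loosely in a few lines: shift each row by an integer amount so that a chosen digital direction becomes vertical, then apply ordinary 1-D wavelet filtering along columns. This is only a sketch of the idea, not the paper's construction (which uses integer lattices and stays critically sampled); the circular shifts, the integer slope, and the stripe image below are simplifying assumptions.

```python
import numpy as np
import pywt

# Toy image: stripes oriented along the direction (1 row, 2 columns), i.e. the
# intensity is constant along digital lines of slope 2 (a hypothetical stand-in
# for an oriented image feature).
n = 128
rr, cc = np.mgrid[0:n, 0:n]
img = (((cc - 2 * rr) % n) < n // 2).astype(float)

def shear_rows(a, cols_per_row):
    """Circularly shift row r by -cols_per_row*r so that digital lines with
    direction (1, cols_per_row) become image columns.  A crude stand-in for the
    integer-lattice resampling used by directionlets; only integer slopes are
    handled and the periodic wraparound is assumed to be harmless."""
    return np.stack([np.roll(a[r], -cols_per_row * r) for r in range(a.shape[0])])

# Separable 1-D wavelet filtering along plain image columns...
_, d_vertical = pywt.dwt(img, 'db2', axis=0)
# ...versus the same 1-D filtering applied after the shear, i.e. effectively
# along the direction (1, 2) that follows the stripes.
_, d_directional = pywt.dwt(shear_rows(img, 2), 'db2', axis=0)

print('detail energy, vertical filtering   :', float(np.square(d_vertical).sum()))
print('detail energy, filtering along (1,2):', float(np.square(d_directional).sum()))
```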
Accuracy-guaranteed bit-width optimization
- IEEE TRANS. COMP.-AIDED DES. INTEG. CIR. SYS
, 2006
Abstract - Cited by 32 (13 self)
An automated static approach for optimizing bit widths of fixed-point feedforward designs with guaranteed accuracy, called MiniBit, is presented. Methods to minimize both the integer and fraction parts of fixed-point signals with the aim of minimizing the circuit area are described. For range analysis, the technique in this paper identifies the number of integer bits necessary to meet range requirements. For precision analysis, a semianalytical approach with analytical error models in conjunction with adaptive simulated annealing is employed to optimize the number of fraction bits. The analytical models make it possible to guarantee overflow/underflow protection and numerical accuracy for all inputs over the user-specified input intervals. Using a stream compiler for field-programmable gate arrays (FPGAs), the approach in this paper is demonstrated with polynomial approximation, RGB-to-YCbCr conversion, matrix multiplication, B-splines, and discrete cosine transform placed and routed on a Xilinx Virtex-4 FPGA. Improvements for a given design reduce the area and the latency by up to 26% and 12%, respectively, over a design using optimum uniform fraction bit widths. Studies show that MiniBit-optimized designs are within 1% of the area produced from the integer linear programming approach.
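A minimal sketch of the two analyses described above, for a toy datapath y = a*x + b: range analysis picks the integer bits from the input interval, and a crude analytical error model picks a fraction width that meets a target accuracy. This is not MiniBit itself (the paper uses affine-arithmetic-style error models and adaptive simulated annealing); the datapath, the interval, and the 2^-8 accuracy target below are assumptions for illustration.

```python
import math

def integer_bits(lo, hi, signed=True):
    """Range analysis: a conservative count of integer bits that avoids
    overflow for any value in [lo, hi] (plus a sign bit if signed)."""
    m = max(abs(lo), abs(hi))
    bits = math.ceil(math.log2(m + 1e-12))  # magnitude bits, rounded up
    return bits + (1 if signed else 0)

def worst_case_error(frac_bits_x, frac_bits_y, a):
    """Precision analysis for y = a*x + b under rounding: rounding x to
    frac_bits_x contributes |a| * 2^-(frac_bits_x+1); rounding the result to
    frac_bits_y contributes 2^-(frac_bits_y+1)."""
    return abs(a) * 2.0 ** -(frac_bits_x + 1) + 2.0 ** -(frac_bits_y + 1)

# Example: x in [-3.2, 5.7], y = 1.75*x + 0.5, required accuracy 2^-8.
a, b = 1.75, 0.5
x_lo, x_hi = -3.2, 5.7
y_lo = min(a * x_lo, a * x_hi) + b
y_hi = max(a * x_lo, a * x_hi) + b

print('integer bits for x:', integer_bits(x_lo, x_hi))
print('integer bits for y:', integer_bits(y_lo, y_hi))

target = 2.0 ** -8
for f in range(4, 16):
    if worst_case_error(f, f, a) <= target:
        print('smallest uniform fraction width meeting 2^-8 accuracy:', f)
        break
```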
Analysis and architecture design of block-coding engine for EBCOT in JPEG 2000
- IEEE Trans. Circuits and Systems
, 2003
Abstract - Cited by 27 (8 self)
Embedded block coding with optimized truncation (EBCOT) is the most important technology in the latest image-coding standard, JPEG 2000. The hardware design of the block-coding engine in EBCOT is critical because the operations are bit-level processing and occupy more than half of the computation time of the whole compression process. A general-purpose processor (GPP) is therefore very inefficient at processing these operations. In this paper, we present a detailed analysis and a dedicated hardware architecture of the block-coding engine to execute the EBCOT algorithm efficiently. The context formation process in EBCOT is analyzed to gain insight into the characteristics of the operation. A column-based architecture and two speed-up methods, sample skipping (SS) and group-of-column skipping (GOCS), are then proposed for context generation. As for the arithmetic encoder design, pipelining and look-ahead techniques are used to speed up the processing. It is shown that about 60% of the processing time is saved compared with a straightforward sample-based implementation. A test chip is designed, and simulation results show that it can process a 4.6-million-pixel image within 1 s, corresponding to a 2400 × 1800 image, or a CIF (352 × 288) 4:2:0 video sequence at 30 frames per second, at a 50-MHz working frequency. Index Terms—Block-coding engine, EBCOT, embedded block coding with optimized truncation, JPEG 2000.
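The throughput figures quoted at the end of the abstract are easy to sanity-check with a few lines of arithmetic: a 2400 × 1800 still image is 4.32 Mpixels, and CIF 4:2:0 at 30 frames per second is about 4.56 Mpixels per second, both within the stated 4.6 Mpixels/s (the 1.5 factor below is the usual luma-plus-subsampled-chroma pixel count for 4:2:0).

```python
# Quick sanity check of the throughput figures quoted in the abstract
# (4.6 Mpixels/s at a 50 MHz clock).
throughput = 4.6e6                    # pixels per second claimed for the engine

still_image = 2400 * 1800             # 4.32 Mpixels -> fits within one second
cif_420_per_frame = 352 * 288 * 1.5   # luma plus two quarter-size chroma planes
video_rate = cif_420_per_frame * 30   # ~4.56 Mpixels/s for 30 frame/s CIF 4:2:0

print(f'2400x1800 still image : {still_image / 1e6:.2f} Mpixels')
print(f'CIF 4:2:0 @ 30 fps    : {video_rate / 1e6:.2f} Mpixels/s')
print('both fit within the quoted 4.6 Mpixels/s:',
      still_image <= throughput and video_rate <= throughput)
```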
Anti-forensics of digital image compression
- IEEE Trans. Inf. Forensics Security
, 2011
Abstract - Cited by 24 (9 self)
As society has become increasingly reliant upon digital images to communicate visual information, a number of forensic techniques have been developed to verify the authenticity of digital images. Amongst the most successful of these are techniques that make use of an image’s compression history and its associated compression fingerprints. Little consideration has been given, however, to anti-forensic techniques capable of fooling forensic algorithms. In this paper, we present a set of anti-forensic techniques designed to remove forensically significant indicators of compression from an image. We do this by first developing a generalized framework for the design of anti-forensic techniques to remove compression fingerprints from an image’s transform coefficients. This framework operates by estimating the distribution of an image’s transform coefficients before compression, then adding anti-forensic dither to the transform coefficients of a compressed image so that their distribution matches the estimated one. We then use this framework to develop anti-forensic techniques specifically targeted at erasing compression fingerprints left by both JPEG and wavelet-based coders. Additionally, we propose a technique to remove statistical traces of the blocking artifacts left by image compression algorithms that divide an image into segments during processing. Through a series of experiments, we demonstrate that our anti-forensic techniques are capable of removing forensically detectable traces of image compression without significantly impacting an image’s visual quality. Furthermore, we show how these techniques can be used to render several forms of image tampering, such as double JPEG compression, cut-and-paste image forgery, and image origin falsification, undetectable through compression-history-based forensic means. Index Terms—Anti-forensics, anti-forensic dither, digital forensics, image compression, JPEG compression.
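To see what "adding anti-forensic dither so that the distribution matches the estimated one" means in the simplest setting, here is a 1-D sketch for a single transform subband: coefficients are modelled as Laplacian, quantization collapses them onto multiples of the step q (the fingerprint), and each quantized value is redrawn from the model restricted to its quantization bin. The Laplacian scale is treated as known here, which sidesteps the estimation step the paper actually solves; all parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model for one DCT subband: Laplacian AC coefficients, JPEG-style
# quantization with step q, then anti-forensic dither that re-spreads each
# quantized value over its quantization bin according to the coefficient model.
b, q, n = 4.0, 8.0, 200_000

def laplace_cdf(x):
    return np.where(x < 0, 0.5 * np.exp(x / b), 1 - 0.5 * np.exp(-x / b))

def laplace_inv_cdf(u):
    return np.where(u < 0.5, b * np.log(2 * u), -b * np.log(2 * (1 - u)))

coeffs = rng.laplace(0.0, b, size=n)       # "original" transform coefficients
quantized = q * np.round(coeffs / q)       # quantization fingerprint: a comb at multiples of q

# Dither: redraw each coefficient from the Laplacian model restricted to the
# quantization bin it was mapped to (inverse-CDF sampling within the bin),
# so the overall histogram loses its comb shape.
lo, hi = quantized - q / 2, quantized + q / 2
u = rng.uniform(laplace_cdf(lo), laplace_cdf(hi))
dithered = laplace_inv_cdf(u)

for name, x in (('original', coeffs), ('quantized', quantized), ('dithered', dithered)):
    on_grid = np.isclose(np.mod(x, q), 0) | np.isclose(np.mod(x, q), q)
    print(f'{name:9s}: fraction of values sitting exactly on the q-grid = {on_grid.mean():.3f}')
```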