Results 1–10 of 15
Analysis and complexity reduction of multiple reference frames motion estimation in H.264/AVC
 IEEE Trans. Circuits Syst. Video Technol
, 2006
Abstract

Cited by 31 (0 self)
Abstract—In the new video coding standard H.264/AVC, motion estimation (ME) is allowed to search multiple reference frames. The required computation is therefore greatly increased, in proportion to the number of searched reference frames. However, the reduction in prediction residues depends mostly on the nature of the sequence, not on the number of searched frames. Sometimes the prediction residues can be greatly reduced, but frequently much computation is wasted without any gain in coding performance. In this paper, we propose a context-based adaptive method to speed up multiple reference frame ME. Statistical analysis is first applied to the information available for each macroblock (MB) after intra-prediction and inter-prediction from the previous frame. Context-based adaptive criteria are then derived to determine whether it is necessary to search more reference frames. The reference frame selection criteria are related to the selected MB modes, inter-prediction residues, intra-prediction residues, motion vectors of sub-partitioned blocks, and quantization parameters. Many standard video sequences are tested as examples. The simulation results show that the proposed algorithm maintains virtually the same video quality as exhaustive search of multiple reference frames, while 76%–96% of the computation for searching unnecessary reference frames is avoided. Moreover, our fast reference frame selection is orthogonal to conventional fast block matching algorithms, and the two can easily be combined for even more efficient implementations.
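The early-termination idea in the abstract above can be sketched as follows. This is a minimal illustration only, not the paper's method: it replaces the context-based criteria (MB modes, intra/inter residues, motion vectors, QP) with a single toy SAD threshold, and all names and values are hypothetical.

```python
import numpy as np

def sad(block, candidate):
    """Sum of absolute differences between a macroblock and a reference candidate."""
    return int(np.abs(block.astype(np.int64) - candidate.astype(np.int64)).sum())

def search_reference_frames(block, ref_blocks, stop_threshold=500):
    """Search candidate blocks from successive reference frames in order,
    stopping early once the best residual so far is already small.
    `stop_threshold` is a toy stand-in for the paper's adaptive criteria."""
    best_idx, best_cost = -1, None
    for idx, candidate in enumerate(ref_blocks):
        cost = sad(block, candidate)
        if best_cost is None or cost < best_cost:
            best_idx, best_cost = idx, cost
        if best_cost < stop_threshold:  # skip the remaining reference frames
            break
    return best_idx, best_cost
```

When the first reference frame already yields a residual below the threshold, the remaining frames are never searched, which is the source of the computation savings the abstract reports.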
Anti-forensics of digital image compression
 IEEE Trans. Inf. Forensics Security
, 2011
Abstract

Cited by 24 (9 self)
Abstract—As society has become increasingly reliant upon digital images to communicate visual information, a number of forensic techniques have been developed to verify the authenticity of digital images. Amongst the most successful of these are techniques that make use of an image’s compression history and its associated compression fingerprints. Little consideration has been given, however, to anti-forensic techniques capable of fooling forensic algorithms. In this paper, we present a set of anti-forensic techniques designed to remove forensically significant indicators of compression from an image. We do this by first developing a generalized framework for the design of anti-forensic techniques to remove compression fingerprints from an image’s transform coefficients. This framework operates by estimating the distribution of an image’s transform coefficients before compression, then adding anti-forensic dither to the transform coefficients of a compressed image so that their distribution matches the estimated one. We then use this framework to develop anti-forensic techniques specifically targeted at erasing compression fingerprints left by both JPEG and wavelet-based coders. Additionally, we propose a technique to remove statistical traces of the blocking artifacts left by image compression algorithms that divide an image into segments during processing. Through a series of experiments, we demonstrate that our anti-forensic techniques are capable of removing forensically detectable traces of image compression without significantly impacting an image’s visual quality. Furthermore, we show how these techniques can be used to render several forms of image tampering, such as double JPEG compression, cut-and-paste image forgery, and image origin falsification, undetectable through compression-history-based forensic means. Index Terms—Anti-forensics, anti-forensic dither, digital forensics, image compression, JPEG compression.
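The dither framework described in the abstract above can be sketched in a few lines. This is a toy rejection-sampling illustration under an assumed Laplacian model for the pre-compression coefficients, not the authors' implementation (the paper derives the conditional dither distributions in closed form); all names are hypothetical.

```python
import numpy as np

def add_antiforensic_dither(dequantized, q, lam, rng=None):
    """Add dither to dequantized DCT coefficients (multiples of q) so that,
    within each quantization bin, the output follows an assumed
    pre-compression Laplacian with parameter `lam`, removing the comb-like
    quantization fingerprint from the coefficient histogram."""
    rng = np.random.default_rng(0) if rng is None else rng
    flat = np.asarray(dequantized, dtype=float).ravel()
    out = np.empty(flat.size, dtype=float)
    for i, y in enumerate(flat):
        # maximum Laplacian density inside the bin [y - q/2, y + q/2)
        peak = np.exp(-lam * max(abs(y) - q / 2.0, 0.0))
        while True:  # rejection sampling of the in-bin dither
            d = rng.uniform(-q / 2.0, q / 2.0)
            if rng.uniform() < np.exp(-lam * abs(y + d)) / peak:
                out[i] = y + d
                break
    return out.reshape(np.shape(dequantized))
```

Each output value stays inside its original quantization bin, so the added distortion is bounded by q/2 per coefficient while the histogram gaps between the multiples of q are filled in.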
Recompression of JPEG images by requantization
 IEEE Trans. Image Process
, 2003
Abstract

Cited by 15 (0 self)
Abstract—In this paper, we report a novel heuristic for requantizing JPEG images. The resulting images are generally smaller and often have better perceptual image quality than those produced by a “blind” requantization approach, that is, one that does not consider the properties of the quantization matrices. The heuristic is supported by a detailed mathematical treatment which combines the well-known Laplacian distribution of the AC discrete cosine transform (DCT) coefficients with an analysis of the error introduced by requantization. We note that the technique is applicable to any image compression method which employs discrete cosine transforms and quantization. Index Terms—Compression, JPEG image format, quantization, recompression, requantization.
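For reference, the “blind” baseline the abstract contrasts against amounts to re-rounding each stored level under the new step. This sketch shows only that baseline, not the paper's heuristic; note that Python's built-in `round` uses banker's rounding at ties, unlike the half-away-from-zero rounding typical in codecs.

```python
def requantize(levels, q1, q2):
    """'Blind' requantization of JPEG quantization levels from step q1 to
    step q2: dequantize, then re-round under the new step. The paper's
    heuristic additionally exploits the relationship between q1 and q2
    (e.g. q2 an integer multiple of q1) to reduce the rounding error."""
    return [round(level * q1 / q2) for level in levels]
```

When q2 is an exact multiple of q1, the only error introduced is that of the coarser step itself, which is the kind of relationship the paper's analysis builds on.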
ANTI-FORENSICS OF JPEG COMPRESSION
Abstract

Cited by 11 (0 self)
The widespread availability of photo editing software has made it easy to create visually convincing digital image forgeries. To address this problem, there has been much recent work in the field of digital image forensics. There has been little work, however, in the field of anti-forensics, which seeks to develop a set of techniques designed to fool current forensic methodologies. In this work, we present a technique for disguising an image’s JPEG compression history. An image’s JPEG compression history can be used to provide evidence of image manipulation, supply information about the camera used to generate an image, and identify forged regions within an image. We show how the proper addition of noise to an image’s discrete cosine transform coefficients can sufficiently remove the quantization artifacts which act as indicators of JPEG compression, while introducing an acceptable level of distortion. Simulation results are provided to verify the efficacy of this anti-forensic technique. Index Terms—Anti-forensics, Digital Forensics, JPEG Compression
The cost of JPEG compression anti-forensics
 Proc. IEEE ICASSP
, 2011
Abstract

Cited by 10 (3 self)
The statistical footprint left by JPEG compression can be a valuable source of information for the forensic analyst. Recently, it has been shown that a suitable anti-forensic method can destroy these traces by properly adding a noise-like signal to the quantized DCT coefficients. In this paper we analyze the cost of this technique in terms of introduced distortion and loss of image quality. We characterize the dependency of the distortion on the image statistics in the DCT domain and on the quantization step used in JPEG compression. We also evaluate the loss of quality as measured by a perceptual metric, showing that even a perceptually optimized version of the anti-forensic method fails to completely conceal the forgery. Our conclusion is that removing the traces of JPEG compression history could be much more challenging than it might appear, as anti-forensic methods are bound to leave characteristic traces. Index Terms—digital image forensics; anti-forensics; JPEG compression
Analysis of the DCT coefficient distributions for document coding
 in Proc. Digital Photography Conf.: PICS 2003
, 2003
Abstract

Cited by 7 (0 self)
Abstract—It is known that the distribution of the discrete cosine transform (DCT) coefficients of most natural images follows a Laplacian distribution, and this knowledge has been employed to improve decoder design. However, such is not the case for text documents. In this letter, we present an analysis of their DCT coefficient distributions and show that a Gaussian distribution can be a realistic model. Furthermore, we can use a generalized Gaussian model that also incorporates the Laplacian distribution found for natural images. Index Terms—Discrete cosine transform (DCT), document processing, image analysis, image coding, probability statistics.
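The model family in the abstract above has a standard closed form: the generalized Gaussian density contains both the Laplacian (shape 1, natural-image AC coefficients) and the Gaussian (shape 2, text documents) as special cases. A minimal sketch of that density, with a hypothetical (alpha, beta) parameterization:

```python
import math

def generalized_gaussian_pdf(x, alpha, beta):
    """Generalized Gaussian density with scale alpha and shape beta:
    p(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x|/alpha)**beta).
    beta = 1 recovers the Laplacian; beta = 2 recovers the Gaussian."""
    norm = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return norm * math.exp(-((abs(x) / alpha) ** beta))
```

Fitting beta to the observed coefficient histogram is what lets one model move between the natural-image and text-document regimes the abstract describes.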
Revealing the traces of JPEG compression anti-forensics
 IEEE Trans. Inf. Forensics Security
, 2013
Abstract

Cited by 7 (1 self)
Abstract—Due to the lossy nature of transform coding, JPEG introduces characteristic traces in the compressed images. A forensic analyst might reveal these traces by analyzing the histogram of discrete cosine transform (DCT) coefficients and exploit them to identify local tampering, copy-move forgery, etc. At the same time, it has recently been shown that a knowledgeable adversary can conceal the traces of JPEG compression by adding a dithering noise signal in the DCT domain, in order to restore the histogram of the original image. In this paper, we study the processing chain that arises in the case of JPEG compression anti-forensics. We take the perspective of the forensic analyst and show how it is possible to counter the aforementioned anti-forensic method, revealing the traces of JPEG compression regardless of the quantization matrix being used. Tests on a large image dataset demonstrate that the proposed detector achieves an average accuracy of 93%, rising above 99% when the case of nearly lossless JPEG compression is excluded. Index Terms—Anti-forensics, digital image forensics, JPEG compression.
BLIND PSNR ESTIMATION OF VIDEO SEQUENCES USING QUANTIZED DCT COEFFICIENT DATA
Abstract

Cited by 4 (1 self)
This paper proposes a no-reference PSNR estimation method for video sequences subject to lossy DCT-based encoding, such as MPEG-2 encoding. The proposed method is based on DCT coefficient statistics, which are modeled by Laplace probability density functions with parameter λ. The distribution’s parameter is computed from the received quantized data by combining maximum-likelihood and linear prediction estimates. The resulting coefficient distributions are then used to estimate the local error due to lossy encoding. Since no knowledge of the original (reference) sequences is required, the proposed method can be used as a no-reference metric for evaluating the quality of encoded video sequences. Index Terms—Image quality, no-reference metric, parameter estimation
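The two ingredients of the abstract above can be sketched simply: a Laplacian-λ estimate and the expected quantization error it implies. This is a rough illustration under stated simplifications, not the paper's estimator: the ML formula below treats the samples as continuous (the paper refines it for quantized data and combines it with linear prediction across frequencies), and the error term is computed by brute-force numeric integration.

```python
import math

def laplacian_lambda_ml(samples):
    """ML estimate of the Laplacian parameter for continuous samples:
    lambda = 1 / mean(|x|)."""
    return len(samples) / sum(abs(x) for x in samples)

def quantization_mse(lam, q, span_factor=20.0, grid=20000):
    """Expected squared error of a uniform mid-tread quantizer with step q
    for a Laplacian(lam) source, by numeric integration over a truncated
    support; this is the kind of 'local error due to lossy encoding' that
    feeds a blind PSNR estimate."""
    span = span_factor / lam
    dx = 2.0 * span / grid
    mse = 0.0
    for i in range(grid):
        x = -span + (i + 0.5) * dx
        x_hat = q * round(x / q)                 # reconstruction level
        p = 0.5 * lam * math.exp(-lam * abs(x))  # Laplacian density
        mse += (x - x_hat) ** 2 * p * dx
    return mse
```

Given a per-block MSE estimate of this kind, a PSNR figure follows as 10·log10(255²/MSE) without any access to the reference sequence.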
ML detection of steganography
 Proceedings SPIE, Electronic Imaging, Security, Steganography, and Watermarking of Multimedia Contents VII
, 2005
Abstract

Cited by 4 (0 self)
Digital steganography is the art of hiding information in multimedia content such that it remains perceptually and statistically unchanged. The detection of such covert communication is referred to as steganalysis. To date, steganalysis research has focused primarily on either the extraction of features from a document that are sensitive to the embedding, or the inference of some statistical difference between marked and unmarked objects. In this work, we evaluate the statistical limits of such techniques by developing asymptotically optimal (maximum likelihood) tests for a number of side-informed embedding schemes. The required probability density functions (pdfs) are derived for Dither Modulation (DM) and Distortion-Compensated Dither Modulation (DC-DM/SCS) from a steganalyst’s point of view. For both embedding techniques, the pdfs are derived in the presence and absence of a secret dither key. The resulting tests are then compared to a robust blind steganalytic test based on feature extraction. The performance of the tests is evaluated using an integral measure and receiver operating characteristic (ROC) curves.
Novel Variance Based Approach to Improving JPEG Decoding
 ICIT 2005, Proc. IEEE International Conference on Industrial Technology
, 2005
Abstract

Cited by 3 (0 self)
Abstract—For JPEG image compression, there exists a well-known approach to optimal reconstruction of quantized AC DCT coefficients in the JPEG decoder. However, the Laplacian distribution parameter must first be determined. To solve this problem, this paper proposes a practical, novel variance-based approach to estimating the Laplacian parameter from the quantized DCT coefficients, which are already available at the decoder side. Both theoretical analysis and extensive experimental results demonstrate that the proposed approach achieves almost the best decoded image quality improvement when compared with other approaches reported in the literature. More importantly, the proposed approach provides a more practical and efficient manner of estimating the Laplacian parameter, and is hence more suitable for real-world applications.
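The core relationship behind the abstract above is that a Laplacian source has variance 2/λ², so λ can be recovered from a variance estimate. The sketch below applies that identity directly to the dequantized coefficients; it is a rough illustration of the idea only, since the paper's contribution is precisely the correction of this estimate for the effect of quantization.

```python
import math

def laplacian_lambda_from_variance(dequantized):
    """Naive variance-based Laplacian parameter estimate: for a Laplacian
    source var = 2 / lambda**2, hence lambda = sqrt(2 / var). The sample
    variance is taken straight from the dequantized DCT coefficients,
    which are available at the decoder side; no access to the original
    image is needed."""
    n = len(dequantized)
    mean = sum(dequantized) / n
    var = sum((x - mean) ** 2 for x in dequantized) / n
    return math.sqrt(2.0 / var)
```

With λ in hand, the decoder can replace each bin-midpoint reconstruction by the Laplacian bin centroid, which is the "optimal reconstruction" the abstract refers to.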