Results 1 - 10 of 111
Perceptual blur and ringing metrics: Application to JPEG2000, Signal Processing: Image Communication, 2004
"... We present a full- and no-reference blur metric as well as a full-reference ringing metric. These metrics are based on an analysis of the edges and adjacent regions in an image and have very low computational complexity. As blur and ringing are typical artifacts of wavelet compression, the metrics a ..."
Abstract
-
Cited by 65 (1 self)
- Add to MetaCart
(Show Context)
We present a full- and no-reference blur metric as well as a full-reference ringing metric. These metrics are based on an analysis of the edges and adjacent regions in an image and have very low computational complexity. As blur and ringing are typical artifacts of wavelet compression, the metrics are then applied to JPEG2000 coded images. Their perceptual significance is corroborated through a number of subjective experiments. The results show that the proposed metrics perform well over a wide range of image content and distortion levels. Potential applications include source coding optimization and network resource management. © 2003 Elsevier B.V. All rights reserved.
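As a rough illustration of the edge-analysis idea summarized above, the sketch below scores blur from the average width of intensity edges along image rows; the gradient threshold, the scan direction, and the width definition are placeholders, not the authors' metric.

```python
# Hedged sketch: a no-reference blur estimate from horizontal edge widths.
import numpy as np

def edge_width_blur(img, grad_thresh=20.0):
    """Average horizontal edge width (in pixels) of a grayscale image as a crude blur score."""
    img = np.asarray(img, dtype=float)
    widths = []
    for row in img:
        grad = np.diff(row)
        idx = np.where(np.abs(grad) > grad_thresh)[0]
        i = 0
        while i < len(idx):
            e = idx[i]
            left = e
            while left > 0 and grad[left - 1] * grad[e] > 0:      # walk to the start of the ramp
                left -= 1
            right = e
            while right < len(grad) - 1 and grad[right + 1] * grad[e] > 0:
                right += 1
            widths.append(right - left + 1)                        # edge spread in pixels
            while i < len(idx) and idx[i] <= right:                # skip samples on the same ramp
                i += 1
    return float(np.mean(widths)) if widths else 0.0
```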
An Evaluation of Multi-Resolution Search and Storage in Resource-Constrained Sensor Networks, 2003
"... Wireless sensor networks enable dense sensing of the environment, offering unprecedented opportunities for observing the physical world. Centralized data collection and analysis adversely impact sensor node lifetime. Previous sensor network research has, therefore, focused on in network aggregation ..."
Abstract
-
Cited by 27 (3 self)
- Add to MetaCart
Wireless sensor networks enable dense sensing of the environment, offering unprecedented opportunities for observing the physical world. Centralized data collection and analysis adversely impact sensor node lifetime. Previous sensor network research has, therefore, focused on in-network aggregation and query processing, but has done so for applications where the features of interest are known a priori. When features are not known a priori, as is the case with many scientific applications in dense sensor arrays, efficient support for multi-resolution storage and iterative, drill-down queries is essential.
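A minimal sketch of the drill-down idea, assuming multi-resolution summaries are simple 2x2 averages held in a pyramid; the data layout and threshold query are illustrative only and not the system evaluated in the paper.

```python
# Hedged sketch: coarse-to-fine drill-down over multi-resolution summaries.
import numpy as np

def build_pyramid(grid, levels=3):
    """Coarsen a 2-D sensor field by 2x2 averaging; pyramid[0] is the finest level."""
    pyramid = [np.asarray(grid, dtype=float)]
    for _ in range(levels):
        g = pyramid[-1]
        h, w = (g.shape[0] // 2) * 2, (g.shape[1] // 2) * 2
        pyramid.append(g[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyramid

def drill_down(pyramid, threshold):
    """Descend coarse-to-fine, expanding only cells whose summary exceeds the threshold."""
    coarsest = pyramid[-1]
    cells = [(r, c) for r in range(coarsest.shape[0])
                    for c in range(coarsest.shape[1]) if coarsest[r, c] > threshold]
    for level in range(len(pyramid) - 2, -1, -1):
        cells = [(rr, cc) for r, c in cells
                          for rr in (2 * r, 2 * r + 1)
                          for cc in (2 * c, 2 * c + 1)
                          if pyramid[level][rr, cc] > threshold]
    return cells  # fine-grid cells that satisfy the query
```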
A novel multiple description coding scheme compatible with the JPEG 2000 decoder, 2004
"... In this letter we propose a novel technique to generate rate-distortion optimized multiple descriptions of images, exploiting the rate-allocation strategy embedded in the JPEG 2000 encoder. The proposed scheme can be applied to any encoding algorithm, given that the rate allocation is based on cod ..."
Abstract
-
Cited by 21 (8 self)
- Add to MetaCart
In this letter we propose a novel technique to generate rate-distortion optimized multiple descriptions of images, exploiting the rate-allocation strategy embedded in the JPEG 2000 encoder. The proposed scheme can be applied to any encoding algorithm, provided that the rate allocation is based on code-block truncation. The method yields excellent performance in terms of both central and side distortion, outperforming state-of-the-art techniques. Moreover, the single-description decoding is fully compatible with the JPEG 2000 Part 1 decoder.
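A toy sketch of building two descriptions from independently truncatable code-block bitstreams; the alternating high/low assignment and the fixed truncation lengths stand in for the paper's rate-distortion optimization.

```python
# Hedged illustration: two balanced descriptions from truncatable code-blocks.
def make_descriptions(codeblocks, high_len, low_len):
    """codeblocks: list of byte strings, each independently truncatable by prefix length."""
    desc1, desc2 = [], []
    for i, cb in enumerate(codeblocks):
        hi, lo = cb[:high_len], cb[:low_len]
        if i % 2 == 0:
            desc1.append(hi)   # this block is refined in description 1 ...
            desc2.append(lo)   # ... and coarse in description 2
        else:
            desc1.append(lo)
            desc2.append(hi)
    return desc1, desc2

def central_decode(desc1, desc2):
    """When both descriptions arrive, keep the longer prefix of each code-block."""
    return [a if len(a) >= len(b) else b for a, b in zip(desc1, desc2)]
```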
Robust human face hiding ensuring privacy, in Proc. of International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), 2005
"... Nowadays, video surveillance of people must ensure privacy. In this paper, we propose a seamless solution to that problem by masking faces in video sequences, which keeps people anonymous. The system consists of two modules. First, an analysis module identifies and follows regions of interest (ROI&a ..."
Abstract
-
Cited by 19 (1 self)
- Add to MetaCart
(Show Context)
Nowadays, video surveillance of people must ensure privacy. In this paper, we propose a seamless solution to that problem by masking faces in video sequences, which keeps people anonymous. The system consists of two modules. First, an analysis module identifies and follows regions of interest (ROIs) where faces are detected. Second, the JPEG 2000 encoding module compresses the frames keeping the ROIs in a separate data layer, so that the correct rendering of human faces can be restricted. The analysis module combines two complementary methods: face detection, to locate faces in the image, and tracking, to follow them seamlessly over time. The fusion of these two methods increases robustness: once a face has been detected in a frame, tracking may locate it in the consecutive frames, even when a face detection algorithm would not. In addition, detection of faces prevents tracking from losing its targets. The encoding module downshifts the JPEG 2000 data corresponding to the identified ROIs to the lowest quality layer of the codestream. When the transmission bandwidth is limited, the human faces will then be decoded with a lower visual quality, up to invisibility when required. The proposed solution has been tested on different types of sequences. The results are presented in the paper.
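A small sketch of the detection/tracking fusion logic described above, with hypothetical detector and tracker outputs represented as (x, y, w, h) boxes; it is not the authors' implementation.

```python
# Hedged sketch: trust detections when present, keep tracked ROIs otherwise.
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def fuse(detections, tracked_rois, min_iou=0.3):
    """Keep every detection; retain tracked ROIs that no detection explains."""
    rois = list(detections)
    for t in tracked_rois:
        if all(iou(t, d) < min_iou for d in detections):
            rois.append(t)  # detector missed this face in the current frame; trust the tracker
    return rois
```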
A flexible and scalable authentication scheme for JPEG2000 image codestreams, in Proc. of 11th ACM Int. Conf. on Multimedia, 2003
"... JPEG2000 is an emerging standard for still image compression and is becoming the solution of choice for many digital imaging fields and applications. An important aspect of JPEG2000 is its “compress once, decompress many ways” property [1], i. e., it allows extraction of various sub-images (e.g., im ..."
Abstract
-
Cited by 12 (2 self)
- Add to MetaCart
JPEG2000 is an emerging standard for still image compression and is becoming the solution of choice for many digital imaging fields and applications. An important aspect of JPEG2000 is its “compress once, decompress many ways” property [1], i.e., it allows extraction of various sub-images (e.g., images with various resolutions, pixel fidelities, tiles and components) all from a single compressed image codestream. In this paper, we present a flexible and scalable authentication scheme for JPEG2000 images based on the Merkle hash tree and digital signature. Our scheme is fully compatible with JPEG2000 and possesses a “sign once, verify many ways” property. That is, it allows users to verify the authenticity and integrity of different sub-images extracted from a single compressed codestream protected with a single digital signature.
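A minimal sketch of the sign-once, verify-many-ways idea with a Merkle hash tree (SHA-256 assumed, leaves standing in for hashes of codestream units); the root would be covered by a single digital signature, which is omitted here, and the construction is generic rather than the authors' exact scheme.

```python
# Hedged sketch: Merkle tree over codestream units; any subset is verifiable from the signed root.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """leaves: list of byte strings (e.g., per-packet data)."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    level, path = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[index ^ 1])            # sibling of the current node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, index, path, root):
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root
```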
Region-based wavelet coding methods for digital mammography, IEEE Transactions on Medical Imaging, 2003
"... Abstract | Spatial resolution and contrast sensitivity re-quirements for some types of medical image techniques, in-cluding mammography, delay the implementation of new digital technologies, namely CAD, PACS or teleradiology. In order to reduce transmission time and storage cost, an eÆcient data com ..."
Abstract
-
Cited by 12 (0 self)
- Add to MetaCart
(Show Context)
Spatial resolution and contrast sensitivity requirements for some types of medical image techniques, including mammography, delay the implementation of new digital technologies, namely CAD, PACS or teleradiology. In order to reduce transmission time and storage cost, an efficient data compression scheme to reduce digital data without degradation of medical image quality is needed. In this study, we have applied two region-based compression methods to digital mammograms. In both methods, after segmenting the breast region, the Region-Based Discrete Wavelet Transform (RBDWT) is applied, followed by an Object-Based extension of the Set Partitioning In Hierarchical Trees (OB-SPIHT) coding algorithm in one method, and an Object-Based extension of the Set Partitioned Embedded bloCK (OB-SPECK) coding algorithm in the other. We have compared these specific implementations against the original SPIHT on five digital mammograms compressed at rates ranging from 0.1 to 1.0 bpp. Distortion was evaluated for all images and compression rates by the Peak Signal-to-Noise Ratio (PSNR). For all images, OB-SPIHT and OB-SPECK performed substantially better than the traditional SPIHT, and a slight difference in performance was found between them. For digital mammography, region-based compression methods represent an improvement in compression efficiency.
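For reference, a short example of the PSNR distortion measure used in the evaluation, assuming 8-bit images with a peak value of 255.

```python
# Hedged example: PSNR in dB between an original and a reconstructed image.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```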
JPEG vs. JPEG2000: An objective comparison of image encoding quality, Proceedings of SPIE Applications of Digital Image Processing, 2004
"... This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean ..."
Abstract
-
Cited by 12 (0 self)
- Add to MetaCart
This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
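A toy sketch of mapping artifact measurements to a MOS prediction; the linear form and the coefficients are placeholders and do not reproduce the paper's model or features.

```python
# Hedged sketch: combine artifact metrics into a clipped MOS estimate.
import numpy as np

def predict_mos(blockiness, blur, coeffs=(4.5, -1.2, -0.8)):
    """Toy predictor: MOS ~= c0 + c1*blockiness + c2*blur, clipped to the [1, 5] MOS scale."""
    c0, c1, c2 = coeffs
    return float(np.clip(c0 + c1 * blockiness + c2 * blur, 1.0, 5.0))
```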
A new family of spline-based biorthogonal wavelet transforms and their application to image compression, IEEE Transactions on Image Processing, 2004
"... Abstract—In this paper. we design a new family of biorthogonal wavelet transforms and describe their applications to still image compression. The wavelet transforms are constructed from various types of interpolatory and quasiinterpolatory splines. The transforms use finite impulse response and infi ..."
Abstract
-
Cited by 12 (6 self)
- Add to MetaCart
(Show Context)
In this paper, we design a new family of biorthogonal wavelet transforms and describe their applications to still image compression. The wavelet transforms are constructed from various types of interpolatory and quasi-interpolatory splines. The transforms use finite impulse response and infinite impulse response filters that are implemented in a fast lifting mode. Index Terms: image compression, lifting scheme, spline, wavelet transform.
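As a generic example of the lifting mode mentioned above, the sketch below implements one level of the LeGall 5/3 biorthogonal transform (a spline-based filter pair) as predict and update lifting steps; it is not the paper's new filter family, and boundary handling is simplified.

```python
# Hedged sketch: one level of the reversible 5/3 wavelet in lifting form.
import numpy as np

def dwt53_forward(x):
    """x: 1-D integer array of even length. Returns (lowpass, highpass)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    even_right = np.append(even[1:], even[-1])      # simplified boundary extension
    d = odd - ((even + even_right) >> 1)            # predict: detail coefficients
    d_left = np.insert(d[:-1], 0, d[0])             # simplified boundary extension
    s = even + ((d_left + d + 2) >> 2)              # update: smooth coefficients
    return s, d

def dwt53_inverse(s, d):
    d_left = np.insert(d[:-1], 0, d[0])
    even = s - ((d_left + d + 2) >> 2)              # undo update
    even_right = np.append(even[1:], even[-1])
    odd = d + ((even + even_right) >> 1)            # undo predict
    out = np.empty(even.size + odd.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out
```

Because each lifting step is inverted exactly, the integer transform is perfectly reversible regardless of rounding, which is the practical appeal of the lifting mode.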
Three-Dimensional Encoding/Two-Dimensional Decoding of Medical Data, IEEE Transactions on Medical Imaging, 2003
"... We propose a fully three-dimensional (3-D) wavelet-based coding system featuring 3-D encoding/two-dimensional (2-D) decoding functionalities. A fully 3-D transform is combined with context adaptive arithmetic coding; 2-D decoding is enabled by encoding every 2-D subband image independently. The syst ..."
Abstract
-
Cited by 9 (0 self)
- Add to MetaCart
We propose a fully three-dimensional (3-D) wavelet-based coding system featuring 3-D encoding/two-dimensional (2-D) decoding functionalities. A fully 3-D transform is combined with context-adaptive arithmetic coding; 2-D decoding is enabled by encoding every 2-D subband image independently. The system allows finely graded, up-to-lossless quality scalability on any 2-D image of the dataset. Fast access to 2-D images is obtained by decoding only the corresponding information, thus avoiding the reconstruction of the entire volume. The performance has been evaluated on a set of volumetric data and compared to that provided by other 3-D as well as 2-D coding systems. Results show a substantial improvement in coding efficiency (up to 33%) on volumes featuring good correlation properties along the axis. Even though we did not address the complexity issue, we expect a decoding time on the order of one second per image after optimization. In summary, the proposed 3-D/2-D multidimensional layered zero coding system provides the improvement in compression efficiency attainable with 3-D systems without sacrificing the effectiveness in accessing single images that is characteristic of 2-D ones.
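A minimal sketch of the 3-D encode / 2-D decode access pattern, assuming a one-level Haar transform along the slice axis: each transformed 2-D image could be coded independently, so reconstructing one original slice touches only two of them. This illustrates the access pattern, not the proposed codec.

```python
# Hedged sketch: axial Haar transform with per-slice independent access.
import numpy as np

def encode_axial_haar(volume):
    """volume: (Z, Y, X) array with even Z. Returns low/high subband stacks of 2-D images."""
    v = np.asarray(volume, dtype=float)
    low  = (v[0::2] + v[1::2]) / np.sqrt(2.0)
    high = (v[0::2] - v[1::2]) / np.sqrt(2.0)
    return low, high              # each low[k], high[k] would be coded independently

def decode_slice(low, high, z):
    """Reconstruct original slice z from just two 2-D subband images."""
    k = z // 2
    even = (low[k] + high[k]) / np.sqrt(2.0)
    odd  = (low[k] - high[k]) / np.sqrt(2.0)
    return even if z % 2 == 0 else odd
```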