Results

**1–7** of **7**

### Cell-based 2-Step Scalar Deadzone Quantization for High Bit-Depth Hyperspectral Image Coding

"... Abstract—Remote sensing images often need to be coded and/or transmitted with constrained computational resources. Among other features, such images commonly have high spatial, spectral, and bit-depth resolution, which may render difficult their handling. This paper introduces an embedded quantizati ..."

Abstract
- Add to MetaCart

(Show Context)
Abstract—Remote sensing images often need to be coded and/or transmitted with constrained computational resources. Among other features, such images commonly have high spatial, spectral, and bit-depth resolution, which may make them difficult to handle. This paper introduces an embedded quantization scheme based on 2-step scalar deadzone quantization (2SDQ) that enhances the quality of transmitted images when coded with a constrained number of bits. The proposed scheme is devised for use in JPEG2000. It is named cell-based 2SDQ since it uses cells, i.e., small sets of wavelet coefficients within the codeblocks defined by JPEG2000. Cells permit a finer discrimination of the coefficients to which the proposed quantizer is applied. Experimental results indicate that the proposed scheme is especially beneficial for high bit-depth hyperspectral images. Index Terms—Embedded quantization, 2-step scalar deadzone quantization, high bit-depth images, JPEG2000.
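The cells mentioned in the abstract are small sets of wavelet coefficients inside a JPEG2000 codeblock. A minimal sketch of such a partition, assuming 4×4 cells purely for illustration (the paper's actual cell geometry is not given here):

```python
# Illustrative sketch: split a codeblock of wavelet coefficients into
# small rectangular cells. The 4x4 cell size is an assumption for
# illustration, not a parameter taken from the paper.

def partition_into_cells(codeblock, cell_h=4, cell_w=4):
    """Split a 2-D list (codeblock) into a list of cell_h x cell_w cells,
    scanned left-to-right, top-to-bottom."""
    rows, cols = len(codeblock), len(codeblock[0])
    cells = []
    for r in range(0, rows, cell_h):
        for c in range(0, cols, cell_w):
            cell = [row[c:c + cell_w] for row in codeblock[r:r + cell_h]]
            cells.append(cell)
    return cells

# Example: an 8x8 codeblock yields four 4x4 cells.
block = [[r * 8 + c for c in range(8)] for r in range(8)]
cells = partition_into_cells(block)
```

A per-cell statistic (e.g., maximum magnitude) could then decide which quantizer variant each cell receives, which is the "finer discrimination" the abstract refers to.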

### General Embedded Quantization for Wavelet-Based Lossy Image Coding

"... Abstract—Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach to do so in the context of wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with b ..."

Abstract
- Add to MetaCart

(Show Context)
Abstract—Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach in the context of wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance, but the quantizer established by this scheme does not allow major variations. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. GEQ schemes can be devised for specific decoding rates, achieving optimal coding performance. Practical GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance achieved by GEQ is evaluated through experimental results obtained in the framework of modern image coding systems. Index Terms—General embedded quantization, lossy image coding, JPEG 2000.
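The USDQ+BPC baseline that GEQ generalizes is standard: a deadzone quantizer whose zero bin is twice as wide as the others, followed by MSB-first emission of the index's bitplanes. A minimal sketch:

```python
import math

def usdq_index(coeff, step):
    """Uniform scalar deadzone quantization: index = sign(c) * floor(|c|/step),
    so the zero bin (deadzone) spans (-step, step), twice the other bins."""
    sign = -1 if coeff < 0 else 1
    return sign * math.floor(abs(coeff) / step)

def bitplanes(index, num_planes):
    """Bits of |index| from most to least significant bitplane, in the
    order a bitplane coder would visit them (sign coded separately)."""
    mag = abs(index)
    return [(mag >> p) & 1 for p in range(num_planes - 1, -1, -1)]

q = usdq_index(-7.3, 2.0)   # index -3
bits = bitplanes(q, 4)      # MSB-first bits of |q| = 3
```

Truncating the emitted bitplanes at any point yields a coarser reconstruction, which is what makes the quantization embedded.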

### 2-Step Scalar Deadzone Quantization for Bitplane Image Coding

"... Abstract—Modern lossy image coding systems generate a quality progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) togethe ..."

Abstract
- Add to MetaCart

(Show Context)
Abstract—Modern lossy image coding systems generate a quality-progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the number of coding passes and the symbols emitted by the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indices. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000. Index Terms—2-step scalar deadzone quantization, general embedded quantization, bitplane image coding, JPEG2000.
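The two-step idea can be sketched as follows, under illustrative assumptions: coefficients whose magnitude falls below some threshold (the dense, low-magnitude region) are quantized with a fine step, and the rest with a coarse step. The threshold and step values here are stand-ins, not the paper's parameters:

```python
import math

def two_step_index(coeff, fine_step, coarse_step, threshold):
    """Hypothetical 2-step deadzone quantizer sketch: a fine step below
    `threshold`, a coarse step above it. Not the paper's exact rule."""
    sign = -1 if coeff < 0 else 1
    mag = abs(coeff)
    if mag < threshold:
        return sign * math.floor(mag / fine_step)
    # Offset coarse indices so they continue after the fine-range bins.
    fine_bins = math.floor(threshold / fine_step)
    return sign * (fine_bins + math.floor((mag - threshold) / coarse_step))

fine = two_step_index(3.0, 1.0, 4.0, 8.0)     # low magnitude, fine step
coarse = two_step_index(10.0, 1.0, 4.0, 8.0)  # high magnitude, coarse step
```

Because large coefficients consume far fewer index bins than under a single fine step, the bitplane coder needs fewer passes and emits fewer symbols, which is the saving the abstract describes.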

### Stationary Probability Model for Microscopic Parallelism in JPEG2000

"... Abstract—Parallel processing is key to augmenting the throughput of image codecs. Despite numerous efforts to paral-lelize wavelet-based image coding systems, most attempts fail at the parallelization of the bitplane coding engine, which is the most computationally intensive stage of the coding pipe ..."

Abstract
- Add to MetaCart

(Show Context)
Abstract—Parallel processing is key to augmenting the throughput of image codecs. Despite numerous efforts to parallelize wavelet-based image coding systems, most attempts fail at the parallelization of the bitplane coding engine, which is the most computationally intensive stage of the coding pipeline. The main reason for this failure is the causality with which current coding strategies are devised, which assumes that coefficients are coded one after another. This work analyzes the mechanisms employed in bitplane coding and proposes alternatives to enhance opportunities for parallelism. We describe a stationary probability model that, without sacrificing the advantages of current approaches, removes the main obstacle to the parallelization of most coding strategies. Experimental tests evaluate the coding performance achieved by the proposed method in the framework of JPEG2000 when coding different types of images. Results indicate that the stationary probability model achieves similar coding performance, with slight increments or decrements depending on the image type and the desired level of parallelism. Index Terms—Parallel architectures, JPEG2000, probability models, bitplane image coding.
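The causality obstacle can be illustrated with a sketch (the probability values below are made up for illustration): an adaptive model's estimate for symbol n depends on symbols 0..n−1, forcing sequential coding, whereas a stationary model's per-context probabilities are fixed in advance, so any symbol can be coded without waiting for the others:

```python
# Illustrative contrast between adaptive and stationary probability models.
# The context->probability table is a made-up example, not the paper's model.

STATIONARY_P1 = {0: 0.1, 1: 0.3, 2: 0.7}   # context -> P(bit = 1), fixed

def stationary_prob(context):
    """No mutable state: the estimate is independent of coding order,
    which is what enables parallel coding of many coefficients."""
    return STATIONARY_P1[context]

class AdaptiveModel:
    """Classic adaptive estimate: counts are updated after every coded bit,
    so the probability used for symbol n depends on symbols 0..n-1."""
    def __init__(self):
        self.ones = 1      # Laplace-style initial counts
        self.total = 2
    def prob(self):
        return self.ones / self.total
    def update(self, bit):
        self.ones += bit
        self.total += 1
```

With the stationary table, two processing units coding different coefficients never need to exchange model state, removing the dependency the abstract identifies.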

### Bitplane Image Coding with Parallel Coefficient Processing

"... Abstract—Image coding systems have been traditionally tai-lored for Multiple Instruction, Multiple Data (MIMD) computing. In general, they partition the (transformed) image in codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to ..."

Abstract
- Add to MetaCart

(Show Context)
Abstract—Image coding systems have traditionally been tailored for Multiple Instruction, Multiple Data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded on the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in its codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy to code such data, and most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is mainly based on the Single Instruction, Multiple Data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors due to their inherently sequential coding tasks. This paper presents bitplane image coding with parallel coefficient processing (BPC-PaCo), a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. Experimental results suggest that the penalty in coding performance of BPC-PaCo with respect to traditional strategies is almost negligible. Index Terms—Bitplane image coding, Single Instruction Multiple Data (SIMD), JPEG2000.
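The lockstep idea can be sketched by extracting one bitplane of every coefficient of a codeblock with a single uniform operation (list comprehensions here stand in for SIMD lanes; this is an illustration of the principle, not the BPC-PaCo algorithm):

```python
# Sketch of data-parallel bitplane processing: the same operation is
# applied to every coefficient magnitude at once, plane by plane.

def significance_plane(magnitudes, plane):
    """Extract bit `plane` of every coefficient magnitude in one
    uniform step (one 'instruction' over all 'lanes')."""
    return [(m >> plane) & 1 for m in magnitudes]

def code_codeblock(magnitudes, num_planes):
    """Visit bitplanes MSB-first; each plane is one lockstep pass over
    the whole codeblock rather than a per-coefficient loop."""
    return [significance_plane(magnitudes, p)
            for p in range(num_planes - 1, -1, -1)]

planes = code_codeblock([5, 2, 7, 0], 3)
```

On a real SIMD device each list comprehension would map to one vector instruction over the codeblock, which is why the per-coefficient sequential dependencies of classic bitplane coders have to be removed first.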

### Entropy-based Evaluation of Context Models for Wavelet-transformed Images

"... Abstract—Entropy is a measure of a message uncertainty. Among others aspects, it serves to determine the minimum coding rate that practical systems may attain. This work defines an entropy-based measure to evaluate context models employed in wavelet-based image coding. The proposed measure is define ..."

Abstract
- Add to MetaCart

(Show Context)
Abstract—Entropy is a measure of a message's uncertainty. Among other aspects, it serves to determine the minimum coding rate that practical systems may attain. This work defines an entropy-based measure to evaluate context models employed in wavelet-based image coding. The proposed measure is defined considering the mechanisms utilized by modern coding systems. It establishes the maximum performance achievable with each context model, which helps to determine the adequacy of the model under different coding conditions and serves to predict with high precision the coding rate achieved by practical systems. Experimental results evaluate four well-known context models using different types of images, coding rates, and transform strategies. They reveal that, under specific coding conditions, some widespread context models may not be as adequate as generally thought. The hints provided by this analysis may help to design simpler and more efficient wavelet-based image codecs. Index Terms—Context models, image entropy, wavelet transform, bitplane image coding, JPEG2000.
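The core quantity behind such an evaluation is the conditional entropy H(bit | context), estimated from observed (context, bit) pairs: it lower-bounds the rate, in bits per symbol, that any coder driven by that context model could reach. A minimal sketch (the exact measure defined in the paper also accounts for the codec's coding mechanisms, which this omits):

```python
import math
from collections import Counter

def conditional_entropy(samples):
    """Empirical H(bit | context) in bits/symbol, from a list of
    (context, bit) pairs: -sum over pairs of p(c,b) * log2 p(b|c)."""
    ctx_counts = Counter(c for c, _ in samples)
    pair_counts = Counter(samples)
    n = len(samples)
    h = 0.0
    for (c, b), k in pair_counts.items():
        p_pair = k / n                    # p(c, b)
        p_bit_given_ctx = k / ctx_counts[c]  # p(b | c)
        h -= p_pair * math.log2(p_bit_given_ctx)
    return h

samples = [(0, 0), (0, 0), (0, 1), (1, 1)]
h = conditional_entropy(samples)
```

Comparing this bound across candidate context models, for the same coefficient data, indicates which model captures more of the statistical structure before any coder is even run.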

### Context-adaptive Binary Arithmetic Coding with Fixed-length Codewords

"... Abstract—Context-adaptive binary arithmetic coding is a widespread technique in the field of image and video coding. Most state-of-the-art arithmetic coders produce a (long) codeword of a priori unknown length. Its generation requires a renormalization procedure to permit progressive processing. Thi ..."

Abstract
- Add to MetaCart

(Show Context)
Abstract—Context-adaptive binary arithmetic coding is a widespread technique in the field of image and video coding. Most state-of-the-art arithmetic coders produce a (long) codeword of a priori unknown length, whose generation requires a renormalization procedure to permit progressive processing. This paper introduces two arithmetic coders that produce multiple codewords of fixed length. Contrary to the traditional approach, the generation of fixed-length codewords does not require renormalization since the whole interval arithmetic is stored in the coder's internal registers. The proposed coders employ a new context-adaptive mechanism based on a variable-size sliding window that estimates with high precision the probability of the coded symbols. Their integration in coding systems is straightforward, as demonstrated within the framework of JPEG2000. Experimental tests indicate that the proposed coders are computationally simpler than the MQ coder of JPEG2000 and the M coder of HEVC while achieving superior coding efficiency. Index Terms—Context-adaptive binary arithmetic coding, fixed-length arithmetic codes.
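A sliding-window probability estimate, in its simplest form, takes P(1) as the fraction of ones among the last few coded bits, so the estimate tracks local statistics. The sketch below assumes a fixed maximum window and a flat prior; the coders in the abstract use a variable-size window whose adaptation policy is not reproduced here:

```python
from collections import deque

class SlidingWindowEstimator:
    """Illustrative sliding-window probability model: P(1) is the fraction
    of ones among the most recent `max_window` coded bits."""
    def __init__(self, max_window=32):
        self.bits = deque(maxlen=max_window)  # oldest bits drop out
    def prob_one(self):
        if not self.bits:
            return 0.5          # uninformed prior before any bit is seen
        return sum(self.bits) / len(self.bits)
    def update(self, bit):
        self.bits.append(bit)

est = SlidingWindowEstimator(max_window=4)
for b in (1, 1, 0, 1):
    est.update(b)
```

A shorter window reacts quickly to changing statistics at the cost of a noisier estimate; making the window size itself adaptive, as the abstract describes, trades between the two.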