Results 1 - 10 of 283
Reduced-reference image quality assessment using divisive . . .
2009
"... Reduced-reference image quality assessment (RRIQA) methods estimate image quality degradations with partial information about the “perfect-quality” reference image. In this paper, we propose an RRIQA algorithm based on a divisive normalization image representation. Divisive normalization has been r ..."
Cited by 82 (9 self)
Reduced-reference image quality assessment (RRIQA) methods estimate image quality degradations with partial information about the “perfect-quality” reference image. In this paper, we propose an RRIQA algorithm based on a divisive normalization image representation. Divisive normalization has been recognized as a successful approach to model the perceptual sensitivity of biological vision. It also provides a useful image representation that significantly improves statistical independence for natural images. By using a Gaussian scale mixture statistical model of image wavelet coefficients, we compute a divisive normalization transformation (DNT) for images and evaluate the quality of a distorted image by comparing a set of reduced-reference statistical features extracted from DNT-domain representations of the reference and distorted images, respectively. This leads to a generic or general-purpose RRIQA method, in which no assumption is made about the types of distortions occurring in the image being evaluated. The proposed algorithm is cross-validated using two publicly-accessible subject-rated image databases (the UT-Austin LIVE database and the Cornell-VCL A57 database) and demonstrates good performance across a wide range of image distortions.
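As a rough illustration of the idea described in this abstract (not the authors' GSM-based DNT or their exact feature set), the sketch below divisively normalizes a crude band-pass response by a local energy estimate and compares reference and distorted images through histograms of the normalized coefficients, which play the role of the reduced-reference features.

```python
# Simplified sketch of reduced-reference comparison in a divisive-normalization
# domain. The band-pass filter, window size, and KL-divergence feature distance
# are illustrative choices, not the algorithm from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def dnt(image, sigma=2.0, c=1e-3):
    """Divide a crude band-pass response by a local energy estimate."""
    band = image - gaussian_filter(image, sigma)
    energy = np.sqrt(gaussian_filter(band ** 2, sigma) + c)
    return band / energy

def rr_features(image, bins=64, lim=5.0):
    """Reduced-reference feature: histogram of DNT-domain coefficients."""
    hist, _ = np.histogram(dnt(image), bins=bins, range=(-lim, lim))
    hist = hist.astype(float) + 1e-12
    return hist / hist.sum()

def rr_distance(ref_feat, dist_feat):
    """Kullback-Leibler divergence between reference and distorted features."""
    return float(np.sum(ref_feat * np.log(ref_feat / dist_feat)))

# Larger distance suggests stronger statistical degradation.
ref = np.random.rand(256, 256)
dist = ref + 0.1 * np.random.randn(256, 256)
print(rr_distance(rr_features(ref), rr_features(dist)))
```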
FSIM: A Feature Similarity Index for Image Quality Assessment
"... Image quality assessment (IQA) aims to use computational models to measure the image quality consistently with subjective evaluations. The well-known structural-similarity (SSIM) index brings IQA from pixel-based stage to structure-based stage. In this paper, a novel feature-similarity (FSIM) index ..."
Cited by 77 (15 self)
Image quality assessment (IQA) aims to use computational models to measure image quality consistently with subjective evaluations. The well-known structural similarity (SSIM) index brings IQA from the pixel-based stage to the structure-based stage. In this paper, a novel feature similarity (FSIM) index for full-reference IQA is proposed, based on the fact that the human visual system (HVS) understands an image mainly according to its low-level features. Specifically, the phase congruency (PC), which is a dimensionless measure of the significance of a local structure, is used as the primary feature in FSIM. Considering that PC is contrast invariant while the contrast information does affect the HVS’s perception of image quality, the image gradient magnitude (GM) is employed as the secondary feature in FSIM. PC and GM play complementary roles in characterizing the image local quality. After obtaining the local quality map, we use PC again as a weighting function to derive a single quality score. Extensive experiments performed on six benchmark IQA databases demonstrate that FSIM can achieve much higher consistency with the subjective evaluations than state-of-the-art IQA metrics.
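A compact sketch of the similarity-and-pooling structure described above, assuming the phase congruency maps pc_ref and pc_dist are already computed by some PC routine (not shown); the constants t1 and t2 are placeholder stabilizers rather than values taken from the paper.

```python
# FSIM-style local similarity and PC-weighted pooling (illustrative sketch).
import numpy as np
from scipy.ndimage import sobel

def gradient_magnitude(img):
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    return np.hypot(gx, gy)

def fsim_like(ref, dist, pc_ref, pc_dist, t1=0.85, t2=160.0):
    """pc_ref / pc_dist: precomputed phase-congruency maps (assumed given)."""
    g1, g2 = gradient_magnitude(ref), gradient_magnitude(dist)
    s_pc = (2 * pc_ref * pc_dist + t1) / (pc_ref ** 2 + pc_dist ** 2 + t1)
    s_g = (2 * g1 * g2 + t2) / (g1 ** 2 + g2 ** 2 + t2)
    s_local = s_pc * s_g                    # combined local quality map
    weight = np.maximum(pc_ref, pc_dist)    # PC reused as the pooling weight
    return float(np.sum(s_local * weight) / np.sum(weight))
```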
Sustainability science
2001
"... No-training, no-reference image quality index using perceptual features ..."
Cited by 77 (4 self)
No-training, no-reference image quality index using perceptual features
Study of subjective and objective quality assessment of video
IEEE Trans. Image Process., 2010
"... Abstract—We present the results of a recent large-scale sub-jective study of video quality on a collection of videos distorted by a variety of application-relevant processes. Methods to as-sess the visual quality of digital videos as perceived by human observers are becoming increasingly important, ..."
Cited by 75 (18 self)
Abstract—We present the results of a recent large-scale subjective study of video quality on a collection of videos distorted by a variety of application-relevant processes. Methods to assess the visual quality of digital videos as perceived by human observers are becoming increasingly important, due to the large number of applications that target humans as the end users of video. Owing to the many approaches to video quality assessment (VQA) that are being developed, there is a need for a diverse, independent public database of distorted videos and subjective scores that is freely available. The resulting Laboratory for Image and Video Engineering (LIVE) Video Quality Database contains 150 distorted videos (obtained from ten uncompressed reference videos of natural scenes) that were created using four different commonly encountered distortion types. Each video was assessed by 38 human subjects, and the difference mean opinion scores (DMOS) were recorded. We also evaluated the performance of several state-of-the-art, publicly available full-reference VQA algorithms on the new database. A statistical evaluation of the relative performance of these algorithms is also presented. The database has a dedicated web presence that will be maintained as long as it remains relevant, and the data is available online. Index Terms—Full reference, human visual system, LIVE video quality database, perceptual quality assessment, video quality, visual perception.
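The database is typically used in the way the final sentences describe: objective VQA scores are compared against the recorded DMOS values. A minimal sketch of that comparison, with made-up score arrays standing in for real data, might look like this.

```python
# Benchmarking an objective VQA model against subjective DMOS scores with
# rank and linear correlation (the score arrays below are illustrative only).
import numpy as np
from scipy.stats import spearmanr, pearsonr

dmos = np.array([55.2, 30.1, 72.8, 41.5, 63.0])        # subjective distortion scores
predicted = np.array([0.42, 0.81, 0.20, 0.66, 0.35])   # objective quality scores

# DMOS grows with distortion, so a good quality predictor correlates
# negatively with it; the magnitude of the correlation is what matters.
srocc, _ = spearmanr(predicted, dmos)   # monotonic agreement
plcc, _ = pearsonr(predicted, dmos)     # linear agreement (often after fitting)
print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}")
```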
Motion Tuned Spatio-temporal Quality Assessment of Natural Videos
IEEE Transactions on Image Processing, 2010
"... There has recently been a great deal of interest in the development of algorithms that objectively measure the integrity of video signals. Since video signals are being delivered to human end users in an increasingly wide array of applications and products, it is important that automatic methods of ..."
Cited by 74 (7 self)
There has recently been a great deal of interest in the development of algorithms that objectively measure the integrity of video signals. Since video signals are being delivered to human end users in an increasingly wide array of applications and products, it is important that automatic methods of video quality assessment (VQA) be available that can assist in controlling the quality of video being delivered to this critical audience. Naturally, the quality of motion representation in videos plays an important role in the perception of video quality, yet existing VQA algorithms make little direct use of motion information, thus limiting their effectiveness. We seek to ameliorate this by developing a general, spatio-spectrally localized multiscale framework for evaluating dynamic video fidelity that integrates both spatial and temporal (and spatio-temporal) aspects of distortion assessment. Video quality is evaluated not only in space and time, but also in space-time, by evaluating motion quality along computed motion trajectories. Using this framework, we develop a full reference VQA algorithm for which we coin the term the MOtion-based Video Integrity Evaluation index, or MOVIE index. It is found that the MOVIE index delivers VQA scores that correlate quite closely with human subjective judgment, using the Video Quality Expert Group (VQEG) FRTV Phase 1 database as a test bed. Indeed, the MOVIE index is found to be quite competitive with, and even outperform, algorithms developed and submitted to the VQEG FRTV Phase 1 study, as well as more recent VQA algorithms tested on this database.
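The following is not the MOVIE index itself, but the core notion of evaluating fidelity along computed motion trajectories can be sketched with a generic optical-flow estimator (OpenCV's Farneback flow is assumed here) and a motion-compensated error between the two videos.

```python
# Illustrative motion-compensated ("along-trajectory") error between a
# reference and a distorted video; NOT the MOVIE index. Requires OpenCV.
import cv2
import numpy as np

def trajectory_mse(ref_frames, dist_frames):
    """ref_frames / dist_frames: lists of 8-bit grayscale frames, same size."""
    errs = []
    for t in range(1, len(ref_frames)):
        # Motion of the reference video between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(ref_frames[t - 1], ref_frames[t],
                                            None, 0.5, 3, 15, 3, 5, 1.2, 0)
        h, w = ref_frames[t].shape
        gx, gy = np.meshgrid(np.arange(w), np.arange(h))
        map_x = (gx + flow[..., 0]).astype(np.float32)
        map_y = (gy + flow[..., 1]).astype(np.float32)
        # Sample frame t of both videos along the reference trajectories,
        # then compare the motion-aligned samples.
        ref_warp = cv2.remap(ref_frames[t], map_x, map_y, cv2.INTER_LINEAR)
        dist_warp = cv2.remap(dist_frames[t], map_x, map_y, cv2.INTER_LINEAR)
        errs.append(np.mean((ref_warp.astype(np.float32) -
                             dist_warp.astype(np.float32)) ** 2))
    return float(np.mean(errs))
```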
Information content weighting for perceptual image quality assessment
IEEE Trans. Image Processing, 2011
"... Abstract—Many state-of-the-art perceptual image quality as-sessment (IQA) algorithms share a common two-stage structure: local quality/distortion measurement followed by pooling. While significant progress has been made in measuring local image quality/distortion, the pooling stage is often done in ..."
Cited by 71 (16 self)
Abstract—Many state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage structure: local quality/distortion measurement followed by pooling. While significant progress has been made in measuring local image quality/distortion, the pooling stage is often done in ad hoc ways, lacking theoretical principles and reliable computational models. This paper aims to test the hypothesis that when viewing natural images, the optimal perceptual weights for pooling should be proportional to local information content, which can be estimated in units of bits using advanced statistical models of natural images. Our extensive studies based upon six publicly available subject-rated image databases concluded with three useful findings. First, information content weighting leads to consistent improvement in the performance of IQA algorithms. Second, surprisingly, with information content weighting, even the widely criticized peak signal-to-noise ratio can be converted to a competitive perceptual quality measure when compared with state-of-the-art algorithms. Third, the best overall performance is achieved by combining information content weighting with multiscale structural similarity measures. Index Terms—Gaussian scale mixture (GSM), image quality assessment (IQA), pooling, information content measure, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), statistical image modeling.
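The pooling rule being tested can be sketched as follows; the local-variance weight below is a crude stand-in for the paper's GSM-based information estimate, with sigma_n acting as an assumed noise parameter.

```python
# Simplified information-content-weighted pooling of a local quality map
# (e.g., a per-pixel SSIM or local-PSNR map). Illustrative weight only.
import numpy as np
from scipy.ndimage import uniform_filter

def info_weight(img, win=11, sigma_n=0.1):
    """Bits-like local information proxy from local signal variance."""
    mu = uniform_filter(img, win)
    var = np.maximum(uniform_filter(img ** 2, win) - mu ** 2, 0.0)
    return np.log2(1.0 + var / sigma_n ** 2)

def iw_pool(quality_map, ref_img, dist_img):
    # Weight each local quality value by information in both images.
    w = info_weight(ref_img) + info_weight(dist_img)
    return float(np.sum(w * quality_map) / np.sum(w))
```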
Video Quality Assessment Using a Statistical Model of Human Visual Speed Perception
"... Motion is one of the most important types of information contained in natu-ral video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to in-corporate a recent model of human visual speed perception [Stocker & ..."
Cited by 52 (7 self)
Motion is one of the most important types of information contained in natural video, but direct use of motion information in the design of video quality assessment algorithms has not been deeply investigated. Here we propose to incorporate a recent model of human visual speed perception [Stocker & Simoncelli, Nature Neuroscience 9, 578-585 (2006)] and model visual perception in an information communication framework. This allows us to estimate both the motion information content and the perceptual uncertainty in video signals. Improved video quality assessment algorithms are obtained by incorporating the model as spatiotemporal weighting factors, where the weight increases with the information content and decreases with the perceptual uncertainty. Consistent improvement over existing video quality assessment algorithms is observed in our validation with the Video Quality Experts Group Phase I test data set.
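The weighting rule stated above can be expressed directly, leaving the speed-perception model itself out of scope: given per-pixel information-content and uncertainty maps (assumed precomputed from that model), a local quality map is pooled with weights that grow with the former and shrink with the latter.

```python
# Sketch of the pooling rule only; how `info` and `uncertainty` are derived
# from the visual speed-perception model is not reproduced here, and the
# particular monotone combination below is an assumption for illustration.
import numpy as np

def perceptual_weight(info, uncertainty):
    return np.maximum(info - uncertainty, 0.0)   # one simple monotone choice

def weighted_quality(local_quality, info, uncertainty, eps=1e-12):
    w = perceptual_weight(info, uncertainty)
    return float(np.sum(w * local_quality) / (np.sum(w) + eps))
```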
Blind image quality assessment: A natural scene statistics approach in the DCT domain
IEEE Trans. on Image Processing
"... Abstract — We develop an efficient general-purpose blind/ no-reference image quality assessment (IQA) algorithm using a natural scene statistics (NSS) model of discrete cosine transform (DCT) coefficients. The algorithm is computationally appealing, given the availability of platforms optimized for ..."
Cited by 45 (13 self)
Abstract—We develop an efficient general-purpose blind/no-reference image quality assessment (IQA) algorithm using a natural scene statistics (NSS) model of discrete cosine transform (DCT) coefficients. The algorithm is computationally appealing, given the availability of platforms optimized for DCT computation. The approach relies on a simple Bayesian inference model to predict image quality scores given certain extracted features. The features are based on an NSS model of the image DCT coefficients. The estimated parameters of the model are utilized to form features that are indicative of perceptual quality. These features are used in a simple Bayesian inference approach to predict quality scores. The resulting algorithm, which we name BLIINDS-II, requires minimal training and adopts a simple probabilistic model for score prediction. Given the extracted features from a test image, the quality score that maximizes the probability of the empirically determined inference model is chosen as the predicted quality score of that image. When tested on the LIVE IQA database, BLIINDS-II is shown to correlate highly with human judgments of quality, at a level that is competitive with the popular SSIM index. Index Terms—Discrete cosine transform (DCT), generalized Gaussian density, natural scene statistics, no-reference image quality assessment.
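One DCT-domain NSS feature of the kind described can be sketched by fitting the shape parameter of a generalized Gaussian to block-DCT coefficients via moment matching; this illustrates the feature family only, not the BLIINDS-II feature set or its Bayesian score predictor.

```python
# Fit a generalized Gaussian shape parameter to 8x8 block-DCT coefficients
# by matching the ratio E[|x|]^2 / E[x^2] against its closed form (grid search).
import numpy as np
from scipy.fft import dct
from scipy.special import gamma

def block_dct_coeffs(img, b=8):
    h, w = (np.array(img.shape) // b) * b
    blocks = img[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
    coeffs = dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')
    return coeffs[..., 1:, 1:].ravel()          # drop the DC row/column

def ggd_shape(x, shapes=np.linspace(0.2, 3.0, 281)):
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    target = gamma(2.0 / shapes) ** 2 / (gamma(1.0 / shapes) * gamma(3.0 / shapes))
    return float(shapes[np.argmin(np.abs(target - rho))])

img = np.random.rand(256, 256)
print("GGD shape parameter:", ggd_shape(block_dct_coeffs(img)))
```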
Visual Importance Pooling for Image Quality Assessment
IEEE Journal of Selected Topics in Signal Processing, 2009
"... Abstract—Recent image quality assessment (IQA) metrics achieve high correlation with human perception of image quality. Naturally, it is of interest to produce even better results. One promising method is to weight image quality measurements by visual importance. To this end, we describe two strateg ..."
Cited by 39 (8 self)
Abstract—Recent image quality assessment (IQA) metrics achieve high correlation with human perception of image quality. Naturally, it is of interest to produce even better results. One promising method is to weight image quality measurements by visual importance. To this end, we describe two strategies—visual fixation-based weighting and quality-based weighting. By contrast with some prior studies, we find that these strategies can improve the correlations with subjective judgment significantly. We demonstrate improvements on the SSIM index in both its multiscale and single-scale versions, using the LIVE database as a test bed. Index Terms—Image quality assessment (IQA), quality-based weighting, structural similarity, subjective quality assessment, visual fixations.
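Quality-based weighting can be as simple as pooling only the worst fraction of a local quality map (e.g., a per-pixel SSIM map), since the worst regions dominate perceived quality; the sketch below uses an arbitrary percentile, not the paper's tuned setting, and does not cover the fixation-based strategy.

```python
# Percentile ("worst-case") pooling of a local quality map as one instance of
# quality-based weighting. The 6% cutoff is an illustrative choice.
import numpy as np

def percentile_pool(quality_map, p=6.0):
    """Average the lowest p percent of local quality scores."""
    scores = np.sort(quality_map.ravel())
    k = max(1, int(len(scores) * p / 100.0))
    return float(scores[:k].mean())
```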