Results 1 - 10 of 26
A perceptual framework for contrast processing of high dynamic range images
- In APGV '05: Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization, ACM, 2005
"... Image processing often involves an image transformation into a domain that is better correlated with visual perception, such as the wavelet domain, image pyramids, multi-scale contrast representations, contrast in retinex algorithms, and chroma, lightness and colorfulness predictors in color appeara ..."
Abstract
-
Cited by 61 (8 self)
- Add to MetaCart
Image processing often involves an image transformation into a domain that is better correlated with visual perception, such as the wavelet domain, image pyramids, multi-scale contrast representations, contrast in retinex algorithms, and chroma, lightness and colorfulness predictors in color appearance models. Many of these transformations are not ideally suited for image processing that significantly modifies an image. For example, the modification of a single band in a multi-scale model leads to an unrealistic image with severe halo artifacts. Inspired by gradient domain methods, we derive a framework that imposes constraints on the entire set of contrasts in an image for a full range of spatial frequencies. This way, even severe image modifications do not reverse the polarity of contrast. The strengths of the framework are demonstrated by aggressive contrast enhancement and a visually appealing tone mapping that does not introduce artifacts. Additionally, we perceptually linearize contrast magnitudes using a custom transducer function, derived specifically for HDR images from contrast discrimination measurements for high-contrast stimuli.
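To make the idea concrete, here is a minimal sketch of such a pipeline, under loose assumptions: a Gaussian low-pass stack stands in for the paper's full multi-scale contrast representation, and `transducer` is a placeholder power law rather than the function the authors fit to contrast discrimination data.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur, implemented directly to keep the sketch
    # dependency-free.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def multiscale_contrast(log_lum, n_levels=4):
    # Contrast at each scale: difference of log luminance between
    # neighbouring low-pass levels; their sum plus the coarsest level
    # reconstructs the input exactly.
    levels = [log_lum]
    for i in range(n_levels):
        levels.append(gaussian_blur(levels[-1], sigma=2.0 ** (i + 1)))
    return [levels[i] - levels[i + 1] for i in range(n_levels)], levels[-1]

def transducer(c, e=0.6):
    # Placeholder perceptual linearization (signed power law); the paper
    # instead fits a transducer to discrimination data for high-contrast
    # (HDR) stimuli.
    return np.sign(c) * np.abs(c) ** e

def inverse_transducer(r, e=0.6):
    return np.sign(r) * np.abs(r) ** (1.0 / e)

def enhance_contrast(log_lum, gain=1.5):
    # Scale all contrasts in the perceptually linear domain, then rebuild.
    contrasts, base = multiscale_contrast(log_lum)
    out = base.copy()
    for c in contrasts:
        out += inverse_transducer(gain * transducer(c))
    return out
```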
Perceptual illumination components: A new approach to efficient, high quality global illumination rendering
- ACM Transactions on Graphics, 2004
"... In this paper we introduce a new perceptual metric for efficient, high quality, global illumination rendering. The metric is based on a rendering-by-components framework in which the direct, and indirect diffuse, glossy, and specular light transport paths are separately computed and then composited ..."
Abstract
-
Cited by 41 (3 self)
- Add to MetaCart
(Show Context)
In this paper we introduce a new perceptual metric for efficient, high-quality global illumination rendering. The metric is based on a rendering-by-components framework in which the direct and the indirect diffuse, glossy, and specular light transport paths are separately computed and then composited to produce an image. The metric predicts the perceptual importances of the computationally expensive indirect illumination components with respect to image quality. To develop the metric we conducted a series of psychophysical experiments in which we measured and modeled the perceptual importances of the components. An important property of this new metric is that it predicts component importances from inexpensive estimates of the reflectance properties of a scene, and therefore adds negligible overhead to the rendering process. This perceptual metric should enable the development of an important new class of efficient global-illumination rendering systems that can intelligently allocate limited computational resources to provide high-quality images at interactive rates.
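A minimal sketch of the compositing-with-importance idea, with hypothetical weights: the paper's metric is fit to psychophysical measurements, whereas the heuristic below simply ranks indirect components by average scene reflectance.

```python
import numpy as np

def component_importance(mean_kd, mean_ks, mean_specular):
    # Hypothetical heuristic: an indirect component matters more when the
    # average reflectance feeding it is larger. These are cheap scene
    # statistics, so the test adds negligible overhead to rendering.
    return {
        "indirect_diffuse":  mean_kd,
        "indirect_glossy":   mean_ks,
        "indirect_specular": mean_specular,
    }

def composite(direct, indirect, importance, budget=2):
    # Spend a limited component budget on the most important indirect
    # components; the rest are omitted from the composite image.
    chosen = sorted(importance, key=importance.get, reverse=True)[:budget]
    image = direct.copy()
    for name in chosen:
        image += indirect[name]
    return image
```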
Perceptually driven 3D distance metrics with application to watermarking
2006
"... This paper presents an objective structural distortion measure which reflects the visual similarity between 3D meshes and thus can be used for quality assessment. The proposed tool is not linked to any specific application and thus can be used to evaluate any kinds of 3D mesh processing algorithms ( ..."
Abstract
-
Cited by 37 (18 self)
- Add to MetaCart
This paper presents an objective structural distortion measure which reflects the visual similarity between 3D meshes and thus can be used for quality assessment. The proposed tool is not linked to any specific application and thus can be used to evaluate any kind of 3D mesh processing algorithm (simplification, compression, watermarking, etc.). This measure follows the concept of structural similarity recently introduced for 2D image quality assessment by Wang et al. [1] and is based on curvature analysis (mean, standard deviation, covariance) on local windows of the meshes. Evaluation and comparison with geometric metrics are done through a subjective experiment based on human evaluation of a set of distorted objects. A quantitative perceptual metric is also derived from the proposed structural distortion measure, for the specific case of watermarking quality assessment, and is compared with recent state-of-the-art algorithms. Both visual and quantitative results demonstrate the robustness of our approach and its strong correlation with subjective ratings.
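A sketch of the curvature-statistics comparison in SSIM style, under stated assumptions: per-vertex curvature values and corresponding local windows are assumed precomputed, curvatures are taken as non-negative magnitudes, and the pooling weights are placeholders rather than the paper's fitted parameters.

```python
import numpy as np

def window_distortion(curv_a, curv_b, eps=1e-6):
    # curv_a, curv_b: non-negative curvature samples (e.g. mean-curvature
    # magnitudes) over corresponding local windows of the two meshes.
    mu_a, mu_b = curv_a.mean(), curv_b.mean()
    sd_a, sd_b = curv_a.std(), curv_b.std()
    cov = ((curv_a - mu_a) * (curv_b - mu_b)).mean()
    # Compare curvature level, spread, and structure, as SSIM does for
    # image luminance, contrast, and structure.
    l = abs(mu_a - mu_b) / (max(mu_a, mu_b) + eps)
    c = abs(sd_a - sd_b) / (max(sd_a, sd_b) + eps)
    s = abs(sd_a * sd_b - cov) / (sd_a * sd_b + eps)
    # Placeholder Minkowski pooling of the three terms.
    return (0.4 * l**3 + 0.4 * c**3 + 0.2 * s**3) ** (1.0 / 3.0)

def mesh_distortion(windows_a, windows_b):
    # Average the per-window distances over the whole mesh.
    return float(np.mean([window_distortion(a, b)
                          for a, b in zip(windows_a, windows_b)]))
```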
Visual Attention for Efficient High-Fidelity Graphics
"... High-fidelity rendering of complex scenes at interactive rates is one of the primary goals of computer graphics. Since high-fidelity rendering is computationally expensive, perceptual strategies such as visual attention have been explored to achieve this goal. In this paper we investigate how two mo ..."
Abstract
-
Cited by 28 (10 self)
- Add to MetaCart
High-fidelity rendering of complex scenes at interactive rates is one of the primary goals of computer graphics. Since high-fidelity rendering is computationally expensive, perceptual strategies such as visual attention have been explored to achieve this goal. In this paper we investigate how two models of human visual attention can be exploited in a selective rendering system. We examine their effects both individually and in combination, through psychophysical experiments measuring the savings in computation time while preserving the perceived visual quality for a task-related scene. We adapt the lighting simulation system Radiance to support selective rendering by introducing a selective guidance system which can exploit attentional processes using an importance map. Our experiments demonstrate that viewers performing a visual task within the environment consistently fail to notice the difference between high-quality and selectively rendered images, computed in a significantly reduced time.
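As an illustration, a combined importance map might be built as below; the weighted-sum combination and the ray-budget mapping are assumptions of this sketch, not the paper's evaluated models.

```python
import numpy as np

def importance_map(saliency, task_map, w_saliency=0.5, w_task=0.5):
    # Both inputs are per-pixel attention maps in [0, 1]; the output
    # drives how much rendering effort each pixel receives.
    return np.clip(w_saliency * saliency + w_task * task_map, 0.0, 1.0)

def rays_per_pixel(importance, min_rays=1, max_rays=16):
    # Map importance to a per-pixel sampling budget for the renderer.
    return (min_rays + importance * (max_rays - min_rays)).astype(int)
```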
Visual attention in 3D video games
- ACE '06 Proceedings, 2006
"... been a movement to use a perception-based rendering approach where the rendering process itself takes into account where the user is most likely looking (Haber et al. 2001). Examples include trying to achieve real-time global illumination by concentrating the global illumination calculation only in ..."
Abstract
-
Cited by 26 (0 self)
- Add to MetaCart
There has been a movement to use a perception-based rendering approach where the rendering process itself takes into account where the user is most likely looking (Haber et al. 2001). Examples include trying to achieve real-time global illumination by concentrating the global illumination calculation only in the parts of the scene that are salient (Myszkowski 2002). Video games have achieved a high degree of popularity because of such advances in computer graphics. These techniques are also important because they have enabled game environments to be used in applications such as health therapy and training. We believe that research on visual attention can further improve the design of game environments, thus decreasing frustration and increasing engagement. Many non-gamers get lost in 3D game environments, or they fail to pick up an important item because they do not notice it. Visual attention research results can be used ...
Top-Down Visual Attention for Efficient Rendering of Task Related Scenes
- In Vision, Modeling and Visualization, 2004
"... The perception of a virtual environment depends on the user and the task the user is currently performing in that environment. Models of the human visual system can thus be exploited to significantly reduce computational time when rendering high fidelity images, without compromising the perceived vi ..."
Abstract
-
Cited by 19 (7 self)
- Add to MetaCart
(Show Context)
The perception of a virtual environment depends on the user and the task the user is currently performing in that environment. Models of the human visual system can thus be exploited to significantly reduce computational time when rendering high-fidelity images, without compromising the perceived visual quality. This paper considers how an image can be selectively rendered when a user is performing a visual task in an environment. In particular, we investigate to what extent viewers fail to notice degradations in image quality between non-task-related and task-related areas when quality parameters such as image resolution, edge anti-aliasing, reflections, and shadows are altered.
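A sketch of a top-down task map of the kind such systems use: pixels within a foveal angle of a task-related object keep full quality, and quality falls off linearly outside it. The 2-degree fovea and the fall-off width are illustrative assumptions.

```python
import numpy as np

def task_map(pixel_angles_deg, fovea_deg=2.0, falloff_deg=8.0):
    # pixel_angles_deg: per-pixel visual angle to the nearest task object.
    # Returns 1 inside the foveal region, decaying to 0 far from the task.
    t = (pixel_angles_deg - fovea_deg) / falloff_deg
    return np.clip(1.0 - t, 0.0, 1.0)
```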
Perceptual rendering of participating media
- ACM Transactions on Applied Perception
"... High-fidelity image synthesis is the process of computing images that are perceptually indistinguishable from the real world they are attempting to portray. Such a level of fidelity requires that the physical processes of materials and the behavior of light are accurately simulated. Most computer gr ..."
Abstract
-
Cited by 13 (6 self)
- Add to MetaCart
High-fidelity image synthesis is the process of computing images that are perceptually indistinguishable from the real world they are attempting to portray. Such a level of fidelity requires that the physical processes of materials and the behavior of light are accurately simulated. Most computer graphics algorithms assume that light passes freely between surfaces within an environment. However, in many applications, we also need to take into account how the light interacts with media, such as dust, smoke, fog, etc., between the surfaces. The computational requirements for calculating the interaction of light with such participating media are substantial. This process can take many hours, and rendering effort is often spent on computing parts of the scene that may not be perceived by the viewer. In this paper, we present a novel perceptual strategy for physically based rendering of participating media. By using a combination of a saliency map with our new extinction map (X map), we can significantly reduce rendering times for inhomogeneous media. The visual quality of the resulting images is validated using two objective difference metrics and a subjective psychophysical experiment. Although the average pixel errors of these metrics are all less than 1%, the subjective validation indicates that the degradation in quality is still noticeable for certain scenes. We thus introduce and validate a novel light map (L map) that accounts for salient features caused by multiple light scattering around ...
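A sketch of what an extinction map could look like, assuming a simple ray-marched Beer-Lambert transmittance; the step size and thresholds are placeholders, and the saliency input is assumed computed elsewhere.

```python
import numpy as np

def transmittance(sigma_t_samples, step):
    # Beer-Lambert along one ray: T = exp(-sum(sigma_t) * ds).
    return np.exp(-np.sum(sigma_t_samples) * step)

def x_map(per_ray_sigma_t, step=0.1):
    # One transmittance value per pixel's primary ray through the medium.
    return np.array([transmittance(s, step) for s in per_ray_sigma_t])

def full_scattering_mask(xmap, saliency, t_thresh=0.95, s_thresh=0.5):
    # Compute costly multiple scattering only where the medium attenuates
    # noticeably (low transmittance) or the region is visually salient;
    # elsewhere a cheap estimate suffices.
    return (xmap < t_thresh) | (saliency > s_thresh)
```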
A local roughness measure for 3D meshes and its application to visual masking
- ACM Transactions on Applied Perception, 2009
"... 3D models are subject to a wide variety of processing operations such as compression, simplification or watermarking, which may introduce some geometric artifacts on the shape. The main issue is to maximize the compression/simplification ratio or the watermark strength while minimizing these visual ..."
Abstract
-
Cited by 10 (4 self)
- Add to MetaCart
3D models are subject to a wide variety of processing operations such as compression, simplification or watermarking, which may introduce some geometric artifacts on the shape. The main issue is to maximize the compression/simplification ratio or the watermark strength while minimizing these visual degradations. However, few algorithms exploit the human visual system to hide these degradations, although perceptual attributes could be quite relevant for this task. In particular, the masking effect refers to the fact that one visual pattern can reduce the visibility of another. In this context we introduce an algorithm for estimating the roughness of a 3D mesh, as a local measure of geometric noise on the surface. Indeed, a textured (or rough) region is able to hide geometric distortions much better than a smooth one. Our measure is based on curvature analysis on local windows of the mesh and is independent of the resolution/connectivity of the object. The accuracy and robustness of our measure, together with its relevance regarding visual masking, have been demonstrated through extensive comparisons with the state of the art and a subjective experiment. Two applications are also presented, in which the roughness measure is used to guide (and improve) compression and watermarking algorithms, respectively.
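A sketch of one plausible curvature-based roughness estimator (an assumption, not the paper's exact formulation): the spread of curvature inside a local window, which is large on textured regions and small on smooth ones.

```python
import numpy as np

def local_roughness(curvature, neighborhoods):
    # curvature: per-vertex curvature values; neighborhoods: per-vertex
    # lists of neighbouring vertex indices defining the local window.
    rough = np.empty(len(curvature))
    for v, nbrs in enumerate(neighborhoods):
        window = curvature[list(nbrs) + [v]]
        # High curvature spread marks a rough (masking) region that can
        # hide geometric distortion; low spread marks a smooth one.
        rough[v] = window.std()
    return rough
```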
Selective component-based rendering
- In Proceedings of GRAPHITE 2005, ACM SIGGRAPH, 2005
"... The computational requirements of full global illumination rendering are such that it is still not possible to achieve high-fidelity graphics of very complex scenes in a reasonable time on a single computer. By identifying which computations are more relevant to the desired quality of the solution, ..."
Abstract
-
Cited by 10 (2 self)
- Add to MetaCart
The computational requirements of full global illumination rendering are such that it is still not possible to achieve high-fidelity graphics of very complex scenes in a reasonable time on a single computer. By identifying which computations are more relevant to the desired quality of the solution, selective rendering can significantly reduce rendering times. In this paper we present a novel component-based selective rendering system in which the quality of every image, and indeed every pixel, can be controlled by means of a component regular expression (crex). The crex provides a flexible mechanism for controlling which components are rendered and in which order. It can be used as a strategy for directing the light transport within a scene and also in a progressive rendering framework. Furthermore, the crex can be combined with visual perception techniques to reduce rendering computation times without compromising the perceived visual quality. By means of a psychophysical experiment we demonstrate how the crex can be successfully used in such a perceptual rendering framework. In addition, we show how the crex’s flexibility enables it to be incorporated in a predictive framework for time-constrained rendering.
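To illustrate the mechanism, here is a toy crex interpreter; the single-letter component alphabet, the grammar, and the bounded expansion of '*' are assumptions of this sketch, not the paper's actual syntax.

```python
import re

COMPONENTS = {"D": "direct", "I": "indirect_diffuse",
              "G": "glossy", "S": "specular"}

def expand_crex(crex, repeats=2):
    # Tokens are single components or parenthesised groups, optionally
    # followed by '*'. Here '*' expands to a fixed count, standing in for
    # "refine progressively until the time budget runs out".
    order = []
    for group, letter, star in re.findall(
            r"(?:\(([DIGS]+)\)|([DIGS]))(\*?)", crex):
        seq = list(group or letter)
        for _ in range(repeats if star else 1):
            order.extend(COMPONENTS[c] for c in seq)
    return order

# expand_crex("D(IG)*S") ->
# ['direct', 'indirect_diffuse', 'glossy',
#  'indirect_diffuse', 'glossy', 'specular']
```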
Perceptual metrics for static and dynamic triangle meshes
- Eurographics, 2012
"... Almost all mesh processing procedures cause some more or less visible changes in the appearance of objects represented by polygonal meshes. In many cases, such as mesh watermarking, simplification or lossy compression, the objective is to make the change in appearance negligible, or as small as poss ..."
Abstract
-
Cited by 9 (3 self)
- Add to MetaCart
(Show Context)
Almost all mesh processing procedures cause some more or less visible changes in the appearance of objects represented by polygonal meshes. In many cases, such as mesh watermarking, simplification or lossy compression, the objective is to make the change in appearance negligible, or as small as possible, given some other constraints. Measuring the amount of distortion requires taking into account the final purpose of the data. In many applications, the final consumer of the data is a human observer, and therefore the perceptibility of the introduced change in appearance to a human observer should be the criterion taken into account when designing and configuring the processing algorithms. In this review, we discuss the existing comparison metrics for static and dynamic (animated) triangle meshes. We describe the concepts used in perception-oriented metrics for 2D image comparison, and we show how these concepts are employed in existing 3D mesh metrics. We describe the character of the subjective data used for evaluation of mesh metrics and provide comparison results identifying the advantages and drawbacks of each method. Finally, we discuss employing perception-correlated metrics in perception-oriented mesh processing algorithms.
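Evaluation of such metrics against subjective data usually reduces to rank correlation between metric scores and mean opinion scores (MOS); a minimal sketch (ignoring rank ties) follows.

```python
import numpy as np

def spearman(metric_scores, mos):
    # Spearman correlation = Pearson correlation of the ranks
    # (tie handling omitted for brevity).
    def ranks(x):
        order = np.argsort(x)
        r = np.empty(len(x))
        r[order] = np.arange(len(x))
        return r
    a = ranks(np.asarray(metric_scores, dtype=float))
    b = ranks(np.asarray(mos, dtype=float))
    return float(np.corrcoef(a, b)[0, 1])
```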