Results 1 - 10 of 185
Lightcuts: a scalable approach to illumination
- ACM Transactions on Graphics (Proc. SIGGRAPH)
, 2005
"... Lightcuts is a scalable framework for computing realistic illumination. It handles arbitrary geometry, non-diffuse materials, and illumination from a wide variety of sources including point lights, area lights, HDR environment maps, sun/sky models, and indirect illumination. At its core is a new alg ..."
Abstract
-
Cited by 108 (17 self)
- Add to MetaCart
Lightcuts is a scalable framework for computing realistic illumination. It handles arbitrary geometry, non-diffuse materials, and illumination from a wide variety of sources including point lights, area lights, HDR environment maps, sun/sky models, and indirect illumination. At its core is a new algorithm for accurately approximating illumination from many point lights with a strongly sublinear cost. We show how a group of lights can be cheaply approximated while bounding the maximum approximation error. A binary light tree and perceptual metric are then used to adaptively partition the lights into groups to control the error vs. cost tradeoff. We also introduce reconstruction cuts that exploit spatial coherence to accelerate the generation of anti-aliased images with complex illumination. Results are demonstrated for five complex scenes and show that lightcuts can accurately approximate hundreds of thousands of point lights using only a few hundred shadow rays. Reconstruction cuts can reduce the number of shadow rays to tens.
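To make the cluster-refinement idea concrete, here is a minimal Python sketch of choosing a cut through a binary light tree: the cluster with the largest error bound is split first, until every cluster's bound falls below a relative threshold (roughly in the spirit of the perceptual relative-error criterion described in the paper). The LightNode class, the greedy tree construction, and the contribution/error_bound callables are illustrative assumptions, not the authors' implementation.

```python
import heapq
import itertools

_tie = itertools.count()  # tie-breaker so heapq never compares LightNode objects

class LightNode:
    """Cluster of point lights: summed intensity plus a representative light."""
    def __init__(self, intensity, position, left=None, right=None):
        self.intensity = intensity
        self.position = position
        self.left, self.right = left, right

    def is_leaf(self):
        return self.left is None and self.right is None

def build_light_tree(lights):
    """Pair lights/clusters until one root remains (the paper clusters by
    similarity; this greedy pairing is only a stand-in)."""
    nodes = [LightNode(i, p) for i, p in lights]
    while len(nodes) > 1:
        a, b = nodes.pop(), nodes.pop()
        rep = a.position if a.intensity >= b.intensity else b.position
        nodes.insert(0, LightNode(a.intensity + b.intensity, rep, a, b))
    return nodes[0]

def shade_with_cut(root, contribution, error_bound, rel_error=0.02):
    """Refine a cut through the light tree, always splitting the cluster with
    the largest error bound, until that bound drops below rel_error times the
    current illumination estimate. contribution(node) and error_bound(node)
    are per-shading-point callables supplied by the renderer."""
    cut = [(-error_bound(root), next(_tie), root)]
    total = contribution(root)
    while True:
        neg_err, _, node = heapq.heappop(cut)
        if node.is_leaf() or -neg_err <= rel_error * total:
            return total                      # worst cluster is good enough
        total -= contribution(node)           # replace the cluster by its children
        for child in (node.left, node.right):
            total += contribution(child)
            heapq.heappush(cut, (-error_bound(child), next(_tie), child))
```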
Imperfect shadow maps for efficient computation of indirect illumination
- ACM Trans. Graph. (Proc. SIGGRAPH Asia)
"... GTX. The scene is illuminated with a small spot light (upper right); all other illumination and shadowing is indirect (one bounce). We present a method for interactive computation of indirect illumination in large and fully dynamic scenes based on approximate visibility queries. While the high-frequ ..."
Abstract
-
Cited by 65 (15 self)
- Add to MetaCart
(Teaser figure: the scene is illuminated with a small spot light, upper right; all other illumination and shadowing is indirect, one bounce.) We present a method for interactive computation of indirect illumination in large and fully dynamic scenes based on approximate visibility queries. While the high-frequency nature of direct lighting requires accurate visibility, indirect illumination mostly consists of smooth gradations, which tend to mask errors due to incorrect visibility. We exploit this by approximating visibility for indirect illumination with imperfect shadow maps: low-resolution shadow maps rendered from a crude point-based representation of the scene. These are used in conjunction with a global illumination algorithm based on virtual point lights, enabling indirect illumination of dynamic scenes at real-time frame rates. We demonstrate that imperfect shadow maps are a valid approximation to visibility, which makes the simulation of global illumination an order of magnitude faster than using accurate visibility.
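As a rough illustration of the idea (not the paper's GPU implementation, which uses paraboloid maps and pull-push hole filling), the sketch below splats a sparse point sampling of the scene into a low-resolution depth map around one virtual point light and answers approximate visibility queries against it. The spherical parameterization, the resolution, and the bias value are simplifying assumptions.

```python
import numpy as np

def splat_imperfect_shadow_map(vpl_pos, scene_points, res=32):
    """Build a low-resolution omnidirectional depth map for one virtual point
    light (VPL) by splatting a sparse point sampling of the scene."""
    depth = np.full((res, res), np.inf)
    d = scene_points - vpl_pos                              # vectors VPL -> point
    r = np.linalg.norm(d, axis=1)
    theta = np.arccos(np.clip(d[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))
    phi = np.arctan2(d[:, 1], d[:, 0])                      # [-pi, pi]
    u = np.minimum((phi + np.pi) / (2 * np.pi) * res, res - 1).astype(int)
    v = np.minimum(theta / np.pi * res, res - 1).astype(int)
    np.minimum.at(depth, (v, u), r)                         # keep nearest splat per pixel
    return depth

def vpl_visible(vpl_pos, depth, query_point, res=32, bias=0.05):
    """Approximate visibility query against the imperfect shadow map."""
    d = query_point - vpl_pos
    r = np.linalg.norm(d)
    theta = np.arccos(np.clip(d[2] / max(r, 1e-9), -1.0, 1.0))
    phi = np.arctan2(d[1], d[0])
    u = min(int((phi + np.pi) / (2 * np.pi) * res), res - 1)
    v = min(int(theta / np.pi * res), res - 1)
    return r <= depth[v, u] + bias                          # unfilled pixels count as visible
```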
Far voxels: a multiresolution framework for interactive rendering of huge complex 3D models on commodity graphics platforms
- ACM Trans. Graph.
, 2005
"... We present an efficient approach for end-to-end out-of-core construction and interactive inspection of very large arbitrary surface models. The method tightly integrates visibility culling and outof-core data management with a level-of-detail framework. At preprocessing ..."
Abstract
-
Cited by 55 (3 self)
- Add to MetaCart
We present an efficient approach for end-to-end out-of-core construction and interactive inspection of very large arbitrary surface models. The method tightly integrates visibility culling and out-of-core data management with a level-of-detail framework. At preprocessing ...
GigaWalk: Interactive Walkthrough of Complex Environments
- Proc. of Eurographics Workshop on Rendering
, 2002
"... We present a new parallel algorithm for interactive walkthrough of complex, gigabyte-sized environments. Our approach combines occlusion culling and levelsof -detail and uses two graphics pipelines with one or more processors. We use a unified scene graph representation for multiple acceleration tec ..."
Abstract
-
Cited by 44 (7 self)
- Add to MetaCart
We present a new parallel algorithm for interactive walkthrough of complex, gigabyte-sized environments. Our approach combines occlusion culling and levels-of-detail and uses two graphics pipelines with one or more processors. We use a unified scene graph representation for multiple acceleration techniques, and we present novel algorithms for clustering geometry spatially, computing a scene graph hierarchy, performing conservative occlusion culling, and performing load-balancing between graphics pipelines and processors. The resulting system, GigaWalk, has been used to render CAD environments composed of tens of millions of polygons at interactive rates on an SGI Onyx system with two InfiniteReality rendering pipelines. Overall, our system's combination of levels-of-detail and occlusion culling techniques results in significant improvements in frame rate over view-frustum culling or either single technique alone.
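One ingredient, load-balancing visible geometry between two graphics pipelines, can be illustrated with a simple greedy partitioner. The cost model (polygon counts) and the node names below are hypothetical, and the paper's actual balancing strategy may differ.

```python
def balance_pipelines(visible_nodes, cost):
    """Greedy load-balancing between two graphics pipelines: assign each node
    (heaviest first) to the currently less loaded pipeline."""
    bins = ([], [])
    load = [0.0, 0.0]
    for node in sorted(visible_nodes, key=cost, reverse=True):
        i = 0 if load[0] <= load[1] else 1
        bins[i].append(node)
        load[i] += cost(node)
    return bins, load

# Hypothetical scene-graph nodes with polygon counts as the cost estimate:
nodes = [("engine", 4_200_000), ("hull", 2_900_000), ("piping", 1_500_000),
         ("deck", 900_000), ("railings", 400_000)]
(pipe_a, pipe_b), load = balance_pipelines(nodes, cost=lambda n: n[1])
print(pipe_a, pipe_b, load)
```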
Quick-VDR: Interactive view-dependent rendering of massive models
- IEEE Visualization
, 2004
"... We present a novel approach for interactive view-dependent rendering of massive models. Our algorithm combines view-dependent simplification, occlusion culling, and out-of-core rendering. We represent the model as a clustered hierarchy of progressive meshes (CHPM). We use the cluster hierarchy for c ..."
Abstract
-
Cited by 40 (8 self)
- Add to MetaCart
We present a novel approach for interactive view-dependent rendering of massive models. Our algorithm combines view-dependent simplification, occlusion culling, and out-of-core rendering. We represent the model as a clustered hierarchy of progressive meshes (CHPM). We use the cluster hierarchy for coarse-grained selective refinement and progressive meshes for fine-grained local refinement. We present an out-of-core algorithm for computation of a CHPM that includes cluster decomposition, hierarchy generation, and simplification. We make use of novel cluster dependencies in the preprocess to generate crack-free, drastic simplifications at runtime. The clusters are used for occlusion culling and out-of-core rendering. We add a frame of latency to the rendering pipeline to fetch newly visible clusters from the disk and to avoid stalls. The CHPM reduces the refinement cost for view-dependent rendering by more than an order of magnitude as compared to a vertex hierarchy. We have implemented our algorithm on a desktop PC. We can render massive CAD, isosurface, and scanned models, consisting of tens to a few hundreds of millions of triangles, at 10-35 frames per second with little loss in image quality.
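The coarse-grained part of the refinement, descending the cluster hierarchy until each cluster's view-dependent error is within a screen-space tolerance, might be sketched as follows. The Cluster dataclass and the scalar error field are simplifications: the real CHPM also carries per-cluster progressive meshes, cluster dependencies, and out-of-core fetching.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cluster:
    """A node of the clustered hierarchy of progressive meshes (CHPM).
    Only what is needed for coarse-grained refinement is modeled here."""
    error: float                         # view-dependent simplification error
    children: List["Cluster"] = field(default_factory=list)

def select_front(root: Cluster, tolerance: float) -> List[Cluster]:
    """Descend the hierarchy until each cluster's error is within tolerance;
    the returned 'front' is what would be locally refined and rendered."""
    stack, front = [root], []
    while stack:
        c = stack.pop()
        if not c.children or c.error <= tolerance:
            front.append(c)
        else:
            stack.extend(c.children)
    return front
```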
Exact From-Region Visibility Culling
, 2002
"... To pre-process a scene for the purpose of visibility culling during walkthroughs it is necessary to solve visibility from all the elements of a finite partition of viewpoint space. Many conservative and approximate solutions have been developed that solve for visibility rapidly. The idealised exac ..."
Abstract
-
Cited by 36 (1 self)
- Add to MetaCart
To pre-process a scene for the purpose of visibility culling during walkthroughs, it is necessary to solve visibility from all the elements of a finite partition of viewpoint space. Many conservative and approximate solutions have been developed that solve for visibility rapidly. The idealised exact solution for general 3D scenes has often been regarded as computationally intractable. Our exact algorithm for finding the visible polygons in a scene from a region is a computationally tractable pre-process that can handle scenes of the order of millions of polygons. The essence ...
Direct visibility of point sets
- ACM Trans. Graph
"... This paper proposes a simple and fast operator, the “Hidden ” Point Removal operator, which determines the visible points in a point cloud, as viewed from a given viewpoint. Visibility is determined without reconstructing a surface or estimating normals. It is shown that extracting the points that r ..."
Abstract
-
Cited by 29 (5 self)
- Add to MetaCart
This paper proposes a simple and fast operator, the “Hidden” Point Removal operator, which determines the visible points in a point cloud, as viewed from a given viewpoint. Visibility is determined without reconstructing a surface or estimating normals. It is shown that extracting the points that reside on the convex hull of a transformed point cloud amounts to determining the visible points. The operator is general: it can be applied to point clouds in various dimensions, to both sparse and dense point clouds, and to viewpoints internal as well as external to the cloud. It is demonstrated that the operator is useful in visualizing point clouds, in view-dependent reconstruction, and in shadow casting.
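A small sketch of the operator as the abstract describes it: points are spherically flipped about a sphere centred at the viewpoint, and points whose flipped images lie on the convex hull of the flipped cloud plus the viewpoint are reported visible. The choice of flipping radius (here a fixed multiple of the farthest point) is a simplification; the paper discusses how this parameter should be set.

```python
import numpy as np
from scipy.spatial import ConvexHull

def hidden_point_removal(points, viewpoint, radius_factor=100.0):
    """Return indices of points deemed visible from `viewpoint` via
    spherical flipping followed by a convex-hull test."""
    p = np.asarray(points, dtype=float) - np.asarray(viewpoint, dtype=float)
    norms = np.maximum(np.linalg.norm(p, axis=1, keepdims=True), 1e-12)
    R = radius_factor * norms.max()                   # radius of the flipping sphere
    flipped = p + 2.0 * (R - norms) * (p / norms)     # spherical flipping
    hull = ConvexHull(np.vstack([flipped, np.zeros(p.shape[1])]))
    visible = set(hull.vertices)
    visible.discard(len(p))                           # drop the appended viewpoint
    return sorted(visible)
```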
Hardware-Accelerated From-Region Visibility Using a Dual Ray Space
- In Rendering Techniques 2001: 12th Eurographics Workshop on Rendering
, 2001
"... In this paper a novel from-region visibility algorithm is described. Its unique properties allow conducting remote walkthroughs in very large virtual environments, without preprocessing and storing prohibitive amounts of visibility information. The algorithm retains its speed and accuracy even wh ..."
Abstract
-
Cited by 27 (2 self)
- Add to MetaCart
(Show Context)
In this paper a novel from-region visibility algorithm is described. Its unique properties allow conducting remote walkthroughs in very large virtual environments, without preprocessing and storing prohibitive amounts of visibility information. The algorithm retains its speed and accuracy even when applied to large viewcells. This allows computing from-region visibility on-line, thus eliminating the need for visibility preprocessing. The algorithm utilizes a geometric transform, representing visibility in a two-dimensional space, the dual ray space. Standard rendering hardware is then used for rapidly performing visibility computation. The algorithm is robust and easy to implement, and can trade off between accuracy and speed. We report results from extensive experiments that were conducted on a virtual environment that accurately depicts 160 square kilometers of the city of London.
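The dual-ray-space idea can be illustrated in flatland: a ray from the viewcell's boundary segment to a target segment is identified with the parameter pair (s, t) at which it meets the two segments, so blockers carve out regions in the unit square of (s, t) values. The brute-force grid below stands in for the hardware rasterization used in the paper, and the segment representation is an assumption.

```python
import numpy as np

def _segments_intersect(p1, p2, q1, q2):
    """Proper (non-degenerate) 2D segment-segment intersection test."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def dual_ray_space_visible(src_seg, dst_seg, blockers, res=64):
    """Mark, over the (s, t) unit square, which rays from the viewcell edge
    src_seg (parameter s) to the target segment dst_seg (parameter t) hit a
    blocker; the target is visible from the viewcell if any cell stays clear."""
    a0, a1 = (np.asarray(p, float) for p in src_seg)
    b0, b1 = (np.asarray(p, float) for p in dst_seg)
    blocked = np.zeros((res, res), dtype=bool)
    samples = (np.arange(res) + 0.5) / res
    for i, s in enumerate(samples):
        p = a0 + s * (a1 - a0)                  # ray origin on the viewcell edge
        for j, t in enumerate(samples):
            q = b0 + t * (b1 - b0)              # ray endpoint on the target
            blocked[i, j] = any(
                _segments_intersect(p, q, np.asarray(c0, float), np.asarray(c1, float))
                for c0, c1 in blockers)
    return not blocked.all()
```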
Perceptually-Driven Decision Theory for Interactive Realistic Rendering
- ACM Transactions on Graphics
, 2002
"... this paper we introduce a new approach to realistic rendering at interactive rates on commodity graphics hardware. The approach uses efficient perceptual metrics within a decision theoretic framework to optimally order rendering operations, producing images of the highest visual quality within syste ..."
Abstract
-
Cited by 26 (1 self)
- Add to MetaCart
In this paper we introduce a new approach to realistic rendering at interactive rates on commodity graphics hardware. The approach uses efficient perceptual metrics within a decision-theoretic framework to optimally order rendering operations, producing images of the highest visual quality within system constraints. We demonstrate the usefulness of this approach for various applications such as diffuse texture caching, environment map prioritization, and radiosity mesh simplification. Although here we address the problem of realistic rendering at interactive rates, the perceptually-based decision-theoretic methodology we introduce can be usefully applied in many areas of computer graphics.
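The ordering idea can be caricatured as a greedy benefit-per-cost schedule under a frame-time budget. The operation names, benefit numbers, and costs below are hypothetical stand-ins for the paper's perceptual metrics and system constraints.

```python
def order_operations(operations, time_budget):
    """Rank candidate rendering operations by estimated perceptual benefit per
    unit cost and perform them until the frame's time budget is exhausted."""
    ranked = sorted(operations, key=lambda op: op["benefit"] / op["cost"],
                    reverse=True)
    chosen, spent = [], 0.0
    for op in ranked:
        if spent + op["cost"] <= time_budget:
            chosen.append(op["name"])
            spent += op["cost"]
    return chosen, spent

# Hypothetical per-frame candidates (benefit in perceptual units, cost in ms):
ops = [
    {"name": "refresh diffuse texture", "benefit": 4.0, "cost": 2.0},
    {"name": "update environment map",  "benefit": 6.0, "cost": 5.0},
    {"name": "refine radiosity mesh",   "benefit": 1.5, "cost": 4.0},
]
print(order_operations(ops, time_budget=8.0))
```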
Interactive Visibility Culling in Complex Environments using Occlusion-Switches
- Proc. of ACM Symposium on Interactive 3D Graphics
, 2003
"... We present occlusion-switches for interactive visibility culling in complex 3D environments. An occlusionswitch consists of two GPUs (graphics processing units) and each GPU is used to either compute an occlusion representation or cull away primitives not visible from the current viewpoint. Moreover ..."
Abstract
-
Cited by 23 (6 self)
- Add to MetaCart
(Show Context)
We present occlusion-switches for interactive visibility culling in complex 3D environments. An occlusion-switch consists of two GPUs (graphics processing units); each GPU is used either to compute an occlusion representation or to cull away primitives not visible from the current viewpoint. Moreover, we switch the roles of each GPU between successive frames. The visible primitives are rendered in parallel on a third GPU. We utilize frame-to-frame coherence to lower the communication overhead between different GPUs and improve the overall performance. The overall visibility culling algorithm is conservative up to image-space precision. This algorithm has been combined with levels-of-detail and implemented on three networked PCs, each consisting of a single GPU. We highlight its performance on complex environments composed of tens of millions of triangles. In practice, it is able to render these environments at interactive rates with little loss in image quality.
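A toy sketch of the scheduling aspect only: each frame the two GPUs of the occlusion-switch swap roles (one builds the occlusion representation while the other culls against the representation built the previous frame), and a third GPU renders the visible set. The dictionary format and GPU labels are illustrative, not the system's actual interface.

```python
def occlusion_switch_schedule(num_frames):
    """Frame-by-frame role assignment in an occlusion-switch; the actual GPU
    work (occlusion rendering, culling, drawing) is not modeled here."""
    schedule = []
    for frame in range(num_frames):
        builder, culler = ("GPU0", "GPU1") if frame % 2 == 0 else ("GPU1", "GPU0")
        schedule.append({
            "frame": frame,
            builder: "build occlusion representation",
            culler: "cull primitives against frame %d occlusion" % max(frame - 1, 0),
            "GPU2": "render visible primitives",
        })
    return schedule

for entry in occlusion_switch_schedule(4):
    print(entry)
```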