Results 1–10 of 17
A survey of the marching cubes algorithm
2006
Cited by 45 (0 self)
A survey of the development of the marching cubes algorithm [W. Lorensen, H. Cline, Marching cubes: a high resolution 3D surface construction algorithm. Computer Graphics 1987; 21(4):163–9], a well-known cell-by-cell method for extraction of isosurfaces from scalar volumetric data sets, is presented. The paper’s primary aim is to survey the development of the algorithm and its computational properties, extensions, and limitations (including the attempts to resolve its limitations). A rich body of publications related to this aim is included. Representative applications and spin-off work are also considered, and related techniques are briefly discussed.
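The per-cell step at the core of marching cubes can be sketched as follows. This is a minimal illustration of the case-index computation and edge interpolation only; the full algorithm also needs the 256-entry triangle lookup table, which is omitted here, and the function names are illustrative rather than taken from the original paper.

```python
# Minimal sketch of the marching cubes per-cell classification step.
# The full algorithm consults a 256-entry triangle table (omitted here);
# this shows only the cube-index computation and edge interpolation.

def cube_index(corner_values, isovalue):
    """Build the 8-bit case index: bit i is set when corner i is inside."""
    index = 0
    for i, v in enumerate(corner_values):
        if v < isovalue:
            index |= 1 << i
    return index

def interpolate_vertex(p1, p2, v1, v2, isovalue):
    """Linearly interpolate the isosurface crossing along one cube edge."""
    t = (isovalue - v1) / (v2 - v1)
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))

# Example: one cell with a single corner below the isovalue.
values = [50, 120, 130, 110, 140, 125, 135, 115]
print(cube_index(values, 100))                      # only corner 0 -> 1
print(interpolate_vertex((0, 0, 0), (1, 0, 0), 50, 120, 100))
```

The case index then selects which of the 15 (up to symmetry) triangulation patterns applies to the cell.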
Streaming simplification of tetrahedral meshes
IEEE Transactions on Visualization and Computer Graphics, 2005
Cited by 18 (6 self)
Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish the real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory. Index Terms—Computational geometry and object modeling, out-of-core algorithms, streaming algorithms, mesh simplification, large meshes, tetrahedral meshes.
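The quadric-based cost that drives this style of simplification can be sketched in pure Python. This is a generic Garland-Heckbert-style illustration of the error metric itself, not the paper's tetrahedral implementation.

```python
# Sketch of the quadric error metric used in quadric-based simplification:
# each plane a*x + b*y + c*z + d = 0 contributes the outer product p*p^T,
# and the cost of placing a vertex at v is v^T Q v (homogeneous coordinates).

def plane_quadric(a, b, c, d):
    """4x4 quadric for one plane with (ideally unit) normal (a, b, c)."""
    p = (a, b, c, d)
    return [[p[i] * p[j] for j in range(4)] for i in range(4)]

def add_quadrics(q1, q2):
    """Quadrics of incident planes sum into a per-vertex quadric."""
    return [[q1[i][j] + q2[i][j] for j in range(4)] for i in range(4)]

def vertex_error(q, x, y, z):
    """Sum of squared distances to all accumulated planes."""
    v = (x, y, z, 1.0)
    return sum(v[i] * q[i][j] * v[j] for i in range(4) for j in range(4))

# A vertex on both planes has zero error; moving off a plane costs its
# squared distance (for unit normals).
q = add_quadrics(plane_quadric(1, 0, 0, 0), plane_quadric(0, 1, 0, 0))
print(vertex_error(q, 0.0, 0.0, 5.0))   # on planes x=0 and y=0 -> 0.0
print(vertex_error(q, 2.0, 0.0, 5.0))   # distance 2 from x=0 -> 4.0
```

During simplification, each edge collapse is ranked by the minimum of this error over candidate positions, and the cheapest collapses are applied first.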
Interactive Isosurface Ray Tracing of Large Octree Volumes
In Proceedings of the 2006 IEEE Symposium on Interactive Ray Tracing, 2006
Cited by 18 (9 self)
Figure 1: Large volume data ray traced at 512² using octrees for compression and acceleration. From left to right: (1) LLNL Richtmyer-Meshkov instability field (shown at timestep 270, with an isovalue of 100). (2) Closer view of the previous scene. (3) Utah C-SAFE heptane simulation (timestep 152, isovalue 42). Data is losslessly compressed into an octree volume to occupy less than one quarter the size of the original 3D array. Our approach permits storage of large data such as the LLNL simulation, and full sequences of medium-size data such as the heptane, in main memory of consumer machines. Frame rates on an Intel Core Duo 2.16 GHz laptop with 2 GB RAM are 2.4, 1.3, and 3.3 fps respectively. On a 16-node NUMA 2.4 GHz Opteron workstation, these images render at 17.9, 9.8, and 22.0 fps. We present a technique for ray tracing isosurfaces of large compressed structured volumes. Data is first converted into a lossless-compression octree representation that occupies a fraction of the original memory footprint. An isosurface is then dynamically rendered by tracing rays through a min/max hierarchy inside interior octree nodes. By embedding the acceleration tree and scalar data in a single structure and employing optimized octree hash schemes, we achieve competitive frame rates on common multi-core architectures, and render large time-variant data that could not otherwise be accommodated.
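The min/max hierarchy idea can be sketched as follows, shown here as active-cell collection rather than a full ray traversal; the dict-based octree and the names are illustrative only.

```python
# Sketch of the min/max hierarchy behind octree isosurface traversal:
# interior nodes store the scalar range of their subtree, so entire
# octants whose [min, max] excludes the isovalue are skipped.

def cell_range(data, i, j, k):
    """Min/max of the 8 samples at the corners of cell (i, j, k)."""
    corners = [data[i + a][j + b][k + c]
               for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    return min(corners), max(corners)

def build(data, x, y, z, size):
    """Min/max octree over a size^3 block of cells (size a power of two)."""
    if size == 1:
        lo, hi = cell_range(data, x, y, z)
        return {"min": lo, "max": hi, "cell": (x, y, z), "kids": []}
    h = size // 2
    kids = [build(data, x + dx, y + dy, z + dz, h)
            for dx in (0, h) for dy in (0, h) for dz in (0, h)]
    return {"min": min(k["min"] for k in kids),
            "max": max(k["max"] for k in kids),
            "cell": None, "kids": kids}

def active_cells(node, iso):
    """Cells whose range straddles iso; excluded octants are never visited."""
    if iso < node["min"] or iso > node["max"]:
        return []
    if node["cell"] is not None:
        return [node["cell"]]
    return [c for k in node["kids"] for c in active_cells(k, iso)]

# 3x3x3 sample grid -> 2x2x2 cells; data[i][j][k] = i + j + k.
data = [[[i + j + k for k in range(3)] for j in range(3)] for i in range(3)]
root = build(data, 0, 0, 0, 2)
print(sorted(active_cells(root, 0.5)))   # only the corner cell: [(0, 0, 0)]
print(len(active_cells(root, 3.5)))      # 7 of the 8 cells straddle 3.5
```

A ray tracer applies the same pruning test per ray, descending only into octants the ray intersects whose range contains the isovalue.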
OutofCore and Compressed Level Set Methods
Cited by 9 (2 self)
This article presents a generic framework for the representation and deformation of level set surfaces at extreme resolutions. The framework is composed of two modules that each utilize optimized and application-specific algorithms: 1) a fast out-of-core data management scheme that allows for resolutions of the deforming geometry limited only by the available disk space as opposed to memory, and 2) compact and fast compression strategies that reduce both offline storage requirements and online memory footprints during simulation. Out-of-core and compression techniques have been applied to a wide range of computer graphics problems in recent years, but this article is the first to apply them in the context of level set and fluid simulations. Our framework is generic and flexible in the sense that the two modules can transparently be integrated, separately or in any combination, into existing level set and fluid simulation software based on recently proposed narrow band data structures like the DT-Grid of Nielsen and Museth [2006] and the H-RLE of Houston et al. [2006]. The framework can be applied to narrow band signed distances, fluid velocities, scalar fields, and particle properties, as well as standard graphics attributes like colors, texture coordinates, normals, and displacements. In fact, our framework is applicable to a large body of computer graphics problems that involve sequential or random access to very large codimension one (level set) and zero (e.g. fluid) data sets. We demonstrate this with several applications, including fluid simulations interacting with large boundaries (≈ 1500³), surface deformations (≈ 2048³), the solution of partial differential equations on large surfaces (≈ 4096³), and mesh-to-level-set scan conversions of resolutions up to ≈ 35000³ (7 billion voxels in the narrow band). Our out-of-core framework is shown to be several times faster than current state-of-the-art level set data structures relying on OS paging. In particular, we show sustained throughput (grid points/sec) for gigabyte-sized level sets as high as 65% of state-of-the-art throughput for in-core simulations. We also demonstrate that our compression techniques outperform state-of-the-art …
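The narrow-band storage idea underlying such structures can be sketched with a plain dict; the actual DT-Grid and H-RLE layouts are far more compact and elaborate than this illustration.

```python
# Sketch of narrow-band level set storage: instead of a dense N^3 array,
# keep signed distances only for voxels within a small band of the surface.
# Real structures (DT-Grid, H-RLE) use run-length/coordinate compression;
# a dict suffices to show the sparsity argument.

import math

def sphere_narrow_band(n, center, radius, band):
    """Signed distance to a sphere, stored only where |phi| <= band."""
    phi = {}
    for i in range(n):
        for j in range(n):
            for k in range(n):
                d = math.dist((i, j, k), center) - radius
                if abs(d) <= band:
                    phi[(i, j, k)] = d
    return phi

n = 32
phi = sphere_narrow_band(n, (16, 16, 16), 10.0, 2.0)
print(len(phi), "band voxels vs", n ** 3, "dense voxels")
```

Memory then scales with the surface area of the interface rather than the volume of the grid, which is what makes resolutions like 35000³ feasible.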
Isosurface extraction and spatial filtering using persistent octree
In IEEE Transactions on Visualization and Computer Graphics, 2006
Cited by 6 (1 self)
We propose a novel Persistent OcTree (POT) indexing structure for accelerating isosurface extraction and spatial filtering from volumetric data. This data structure efficiently handles a wide range of visualization problems such as the generation of view-dependent isosurfaces, ray tracing, and isocontour slicing for high dimensional data. POT can be viewed as a hybrid data structure between the interval tree and the Branch-On-Need Octree (BONO) in the sense that it achieves the asymptotic bound of the interval tree for identifying the active cells corresponding to an isosurface and is more efficient than BONO for handling spatial queries. We encode a compact octree for each isovalue. Each such octree contains only the corresponding active cells, in such a way that the combined structure has linear space. The inherent hierarchical structure associated with the active cells enables very fast filtering of the active cells based on spatial constraints. We demonstrate the effectiveness of our approach by performing view-dependent isosurfacing on a wide variety of volumetric data sets and 4D isocontour slicing on the time-varying Richtmyer-Meshkov instability dataset. Index Terms—scientific visualization, isosurface extraction, indexing.
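The interval-tree view of active-cell search that POT builds on can be sketched as follows: each cell contributes the interval [min, max] of its corner values, and the cells active at isovalue v are exactly the intervals stabbed by v. This is a generic centered interval tree, not the POT structure itself.

```python
# Sketch of interval stabbing for active-cell search: a centered interval
# tree reports all intervals containing v in O(log n + k) time, which is
# the asymptotic bound the abstract refers to.

def build_tree(intervals):
    """Centered interval tree over a list of (lo, hi) pairs."""
    if not intervals:
        return None
    xs = sorted(x for lo, hi in intervals for x in (lo, hi))
    mid = xs[len(xs) // 2]
    left = [iv for iv in intervals if iv[1] < mid]
    right = [iv for iv in intervals if iv[0] > mid]
    center = [iv for iv in intervals if iv[0] <= mid <= iv[1]]
    return {"mid": mid,
            "by_lo": sorted(center),                         # ascending lo
            "by_hi": sorted(center, key=lambda iv: -iv[1]),  # descending hi
            "left": build_tree(left), "right": build_tree(right)}

def stab(node, v):
    """All intervals (lo, hi) with lo <= v <= hi."""
    if node is None:
        return []
    out = []
    if v < node["mid"]:
        for iv in node["by_lo"]:     # centered intervals, scanned by low end
            if iv[0] > v:
                break
            out.append(iv)
        out += stab(node["left"], v)
    else:
        for iv in node["by_hi"]:     # centered intervals, scanned by high end
            if iv[1] < v:
                break
            out.append(iv)
        out += stab(node["right"], v)
    return out

cells = [(0, 3), (1, 4), (2, 5), (6, 9), (7, 8)]
print(sorted(stab(build_tree(cells), 2)))   # -> [(0, 3), (1, 4), (2, 5)]
```

POT combines this output-sensitive query with the spatial hierarchy of an octree, so the reported active cells come back already organized for spatial filtering.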
Fine-grained visualization pipelines and lazy functional languages
IEEE Transactions on Visualization and Computer Graphics, 2006
Efficient isosurface extraction for large scale time-varying data using the Persistent HyperOcTree (PHOT)
2006
Cited by 4 (3 self)
We introduce the Persistent HyperOcTree (PHOT) to handle the 4D isocontouring problem for large scale time-varying data sets. This novel data structure is provably space efficient and optimal in retrieving active cells. More importantly, the set of active cells for any possible isovalue is already organized in a Compact Hyperoctree, which enables very efficient slicing of the isocontour along spatial and temporal dimensions. Experimental results based on the very large Richtmyer-Meshkov instability data set demonstrate the effectiveness of our approach. This technique can also be used for other isosurfacing schemes such as view-dependent isosurfacing and ray tracing, which will benefit from the inherent hierarchical structure associated with the active cells.
Compression and Streaming of Polygon Meshes
2005
Cited by 3 (0 self)
Polygon meshes provide a simple way to represent three-dimensional surfaces and are the de facto standard for interactive visualization of geometric models. Storing large polygon meshes in standard indexed formats results in files of substantial size. Such formats allow listing vertices and polygons in any order, so that not only the mesh but also the particular ordering of its elements is stored. Mesh compression rearranges vertices and polygons into an order that allows more compact coding of the incidence between vertices and predictive compression of their positions. Previous schemes were designed for triangle meshes, and polygonal faces were triangulated prior to compression. I show that polygon models can be encoded more compactly by avoiding the initial triangulation step. I describe two compression schemes that achieve better compression by encoding meshes directly in their polygonal representation. I demonstrate that the …
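The predictive compression of positions mentioned here can be sketched with the common parallelogram rule; this is a generic illustration of position prediction, not the specific polygonal schemes of this work.

```python
# Sketch of predictive position coding used in mesh compression: instead of
# storing absolute vertex positions, store residuals against a prediction,
# here the parallelogram rule v ~ a + b - c across a shared triangle edge.
# Residuals cluster near zero and entropy-code far more compactly.

def parallelogram_predict(a, b, c):
    """Predict the vertex completing a parallelogram over edge (a, b)."""
    return tuple(ai + bi - ci for ai, bi, ci in zip(a, b, c))

def residual(actual, predicted):
    """Correction vector the encoder actually transmits."""
    return tuple(x - p for x, p in zip(actual, predicted))

# Triangle (c, a, b) already decoded; vertex v completes the adjacent triangle.
a, b, c = (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 0.0, 0.0)
v = (2.05, 1.0, 0.1)                      # nearly planar, regular mesh
pred = parallelogram_predict(a, b, c)
print(pred)                               # -> (2.0, 1.0, 0.0)
print(residual(v, pred))                  # small correction to transmit
```

The decoder applies the same prediction from already-decoded vertices and adds the transmitted residual, so only the small corrections need to be coded.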
Spectral predictors
In Data Compression Conference, 2007
Cited by 2 (2 self)
Many scientific, imaging, and geospatial applications produce large high-precision scalar fields sampled on a regular grid. Lossless compression of such data is commonly done using predictive coding, in which weighted combinations of previously coded samples known to both encoder and decoder are used to predict subsequent nearby samples. In hierarchical, incremental, or selective transmission, the spatial pattern of the known neighbors is often irregular and varies from one sample to the next, which precludes prediction based on a single stencil and fixed set of weights. To handle such situations and make the best use of available neighboring samples, we propose a local spectral predictor that offers optimal prediction by tailoring the weights to each configuration of known nearby samples. These weights may be precomputed and stored in a small lookup table. We show that predictive coding using our spectral predictor improves compression for various sources of high-precision data.
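The configuration-dependent weighting can be sketched as a lookup table keyed by the mask of available neighbors; the weights below are simple illustrative choices (averaging and the Lorenzo stencil), not the optimal spectral weights the paper derives.

```python
# Sketch of configuration-dependent prediction: the weights applied to
# known neighbors depend on which neighbors are available, looked up in a
# small table keyed by the availability mask. Weights here are illustrative.

# mask bits: 1 = west, 2 = north, 4 = northwest
WEIGHTS = {
    1: {"w": 1.0},                          # only west known
    2: {"n": 1.0},                          # only north known
    3: {"w": 0.5, "n": 0.5},                # west and north: average
    7: {"w": 1.0, "n": 1.0, "nw": -1.0},    # all three: Lorenzo predictor
}

def predict(known):
    """known: dict with any of 'w', 'n', 'nw'; returns a prediction or 0."""
    mask = ((1 if "w" in known else 0)
            | (2 if "n" in known else 0)
            | (4 if "nw" in known else 0))
    weights = WEIGHTS.get(mask)
    if not weights:
        return 0.0
    return sum(weights[k] * known[k] for k in weights)

# On a locally linear field f(x, y) = 3x + 2y the Lorenzo stencil is exact.
print(predict({"w": 3.0, "n": 2.0, "nw": 0.0}))   # -> 5.0
print(predict({"w": 3.0}))                        # falls back to west: 3.0
```

The encoder and decoder share the table, so both derive the same prediction from the same mask and only the residual is coded.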