Results 1-10 of 159
Dual Contouring of Hermite Data
2002. Cited by 258 (15 self).
This paper describes a new method for contouring a signed grid whose edges are tagged by Hermite data (exact intersection points and normals). This method avoids the need to explicitly identify and process "features" as required in previous Hermite contouring methods. We extend this contouring method to the case of multi-signed functions and demonstrate how to model textured contours using multi-signed functions. Using a new, numerically stable representation for quadratic error functions, we develop an octree-based method for simplifying these contours and their textured regions. We next extend our contouring method to these simplified octrees. This new method imposes no constraints on the octree (such as being a restricted octree) and requires no "crack patching". We conclude with a simple test for preserving the topology of both the contour and its textured regions during simplification.
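To make the quadratic error functions concrete: each cell collects Hermite samples (p_i, n_i) from its sign-changing edges and places its vertex at the minimizer of E(x) = sum_i (n_i . (x - p_i))^2. The sketch below, in Python/NumPy, solves that minimization with a plain SVD-backed least-squares call; note the paper's contribution is a numerically stable QEF representation, which this illustrative qef_minimizer does not reproduce.

    import numpy as np

    def qef_minimizer(points, normals):
        # Minimize E(x) = sum_i (n_i . (x - p_i))^2 for one cell.
        # points, normals: (k, 3) arrays of edge intersections and unit normals.
        A = np.asarray(normals, dtype=float)                     # rows are n_i^T
        b = np.einsum('ij,ij->i', A, np.asarray(points, float))  # b_i = n_i . p_i
        # The SVD-based lstsq copes with rank-deficient cells (flat or
        # ridge-like configurations); the paper instead carries the QEF in a
        # stable triangular form.
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    # Three planes x = 0.5, y = 0.5, z = 0.5 meeting in a sharp corner: the
    # minimizer lands on the feature point (0.5, 0.5, 0.5), with no explicit
    # feature detection.
    print(qef_minimizer([(0.5, 0, 0), (0, 0.5, 0), (0, 0, 0.5)],
                        [(1, 0, 0), (0, 1, 0), (0, 0, 1)]))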
A Developer's Survey of Polygonal Simplification Algorithms
IEEE Computer Graphics and Applications, 2001. Cited by 157 (2 self).
Polygonal simplification, a.k.a. level of detail, is an important tool for anyone doing interactive rendering, but how is a developer to choose among the dozens of published algorithms? This article surveys the field from a developer's point of view, attempting to identify the issues in picking an algorithm, relate the strengths and weaknesses of different approaches, and describe a number of published algorithms as examples.
Semi-Regular Mesh Extraction from Volumes
2000. Cited by 104 (13 self).
We present a novel method to extract iso-surfaces from distance volumes. It generates high-quality semi-regular multiresolution meshes of arbitrary topology. Our technique proceeds in two stages. First, a very coarse mesh with guaranteed topology is extracted. Subsequently, an iterative multi-scale force-based solver refines the initial mesh into a semi-regular mesh with a geometrically adaptive sampling rate and good aspect-ratio triangles. The coarse mesh extraction is performed using a new approach we call surface wavefront propagation: a set of discrete iso-distance ribbons is rapidly built and connected while respecting the topology of the iso-surface implied by the data. The subsequent multi-scale refinement is driven by a simple force-based solver designed to combine a good iso-surface fit with high-quality sampling through reparameterization. In contrast to the Marching Cubes technique, our output meshes adapt gracefully to the iso-surface geometry, have a natural multiresolution structure, and have good aspect-ratio triangles, as demonstrated with a number of examples.
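As a rough sketch of the second stage, one refinement iteration might combine a Newton-style pull onto the zero iso-surface with tangential smoothing toward the one-ring centroid. All names, weights, and the unit-sphere distance field below are illustrative assumptions, not the authors' solver.

    import numpy as np

    def refine_step(verts, rings, dist, grad, smooth_weight=0.3):
        # verts: (n, 3) vertex positions; rings[i]: index list of vertex i's
        # one-ring neighbors; dist/grad: the distance volume and its gradient.
        new = verts.copy()
        for i, v in enumerate(verts):
            g = grad(v)
            g2 = max(np.dot(g, g), 1e-12)
            fit = -dist(v) * g / g2                  # pull onto dist == 0
            lap = verts[rings[i]].mean(axis=0) - v   # move toward centroid
            lap -= (np.dot(lap, g) / g2) * g         # keep smoothing tangential
            new[i] = v + fit + smooth_weight * lap
        return new

    # Toy distance volume: a unit sphere.
    dist = lambda p: np.linalg.norm(p) - 1.0
    grad = lambda p: p / max(np.linalg.norm(p), 1e-12)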
Streaming Meshes
2005. Cited by 86 (18 self).
Recent years have seen an immense increase in the complexity of geometric data sets. Today's gigabyte-sized polygon models can no longer be completely loaded into the main memory of common desktop PCs. Unfortunately, current mesh formats do not account for this: they were designed years ago, when meshes were orders of magnitude smaller. Using such formats to store large meshes is inefficient and unduly complicates all subsequent processing.
Adaptive TetraPuzzles: Efficient Out-of-Core Construction and Visualization of Gigantic Multiresolution Polygonal Models
ACM Transactions on Graphics, 2004. Cited by 83 (32 self).
We describe an efficient technique for out-of-core construction and accurate view-dependent visualization of very large surface models. The method uses a regular conformal hierarchy of tetrahedra to spatially partition the model. Each tetrahedral cell contains a precomputed simplified version of the original model, represented using cache-coherent indexed strips for fast rendering. The representation is constructed during a fine-to-coarse simplification of the surface contained in diamonds (sets of tetrahedral cells sharing their longest edge). The construction preprocess operates out-of-core and parallelizes nicely. Appropriate boundary constraints are introduced in the simplification to ensure that all conforming selective subdivisions of the tetrahedron hierarchy lead to correctly matching surface patches. For each frame at runtime, the hierarchy is traversed coarse-to-fine to select diamonds of the appropriate resolution given the view parameters. The resulting system can interactively render high-quality views of out-of-core models of hundreds of millions of triangles at over 40 Hz (or 70M triangles/s) on current commodity graphics platforms.
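The coarse-to-fine selection can be pictured as a screen-space-error test per diamond: refine while the diamond's object-space error, projected at its distance from the eye, spans more than a pixel tolerance. The function below is a generic bound of that kind, not the exact metric of the TetraPuzzles system.

    import math

    def needs_refinement(diamond_error, center, radius,
                         eye, screen_height_px, fov_y, tol_px=1.0):
        # diamond_error: object-space simplification error bound of the diamond.
        # Refine whenever its screen-space projection exceeds tol_px pixels.
        distance = max(math.dist(eye, center) - radius, 1e-6)
        px_per_unit = screen_height_px / (2.0 * distance * math.tan(fov_y / 2.0))
        return diamond_error * px_per_unit > tol_px

A traversal would descend while this returns True and otherwise render the cell's precomputed indexed strips.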
Out-of-Core Compression for Gigantic Polygon Meshes
2003. Cited by 81 (23 self).
Polygonal models acquired with emerging 3D scanning technology or from large scale CAD applications easily reach sizes of several gigabytes and do not fit in the address space of common 32-bit desktop PCs. In this paper we propose an out-of-core mesh compression technique that converts such gigantic meshes into a streamable, highly compressed representation. During decompression only a small portion of the mesh needs to be kept in memory at any time. As full connectivity information is available along the decompression boundaries, this provides seamless mesh access for incremental in-core processing on gigantic meshes. Decompression speeds are CPU-limited and exceed one million vertices and two million triangles per second on a 1.8 GHz Athlon processor.
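The seamless mesh access enabled by the decompression boundary supports stream processing with bounded memory. The event shapes below ('v', 'f', 'fin') are a made-up stand-in for the decoder's output, with 'fin' marking a vertex that will never be referenced again; the pattern, not the API, is the point.

    from collections import defaultdict

    def stream_vertex_degrees(events):
        # Count triangle incidences per vertex while holding only the active
        # decompression boundary in memory.
        degree = defaultdict(int)
        for ev in events:
            if ev[0] == 'v':              # ('v', vid, xyz): vertex enters core
                degree[ev[1]] = 0
            elif ev[0] == 'f':            # ('f', i, j, k): triangle arrives
                for vid in ev[1:]:
                    degree[vid] += 1
            elif ev[0] == 'fin':          # ('fin', vid): vertex is finalized
                yield ev[1], degree.pop(ev[1])   # emit result, free memory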
Explicit Surface Remeshing
2003. Cited by 66 (6 self).
We present a new remeshing scheme based on the idea of improving mesh quality by a series of local modifications of the mesh geometry and connectivity. Our contribution to the family of local modification techniques is an area-based smoothing technique. Area-based smoothing allows the control of both triangle quality and vertex sampling over the mesh, as a function of some criterion, e.g., the mesh curvature. To perform local modifications on meshes of arbitrary genus we use a dynamic patch-wise parameterization, which is constructed and updated on the fly as the algorithm progresses. As a post-processing stage, we introduce a new algorithm to improve the regularity of the mesh connectivity; it is able to create an unstructured mesh with a very small number of irregular vertices. Our remeshing scheme is robust, runs at interactive speeds, and can be applied to arbitrarily complex meshes.
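To see why area-based smoothing is attractive, note that inside a 2D patch parameterization the signed area of each one-ring triangle is an affine function of the free vertex, so equalizing areas is a two-unknown least-squares problem. The sketch below makes that concrete with a uniform area target; the paper drives the targets by criteria such as curvature, which this toy version omits.

    import numpy as np

    def area_smooth_2d(ring):
        # ring: ordered one-ring neighbors (k, 2) of an interior vertex, in the
        # patch parameterization. Returns the position whose incident triangle
        # areas best match the uniform target (ring polygon area / k).
        a = np.asarray(ring, dtype=float)
        b = np.roll(a, -1, axis=0)
        cross_ab = a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]
        target = cross_ab.sum() / (2.0 * len(a))    # shoelace area / k
        # area(v, a_j, b_j) = 0.5*cross_ab_j
        #                   + 0.5*(a_j.y - b_j.y)*v.x + 0.5*(b_j.x - a_j.x)*v.y
        M = 0.5 * np.stack([a[:, 1] - b[:, 1], b[:, 0] - a[:, 0]], axis=1)
        rhs = target - 0.5 * cross_ab
        v, *_ = np.linalg.lstsq(M, rhs, rcond=None)
        return v

    # For a regular hexagonal one-ring the equal-area position is the center.
    t = np.linspace(0, 2 * np.pi, 6, endpoint=False)
    print(area_smooth_2d(np.stack([np.cos(t), np.sin(t)], axis=1)))  # ~ [0, 0]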
Delaunay Based Shape Reconstruction from Large Data
2001. Cited by 64 (5 self).
Surface reconstruction provides a powerful paradigm for modeling shapes from samples. For point cloud data with only geometric coordinates as input, Delaunay based surface reconstruction algorithms have been shown to be quite effective both in theory and practice. However, a major complaint against Delaunay based methods is that they are slow and cannot handle large data. We extend the COCONE algorithm to handle supersize data. This is the first reported Delaunay based surface reconstruction algorithm that can handle data containing more than a million sample points on a modest machine.
Efficient Adaptive Simplification of Massive Meshes
2001. Cited by 59 (2 self).
The growing availability of massive polygonal models, and the inability of most existing visualization tools to work with such data, has created a pressing need for memory efficient methods capable of simplifying very large meshes. In this paper, we present a method for performing adaptive simplification of polygonal meshes that are too large to fit in-core.
Out-of-Core Algorithms for Scientific Visualization and Computer Graphics
In Visualization '02 Course Notes, 2002. Cited by 59 (11 self).
Recently, several external-memory techniques have been developed for a wide variety of graphics and visualization problems, including surface simplification, volume rendering, isosurface generation, ray tracing, surface reconstruction, and so on. This work has had significant impact given that in recent years there has been a rapid increase in the raw size of datasets. Several technological trends are contributing to this, such as the development of high-resolution 3D scanners and the need to visualize ASCI-size (Accelerated Strategic Computing Initiative) datasets. Another important push for this kind of technology is the growing speed gap between main memory and caches, which penalizes algorithms that do not optimize for coherence of access. For these reasons, much research in computer graphics focuses on developing out-of-core (and often cache-friendly) techniques. This paper surveys fundamental issues, current problems, and unresolved questions, and aims to provide graphics researchers and professionals with an effective knowledge of current techniques, as well as the foundation to develop novel techniques on their own.

Keywords: out-of-core algorithms, scientific visualization, computer graphics, interactive rendering, volume rendering, surface simplification.
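The out-of-core discipline these techniques share is easy to state in miniature: touch the data in large sequential chunks and never materialize the whole dataset. A hypothetical example, assuming a raw file of little-endian float32 xyz triples; the format is a placeholder, the access pattern is the lesson.

    import numpy as np

    def streamed_bbox(path, chunk_points=1_000_000):
        # Bounding box of a point set far larger than RAM, one chunk at a time.
        lo, hi = np.full(3, np.inf), np.full(3, -np.inf)
        with open(path, 'rb') as f:
            while True:
                buf = np.fromfile(f, dtype='<f4', count=chunk_points * 3)
                if buf.size == 0:
                    break
                pts = buf.reshape(-1, 3)
                lo = np.minimum(lo, pts.min(axis=0))
                hi = np.maximum(hi, pts.max(axis=0))
        return lo, hi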