Results 1 - 10 of 157
Texture mapping progressive meshes, 2001
"... Given an arbitrary mesh, we present a method to construct a progressive mesh (PM) such that all meshes in the PM sequence share a common texture parametrization. Our method considers two important goals simultaneously. It minimizes texture stretch (small texture distances mapped onto large surface d ..."
Abstract
-
Cited by 251 (7 self)
- Add to MetaCart
(Show Context)
Given an arbitrary mesh, we present a method to construct a progressive mesh (PM) such that all meshes in the PM sequence share a common texture parametrization. Our method considers two important goals simultaneously. It minimizes texture stretch (small texture distances mapped onto large surface distances) to balance sampling rates over all locations and directions on the surface. It also minimizes texture deviation (“slippage” error based on parametric correspondence) to obtain accurate textured mesh approximations. The method begins by partitioning the mesh into charts using planarity and compactness heuristics. It creates a stretch-minimizing parametrization within each chart, and resizes the charts based on the resulting stretch. Next, it simplifies the mesh while respecting the chart boundaries. The parametrization is re-optimized to reduce both stretch and deviation over the whole PM sequence. Finally, the charts are packed into a texture atlas. We demonstrate using such atlases to sample color and normal maps over several models. Additional Keywords: mesh simplification, surface flattening, surface parametrization, texture stretch.
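As a rough illustration of the stretch metric this abstract refers to, the sketch below computes a per-triangle stretch from the singular values of the Jacobian of the texture-to-surface map. The setup and all names are ours, not the paper's code.

```python
import numpy as np

def triangle_stretch(p1, p2, p3, q1, q2, q3):
    """Per-triangle texture stretch (illustrative sketch).

    p1..p3: 3D surface positions, shape (3,)
    q1..q3: 2D texture coordinates, shape (2,)
    Returns (rms stretch, worst-case stretch).
    """
    # Twice the signed area of the triangle in texture space.
    A2 = (q2[0] - q1[0]) * (q3[1] - q1[1]) - (q3[0] - q1[0]) * (q2[1] - q1[1])
    if abs(A2) < 1e-12:
        return np.inf, np.inf  # degenerate parametrization
    # Partials of surface position w.r.t. the two texture axes.
    Ss = (p1 * (q2[1] - q3[1]) + p2 * (q3[1] - q1[1]) + p3 * (q1[1] - q2[1])) / A2
    St = (p1 * (q3[0] - q2[0]) + p2 * (q1[0] - q3[0]) + p3 * (q2[0] - q1[0])) / A2
    # Singular values of the Jacobian [Ss St] bound the local stretch.
    a, b, c = Ss @ Ss, Ss @ St, St @ St
    disc = np.sqrt(max((a - c) ** 2 + 4 * b * b, 0.0))
    gamma_max = np.sqrt((a + c + disc) / 2)  # largest singular value
    l2 = np.sqrt((a + c) / 2)                # rms stretch over all directions
    return l2, gamma_max
```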
A survey of free-form object representation and recognition techniques, Computer Vision and Image Understanding, 2001
"... Advances in computer speed, memory capacity, and hardware graphics acceleration have made the interactive manipulation and visualization of complex, detailed (and therefore large) three-dimensional models feasible. These models are either painstakingly designed through an elaborate CAD process or re ..."
Abstract
-
Cited by 200 (1 self)
- Add to MetaCart
(Show Context)
Advances in computer speed, memory capacity, and hardware graphics acceleration have made the interactive manipulation and visualization of complex, detailed (and therefore large) three-dimensional models feasible. These models are either painstakingly designed through an elaborate CAD process or reverse engineered from sculpted prototypes using modern scanning technologies and integration methods. The availability of detailed data describing the shape of an object offers the computer vision practitioner new ways to recognize and localize free-form objects. This survey reviews recent literature on both the 3D model building process and techniques used to match and identify free-form objects from imagery.
Out-of-Core Simplification of Large Polygonal Models, 2000
"... We present an algorithm for out-of-core simplification of large polygonal datasets that are too complex to fit in main memory. The algorithm extends the vertex clustering scheme of Rossignac and Borrel [13] by using error quadric information for the placement of each cluster's representative ve ..."
Abstract
-
Cited by 159 (10 self)
- Add to MetaCart
(Show Context)
We present an algorithm for out-of-core simplification of large polygonal datasets that are too complex to fit in main memory. The algorithm extends the vertex clustering scheme of Rossignac and Borrel [13] by using error quadric information for the placement of each cluster's representative vertex, which better preserves fine details and results in a low mean geometric error. The use of quadrics instead of the vertex grading approach in [13] has the additional benefits of requiring less disk space and only a single pass over the model rather than two. The resulting linear time algorithm allows simplification of datasets of arbitrary complexity. In order ...
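A minimal sketch of the idea, assuming a uniform grid for clustering and Garland-Heckbert plane quadrics for placing each cluster's representative vertex; the names and the fallback are illustrative, not the authors' code.

```python
import numpy as np
from collections import defaultdict

def cluster_simplify(vertices, triangles, cell_size):
    """One-pass uniform-grid clustering with quadric-based placement."""
    quadrics = defaultdict(lambda: np.zeros((4, 4)))
    cell_of = lambda v: tuple(np.floor(v / cell_size).astype(int))
    # Single pass over the model: accumulate each face's plane quadric
    # into the grid cells of its three corners.
    for i, j, k in triangles:
        p, q, r = vertices[i], vertices[j], vertices[k]
        n = np.cross(q - p, r - p)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # skip degenerate faces
        n /= norm
        plane = np.append(n, -n @ p)   # [a, b, c, d] with ax + by + cz + d = 0
        Q = np.outer(plane, plane)     # fundamental error quadric
        for v in (p, q, r):
            quadrics[cell_of(v)] += Q
    # Place each cluster's representative at the minimizer of its quadric.
    reps = {}
    for cell, Q in quadrics.items():
        A, b = Q[:3, :3], -Q[:3, 3]
        if np.linalg.cond(A) < 1e8:
            reps[cell] = np.linalg.solve(A, b)
        else:
            # Ill-conditioned quadric (e.g. a flat cluster): cell center.
            reps[cell] = (np.array(cell) + 0.5) * cell_size
    return reps
```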
Silhouette Clipping, 2000
"... Approximating detailed models with coarse, texture-mapped meshes results in polygonal silhouettes. To eliminate this artifact, we introduce silhouette clipping, a framework for efficiently clipping the rendering of coarse geometry to the exact silhouette of the original model. The coarse mesh is obt ..."
Abstract
-
Cited by 102 (8 self)
- Add to MetaCart
(Show Context)
Approximating detailed models with coarse, texture-mapped meshes results in polygonal silhouettes. To eliminate this artifact, we introduce silhouette clipping, a framework for efficiently clipping the rendering of coarse geometry to the exact silhouette of the original model. The coarse mesh is obtained using progressive hulls, a novel representation with the nesting property required for proper clipping. We describe an improved technique for constructing texture and normal maps over this coarse mesh. Given a perspective view, silhouettes are efficiently extracted from the original mesh using a precomputed search tree. Within the tree, hierarchical culling is achieved using pairs of anchored cones. The extracted silhouette edges are used to set the hardware stencil buffer and alpha buffer, which in turn clip and antialias the rendered coarse geometry. Results demonstrate that silhouette clipping can produce renderings of similar quality to high-resolution meshes in less rendering time.
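The per-edge silhouette test itself is simple; the paper's contribution is extracting silhouettes quickly via the precomputed search tree with anchored cones. The brute-force sketch below shows only the test, assuming a closed mesh with consistent outward winding; all names are illustrative.

```python
import numpy as np
from collections import defaultdict

def silhouette_edges(vertices, triangles, eye):
    """Edges separating a front-facing face from a back-facing one."""
    facing_of_edge = defaultdict(list)
    for tri in triangles:
        p, q, r = (vertices[i] for i in tri)
        n = np.cross(q - p, r - p)                # face normal (outward winding)
        front = n @ (eye - (p + q + r) / 3) > 0   # front-facing from the eye?
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            facing_of_edge[(min(a, b), max(a, b))].append(front)
    # A silhouette edge is one whose two adjacent faces disagree.
    return [e for e, f in facing_of_edge.items() if len(f) == 2 and f[0] != f[1]]
```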
Image-Driven Simplification, 2000
"... We introduce the notion of image-driven simplification, a framework that uses images to decide which portions of a model to simplify. This is a departure from approaches that make polygonal simplification decisions based on geometry. As with many methods, we use the edge collapse operator to make ..."
Abstract
-
Cited by 100 (5 self)
- Add to MetaCart
We introduce the notion of image-driven simplification, a framework that uses images to decide which portions of a model to simplify. This is a departure from approaches that make polygonal simplification decisions based on geometry. As with many methods, we use the edge collapse operator to make incremental changes to a model. Unique to our approach, however, is the comparison of images of the original model against those of a simplified model to determine the cost of an edge collapse. We use common graphics rendering hardware to accelerate the creation of the required images. As expected, this method produces models that are close to the original model according to image differences. Perhaps more surprising, however, is that the method yields models that have high geometric fidelity as well. Our approach also solves the quandary of how to weight the geometric distance versus appearance properties such as normals, color and texture. All of these tradeoffs are ba...
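As a sketch of how such an image-based cost might look, assume a caller-supplied render(model, view) function standing in for the hardware-accelerated renderer (the name and signature are hypothetical): the cost of a candidate collapse is the RMS pixel difference against reference images of the original model over a fixed set of views.

```python
import numpy as np

def image_cost(reference_images, candidate_model, views, render):
    """RMS pixel difference of a candidate model against reference
    images of the original, averaged over a fixed set of views."""
    total = 0.0
    for view, ref in zip(views, reference_images):
        img = render(candidate_model, view).astype(float)
        total += np.mean((img - ref.astype(float)) ** 2)
    return np.sqrt(total / len(views))
```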
Progressive Compression for Lossless Transmission of Triangle Meshes, 2001
"... Lossless transmission of 3D meshes is a very challenging and timely problem for many applications, ranging from collaborative design to engineering. Additionally, frequent delays in transmissions call for progressive transmission in order for the end user to receive useful successive refinements of ..."
Abstract
-
Cited by 99 (4 self)
- Add to MetaCart
Lossless transmission of 3D meshes is a very challenging and timely problem for many applications, ranging from collaborative design to engineering. Additionally, frequent delays in transmissions call for progressive transmission in order for the end user to receive useful successive refinements of the final mesh. In this paper, we present a novel, fully progressive encoding approach for lossless transmission of triangle meshes with a very fine granularity. A new valence-driven decimating conquest, combined with patch tiling and an original strategic retriangulation, is used to maintain the regularity of valence. We demonstrate that this technique leads to good mesh quality, near-optimal connectivity encoding, and therefore a good rate-distortion ratio throughout the transmission. We also improve upon previous lossless geometry encoding by decorrelating the normal and tangential components of the surface. For typical meshes, our method compresses connectivity down to less than 3.7 bits per vertex, 40% better on average than the best methods previously reported [5, 18]; we further reduce the usual geometry bit rates by 20% on average by exploiting the smoothness of meshes. Concretely, our technique can reduce an ASCII VRML 3D model down to 1.7% of its size for a 10-bit quantization (2.3% for a 12-bit quantization) while providing a very progressive reconstruction.
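The normal/tangential decorrelation can be illustrated as follows, assuming a PCA-fitted local frame over the surrounding patch; this frame construction is one plausible choice, not necessarily the paper's.

```python
import numpy as np

def frame_residual(predicted, actual, patch_points):
    """Express a vertex prediction residual in a local tangent/normal frame."""
    # PCA of the surrounding patch: the least-variance direction
    # approximates the surface normal.
    centered = patch_points - patch_points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    t1, t2, n = vt[0], vt[1], vt[2]
    r = actual - predicted
    # On smooth meshes the normal component is small while the tangential
    # components are smooth, so the two can be quantized and coded separately.
    return np.array([r @ t1, r @ t2, r @ n])
```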
A topological hierarchy for functions on triangulated surfaces, IEEE Transactions on Visualization and Computer Graphics, 2004
"... ..."
(Show Context)
Delaunay Based Shape Reconstruction from Large Data, 2001
"... Surface reconstruction provides a powerful paradigm for modeling shapes from samples. For point cloud data with only geometric coordinates as input, Delaunay based surface reconstruction algorithms have been shown to be quite effective both in theory and practice. However, a major complaint against ..."
Abstract
-
Cited by 64 (5 self)
- Add to MetaCart
Surface reconstruction provides a powerful paradigm for modeling shapes from samples. For point cloud data with only geometric coordinates as input, Delaunay based surface reconstruction algorithms have been shown to be quite effective both in theory and practice. However, a major complaint against Delaunay based methods is that they are slow and cannot handle large data. We extend the COCONE algorithm to handle supersize data. This is the first reported Delaunay based surface reconstruction algorithm that can handle data containing more than a million sample points on a modest machine.
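One way a Delaunay-based method can be made to scale, sketched below under our own assumptions (a uniform box decomposition with overlapping margins, so per-box reconstructions can later be stitched), is to partition the samples before reconstruction. This illustrates only a generic divide step, not the COCONE extension itself.

```python
import numpy as np
from itertools import product

def split_point_cloud(points, cells=4, pad=0.05):
    """Partition a point cloud into overlapping boxes for out-of-core
    per-box reconstruction; pad controls the stitching overlap."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    size = (hi - lo) / cells
    margin = pad * size
    chunks = {}
    for idx in product(range(cells), repeat=3):
        b_lo = lo + np.array(idx) * size - margin
        b_hi = b_lo + size + 2 * margin
        inside = np.all((points >= b_lo) & (points <= b_hi), axis=1)
        if inside.any():
            chunks[idx] = points[inside]
    return chunks
```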