Results 1–10 of 99
Progressive lossless compression of arbitrary simplicial complexes
ACM Trans. Graphics (Proc. ACM SIGGRAPH 2002), 2002
Cited by 76 (0 self)

Abstract:
Efficient algorithms for compressing geometric data have been widely developed in recent years, but they are mainly designed for closed polyhedral surfaces which are manifold or "nearly manifold". We propose here a progressive geometry compression scheme which can handle manifold models as well as "triangle soups" and 3D tetrahedral meshes. The method is lossless when the decompression is complete, which is extremely important in domains such as medicine or finite-element analysis. While most existing methods enumerate the vertices of the mesh in an order depending on the connectivity, we use a kd-tree technique [8] which does not depend on the connectivity. We then compute a compatible sequence of meshes which can be encoded using edge expansion [14] and vertex split [24]. The main contributions of this paper are: the idea of using the kd-tree encoding of the geometry to drive the construction of a sequence of meshes; an improved coding of edge expansion and vertex split, since the vertices to split are implicitly defined; a prediction scheme which reduces the code for simplices incident to the split vertex; and a new generalization of the edge expansion operation to tetrahedral meshes.
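The kd-tree geometry coding the abstract refers to can be sketched as follows: split the bounding cell at its midpoint along cycling axes and record only the point count of one half, so a decoder replaying the splits recovers positions to within the final cell size. This is an illustrative Python sketch of the general idea only — the cited coder [8] additionally entropy-codes the counts, and all names here are our own:

```python
def kdtree_encode(points, lo, hi, depth, max_depth, codes):
    # Split the cell [lo, hi] at its midpoint along a cycling axis and
    # record how many points fall in the lower half; the other half's
    # count is implied by the parent cell's count. Recursion stops at
    # max_depth, which fixes the coordinate resolution.
    if not points or depth == max_depth:
        return
    axis = depth % len(lo)
    mid = 0.5 * (lo[axis] + hi[axis])
    left = [p for p in points if p[axis] < mid]
    right = [p for p in points if p[axis] >= mid]
    codes.append(len(left))  # a real coder would entropy-code this count
    hi_left = list(hi)
    hi_left[axis] = mid
    lo_right = list(lo)
    lo_right[axis] = mid
    kdtree_encode(left, lo, hi_left, depth + 1, max_depth, codes)
    kdtree_encode(right, lo_right, hi, depth + 1, max_depth, codes)
```

Because the counts, not the coordinates, are transmitted, the stream is independent of the mesh connectivity — which is what lets the paper drive the mesh sequence from the geometry coder.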
Dynapack: Space-Time compression of the 3D animations of triangle meshes with fixed connectivity
ACM Symp. Computer Animation, 2003
Cited by 67 (1 self)

Abstract:
... Lengyel) contains 400 frames of the same connectivity, each having 41 components with a total of 5664 triangles and 3030 vertices. Dynapack quantizes the floating-point coordinates of the vertices to 13 (respectively 11, and 7) bits, shown in rows 2 (respectively 3, and 5). It compresses them down to 2.91 (respectively 2.35, and 1.37) bits, resulting in a worst-case geometric error of 0.0061 (respectively 0.024, and 0.3) percent of the size of the minimum axis-aligned bounding box of the animation sequence. Note that the result of the 13-bit quantization is indistinguishable from the original and yields an 11-to-1 compression ratio over the floating-point representation with a 42.1 dB signal-to-noise ratio. Dynapack exploits space-time coherence to compress the ...
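The quantization step quoted above is plain uniform quantization against the animation's bounding box: rounding to the nearest of 2^13 − 1 levels bounds the per-axis error by half a step, i.e. 0.5/8191 ≈ 0.0061% of the box extent, which matches the worst-case figure in the abstract. A minimal sketch (our own function names, not Dynapack's predictor/coder):

```python
import numpy as np

def quantize(coords, bits):
    # Uniform quantization of float coordinates to `bits` bits per axis
    # within the axis-aligned bounding box of `coords`.
    lo = coords.min(axis=0)
    hi = coords.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard degenerate axes
    levels = (1 << bits) - 1
    q = np.rint((coords - lo) / span * levels).astype(np.int64)
    return q, lo, span

def dequantize(q, lo, span, bits):
    # Map integer levels back to coordinates; the reconstruction error
    # is at most half a quantization step per axis.
    levels = (1 << bits) - 1
    return q / levels * span + lo
```

Dynapack's actual gains come from predicting each quantized coordinate from its space-time neighbors and entropy-coding the residuals; the step above only fixes the resolution.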
Least-squares meshes
In Shape Modeling International (SMI), 2004
Cited by 46 (5 self)

Abstract:
Figure 1: LS-mesh: a mesh constructed from a given connectivity graph and a sparse set of control points with geometry. In this example the connectivity is taken from the camel mesh. In (a) the LS-mesh is constructed with 100 control points and in (c) with 2000 control points. The connectivity graph contains 39074 vertices (without any geometric information). (b) and (d) show close-ups on the head; the control points are marked by red balls.

In this paper we introduce Least-squares Meshes: meshes with a prescribed connectivity that approximate a set of control points in a least-squares sense. The given mesh consists of a planar graph with arbitrary connectivity and a sparse set of control points with geometry. The geometry of the mesh is reconstructed by solving a sparse linear system. The linear system not only defines a surface that approximates the given control points, but it also distributes the vertices over the surface in a fair way. That is, each vertex lies as close as possible to the center of gravity of its immediate neighbors. The Least-squares Meshes (LS-meshes) are a visually smooth and fair approximation of the given control points. We show that the connectivity of the mesh contains geometric information that affects the shape of the reconstructed surface. Finally, we discuss the applicability of LS-meshes to approximation of given surfaces, smooth completion, mesh editing and progressive transmission.
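The linear system described above can be illustrated directly: one Laplacian row per vertex asks it to sit at the centroid of its neighbors, and one extra row per control point softly pins that vertex to its given position. The sketch below solves a tiny instance densely with NumPy; the constraint weight `w` is a hypothetical knob (not from the paper), and the paper solves the same overdetermined system with a sparse solver:

```python
import numpy as np

def ls_mesh(n, edges, control, w=1.0):
    # Build the Laplacian rows: vertex i minus the average of its
    # neighbors should vanish.
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    rows, rhs = [], []
    for i in range(n):
        r = np.zeros(n)
        r[i] = 1.0
        for j in adj[i]:
            r[j] = -1.0 / len(adj[i])
        rows.append(r)
        rhs.append(np.zeros(3))
    # Soft positional constraints for the control vertices.
    for i, p in control.items():
        r = np.zeros(n)
        r[i] = w
        rows.append(r)
        rhs.append(w * np.asarray(p, float))
    x, *_ = np.linalg.lstsq(np.vstack(rows), np.vstack(rhs), rcond=None)
    return x  # (n, 3) reconstructed vertex positions
```

On a 3-vertex path with endpoints pinned at x = 0 and x = 1, the least-squares compromise places the vertices at x ≈ 0.25, 0.5, 0.75 — the control points are approximated, not interpolated, and raising `w` pulls them closer to their targets.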
Planet-Sized Batched Dynamic Adaptive Meshes (P-BDAM)
Cited by 45 (7 self)

Abstract:
We describe an efficient technique for out-of-core management and interactive rendering of planet-sized textured terrain surfaces. The technique, called P-Batched Dynamic Adaptive Meshes (P-BDAM), extends the BDAM approach by using as basic primitive a general triangulation of points on a displaced triangle. The proposed framework introduces several advances with respect to the state of the art: thanks to a batched host-to-graphics communication model, we outperform current adaptive tessellation solutions in terms of rendering speed; we guarantee overall geometric continuity, exploiting programmable graphics hardware to cope with the accuracy issues introduced by single-precision floating-point numbers; we exploit a compressed out-of-core representation and speculative prefetching for hiding disk latency during rendering of out-of-core data; and we efficiently construct high-quality simplified representations with a novel distributed out-of-core simplification algorithm working on a standard PC network.
Simplification and Compression of 3D Meshes
In Proceedings of the European Summer School on Principles of Multiresolution in Geometric Modelling (PRIMUS), 1998
Cited by 35 (6 self)

Abstract:
We survey recent developments in compact representations of 3D mesh data. This includes: methods to reduce the complexity of meshes by simplification, thereby reducing the number of vertices and faces in the mesh; methods to resample the geometry in order to optimize the vertex distribution; methods to compactly represent the connectivity data (the graph structure defined by the edges) of the mesh; and methods to compactly represent the geometry data (the vertex coordinates) of a mesh.
TetFusion: an algorithm for rapid tetrahedral mesh simplification
 In IEEE Visualization ’02 (2002), IEEE Computer Society
Cited by 30 (1 self)

Abstract:
... (cells that intersect a vertical cutting plane in the XY plane at a specific Z value) is rendered to show the interior elements. (Dataset courtesy: Peter Williams, Lawrence Livermore National Laboratory.) This paper introduces an algorithm for rapid progressive simplification of tetrahedral meshes: TetFusion. We describe how a simple geometry decimation operation steers a rapid and controlled progressive simplification of tetrahedral meshes, while also taking care of complex mesh-inconsistency problems. The algorithm features a high decimation ratio per step, and inherently discourages any cases of self-intersection of the boundary, element-boundary intersection at concave boundary regions, and negative-volume tetrahedra (flipping). We achieved reduction ratios of up to 98% for meshes consisting of 827,904 elements in less than 2 minutes, progressing through a series of levels of detail (LoDs) of the mesh in a controlled manner. We describe how the approach supports a balanced redistribution of space between tetrahedral elements, and explain some useful control parameters that make it faster and more intuitive than 'edge collapse'-based decimation methods for volumetric meshes [3, 19, 21, 22]. Finally, we discuss how this approach can be employed for rapid LoD prototyping of large time-varying datasets as an aid to interactive visualization.
Hybrid meshes: Multiresolution using regular and irregular refinement
, 2002
Cited by 29 (2 self)

Abstract:
A hybrid mesh is a multiresolution surface representation that combines advantages from regular and irregular meshes. Irregular operations allow a hybrid mesh to change topology throughout the hierarchy and approximate detailed features at multiple scales. A preponderance of regular refinements allows for efficient data structures and processing algorithms. We provide a user-driven procedure for creating a hybrid mesh from scanned geometry and present a progressive hybrid mesh compression algorithm.
Statistical Point Geometry
, 2003
Cited by 23 (1 self)

Abstract:
We propose a scheme for modeling point sample geometry with statistical analysis. In our scheme we depart from the current schemes that deterministically represent the attributes of each point sample. We show how the statistical analysis of a densely sampled point model can be used to relieve the geometry bandwidth bottleneck and to do randomized rendering without sacrificing visual realism. We first carry out a hierarchical principal component analysis (PCA) of the model. This stage partitions the model into compact local geometries by exploiting local coherence. Our scheme handles vertex coordinates, normals, and color. The input model is reconstructed and rendered using a probability distribution derived from the PCA. We demonstrate the benefits of this approach in all stages of the graphics pipeline: (1) orders-of-magnitude improvement in the storage and transmission complexity of point geometry, (2) direct rendering from compressed data, and (3) view-dependent randomized rendering.
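A drastically simplified, hypothetical sketch of the statistical idea: partition the points into local cells, keep only a mean and covariance per cell, and "render" by resampling from each cell's Gaussian. (The paper builds a hierarchical PCA tree and also handles normals and color; here cells come from a naive sort-and-split, so this is a sketch of the principle only.)

```python
import numpy as np

def fit_gaussians(points, n_cells):
    # Partition the point set into local cells and summarize each cell
    # by a Gaussian (mean + covariance). The covariance's eigenvectors
    # are exactly the principal components of that cell.
    order = np.argsort(points[:, 0])          # naive split along x
    cells = np.array_split(points[order], n_cells)
    return [(c.mean(axis=0), np.cov(c.T)) for c in cells]

def resample(gaussians, per_cell, rng):
    # Randomized "rendering": draw fresh samples from each cell's
    # distribution instead of storing every original point.
    return np.vstack([rng.multivariate_normal(m, C, per_cell)
                      for m, C in gaussians])
```

The compression ratio follows directly: each cell costs a 3-vector mean plus a symmetric 3×3 covariance (9 floats) regardless of how many points it summarizes.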
Error-resilient transmission of 3D models
ACM Trans. on Graphics, 2005
Cited by 23 (0 self)

Abstract:
In this article, we propose an error-resilient transmission method for progressively compressed 3D models. The proposed method is scalable with respect to both channel bandwidth and channel packet-loss rate. We jointly design source and channel coders using a statistical measure that (i) calculates the number of both source and channel coding bits, and (ii) distributes the channel coding bits among the transmitted refinement levels in order to maximize the expected decoded model quality. In order to keep the total number of bits before and after applying error protection the same, we transmit fewer triangles in the latter case to accommodate the channel coding bits. When the proposed method is used to transmit a typical model over a channel with a 10% packet-loss rate, the distortion (measured using the Hausdorff distance between the original and the decoded models) is reduced by 50% compared to the case when no error protection is applied.
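The allocation in step (ii) can be illustrated with a toy greedy optimizer: under a binomial packet-loss model, a (k + r, k) erasure code recovers a level if at most r of its packets are lost, and in progressive decoding a level contributes its quality gain only if every coarser level also survived. The sketch below is our own simplification for illustration, not the paper's statistical measure:

```python
from math import comb

def survive_prob(k, r, p):
    # Probability that at most r of the k + r packets of one level are
    # lost, i.e. that a (k + r, k) erasure code recovers the level.
    n = k + r
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(r + 1))

def expected_quality(repair, data, gains, p):
    # Progressive decoding: a refinement level only helps if every
    # coarser level was recovered too.
    q, alive = 0.0, 1.0
    for k, r, g in zip(data, repair, gains):
        alive *= survive_prob(k, r, p)
        q += alive * g
    return q

def allocate(budget, data, gains, p):
    # Greedily spend each repair packet where it raises the expected
    # decoded quality the most.
    repair = [0] * len(data)
    for _ in range(budget):
        best, best_q = 0, -1.0
        for i in range(len(data)):
            repair[i] += 1
            q = expected_quality(repair, data, gains, p)
            repair[i] -= 1
            if q > best_q:
                best, best_q = i, q
        repair[best] += 1
    return repair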