Results 1-10 of 33
Recent advances in compression of 3D meshes
 In Advances in Multiresolution for Geometric Modelling
, 2003
Cited by 83 (3 self)
Summary. 3D meshes are widely used in graphics and simulation applications for approximating 3D objects. When representing complex shapes in a raw data format, meshes consume a large amount of space. Applications calling for compact storage and fast transmission of 3D meshes have motivated the multitude of algorithms developed to efficiently compress these datasets. In this paper we survey recent developments in the compression of 3D surface meshes, covering the main ideas and intuition behind techniques for single-rate and progressive mesh coding. Where possible, we discuss the theoretical results obtained for the asymptotic behavior or optimality of each approach. We also list some open questions and directions for future research.
Out-of-Core Compression for Gigantic Polygon Meshes
, 2003
Cited by 82 (23 self)
Polygonal models acquired with emerging 3D scanning technology or from large-scale CAD applications easily reach sizes of several gigabytes and do not fit in the address space of common 32-bit desktop PCs. In this paper we propose an out-of-core mesh compression technique that converts such gigantic meshes into a streamable, highly compressed representation. During decompression only a small portion of the mesh needs to be kept in memory at any time. As full connectivity information is available along the decompression boundaries, this provides seamless mesh access for incremental in-core processing on gigantic meshes. Decompression speeds are CPU-limited and exceed one million vertices and two million triangles per second on a 1.8 GHz Athlon processor.
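The out-of-core access pattern this abstract describes can be sketched as a streaming loop that holds only the active decompression boundary in memory, finalizing each vertex as soon as its last incident triangle has been read. This is an illustrative Python sketch; the event format below is hypothetical, not the paper's actual codec output:

```python
from collections import defaultdict

def stream_process(events, process_triangle):
    """Process a streamed mesh while keeping only boundary vertices
    in memory (a sketch; the event tuples are hypothetical)."""
    positions = {}                 # only vertices still on the boundary
    remaining = defaultdict(int)   # triangles left that reference a vertex

    for event in events:
        if event[0] == "vertex":           # ("vertex", id, (x, y, z), degree)
            _, vid, pos, degree = event
            positions[vid] = pos
            remaining[vid] = degree
        else:                              # ("triangle", v0, v1, v2)
            _, a, b, c = event
            process_triangle(positions[a], positions[b], positions[c])
            for v in (a, b, c):
                remaining[v] -= 1
                if remaining[v] == 0:      # vertex finalized: free its memory
                    del positions[v]
                    del remaining[v]
    return len(positions)                  # 0 if every vertex was finalized
```

Because the memory footprint is bounded by the boundary size rather than the mesh size, the same loop works for meshes far larger than RAM.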
SwingWrapper: Retiling Triangle Meshes for Better EdgeBreaker Compression
, 2003
Cited by 37 (12 self)
We focus on the lossy compression of manifold triangle meshes. Our SwingWrapper approach partitions the surface of an original mesh M into simply connected regions, called triangloids. From these, we generate a new mesh M'. Each triangle of M' is an approximation of a triangloid of M. By construction, the connectivity of M' is fairly regular and can be compressed to less than a bit per triangle using EdgeBreaker or one of the other recently developed schemes. The locations of the vertices of M' are compactly encoded with our new prediction technique, which uses a single correction parameter per vertex. SwingWrapper strives to reach a user-defined output file size rather than to guarantee a given error bound. For a variety of popular models, a rate of 0.4 bits/triangle yields an L2 distortion of about 0.01% of the bounding box diagonal. The proposed solution may also be used to encode crude meshes for adaptive transmission or for controlling subdivision surfaces.
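The predictive vertex coding SwingWrapper builds on can be illustrated with the classic parallelogram predictor: the new vertex is guessed by completing a parallelogram from the adjacent triangle, and only the residual is stored. This is a simplified sketch; SwingWrapper's actual scheme refines the idea to a single scalar correction per vertex:

```python
def parallelogram_predict(a, b, c):
    """Predict the vertex across edge (b, c) of triangle (a, b, c)
    by completing the parallelogram: p = b + c - a."""
    return tuple(bi + ci - ai for ai, bi, ci in zip(a, b, c))

def encode_correction(actual, a, b, c):
    # The encoder stores only the residual between the actual position
    # and the prediction; good predictions keep residuals small.
    p = parallelogram_predict(a, b, c)
    return tuple(vi - pi for vi, pi in zip(actual, p))
```

The decoder runs the same prediction and adds the transmitted residual back, so both sides stay in sync.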
Progressive Encoding of Complex Isosurfaces
, 2003
Cited by 33 (3 self)
Some of the largest and most intricate surfaces result from isosurface extraction of volume data produced by 3D imaging modalities and scientific simulations. Such surfaces often possess both complicated geometry and topology (i.e., many connected components and high genus). Because of their sheer size, efficient compression algorithms, in particular progressive encodings, are critical in working with these surfaces. Most standard mesh compression algorithms have been designed to deal with generally smooth surfaces of low topological complexity. Much better results can be achieved with algorithms that are specifically designed for isosurfaces arising from volumetric datasets.
Compressing Hexahedral Volume Meshes
 GRAPHICAL MODELS
, 2002
Cited by 17 (8 self)
Unstructured hexahedral volume meshes are of particular interest for visualization and simulation applications. They allow regular tiling of three-dimensional space and show good numerical behaviour in finite element computations. Besides these appealing properties, volume meshes take a huge amount of space when stored in a raw format. In this paper we present a technique for encoding the connectivity and geometry of unstructured hexahedral volume meshes. For ...
Delphi: Geometry-based Connectivity Prediction in Triangle Mesh Compression
 In The Visual Computer, International Journal of Computer Graphics
, 2004
Cited by 14 (1 self)
Delphi is a new geometry-guided predictive scheme for compressing the connectivity of triangle meshes. Both the compression and decompression algorithms traverse the mesh using the EdgeBreaker state machine. However, instead of encoding the EdgeBreaker CLERS symbols that capture connectivity explicitly, they estimate the location of the unknown vertex, v, of the next triangle. If the predicted location lies sufficiently close to the nearest vertex, w, on the boundary of the previously traversed portion of the mesh, then Delphi estimates that v coincides with w. When the guess is correct, a single confirmation bit is encoded. Otherwise, additional bits are used to encode the rectification of that prediction. When v coincides with a previously visited vertex that is not adjacent to the parent triangle (the EdgeBreaker S case), the offset that identifies the vertex v must be encoded, mimicking the Cut-Border Machine compression proposed by Gumhold and Strasser. On models where 97% of Delphi predictions are correct, the connectivity is compressed down to 0.19 bits per triangle. Compression rates degrade as the frequency of wrong predictions increases, but remain below 1.50 bits per triangle for all models tested.
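One encoder step of the predict-and-confirm scheme described above might look roughly like this. The function name, the distance test, and the symbol encoding are illustrative assumptions, not Delphi's actual implementation:

```python
def delphi_encode_step(predicted, boundary_vertices, actual_id, tol):
    """One connectivity-prediction step in the spirit of Delphi (a sketch).
    Returns the symbols the encoder would emit for this triangle."""
    def dist2(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    # Find the boundary vertex nearest to the geometric prediction.
    nearest_id, nearest_pos = min(boundary_vertices.items(),
                                  key=lambda kv: dist2(predicted, kv[1]))
    if dist2(predicted, nearest_pos) <= tol ** 2 and nearest_id == actual_id:
        return ["confirm"]            # a single confirmation bit suffices
    return ["reject", actual_id]      # rectify the wrong prediction
```

When most predictions confirm, the output is dominated by single bits, which is how rates well below one bit per triangle become possible.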
Compressing Texture Coordinates with Selective Linear Predictions
, 2003
Cited by 12 (6 self)
In this paper we describe a strategy for efficient predictive compression of texture coordinates. Previous work in mesh compression often claims that this mesh property can simply be compressed with the same predictor that is already used for vertex positions. However, in the presence of discontinuities in the texture mapping, such an approach results in unreasonable predictions. Our method avoids such predictions altogether. Rather than performing an unreasonable prediction, we switch to a less promising, but at least reasonable predictor. The resulting correctors are then compressed with different arithmetic contexts.
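The selective-prediction idea can be sketched as a predictor switch on seam edges: use the parallelogram rule while the across-edge triangle stays in the same texture chart, and fall back to a cruder but always-reasonable predictor at a discontinuity. This is illustrative; the paper's actual predictor set and seam handling differ:

```python
def predict_texcoord(corner_uvs, seam):
    """Selective linear prediction sketch for texture coordinates.
    corner_uvs = (a, b, c): uv of the across vertex and the shared edge."""
    a, b, c = corner_uvs
    if not seam:
        # Same chart: parallelogram rule in uv space, p = b + c - a.
        return (b[0] + c[0] - a[0], b[1] + c[1] - a[1])
    # Across a seam the parallelogram guess is meaningless;
    # reuse an adjacent uv instead (delta prediction).
    return b
```

Keeping the two predictors' residuals in separate arithmetic contexts, as the abstract describes, lets each residual distribution be modeled tightly.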
Mesh compression with random accessibility
 Israel-Korea Bi-National Conf.
, 2004
Cited by 8 (3 self)
Previous mesh compression techniques provide nice properties such as high compression ratio, progressive decoding, and out-of-core processing. However, none of them supports random accessibility in decoding, which makes the details of any specific part available without decoding other parts. This paper introduces random accessibility to mesh compression and proposes an effective framework for this property. The key component of the framework is a wire-net mesh constructed from a chartification of the given mesh. Experimental results show that random accessibility can be achieved with a competitive compression ratio, only slightly worse than single-rate coding and comparable to progressive encoding.
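The random-access property amounts to decoding only the charts a query touches, which works because each chart is compressed independently against the shared wire-net boundary. A minimal sketch with hypothetical names:

```python
def random_access_decode(compressed_charts, decode_chart, wanted):
    """Decode only the requested charts of a chartified mesh
    (a sketch; the chart codec itself is abstracted away)."""
    return {cid: decode_chart(compressed_charts[cid]) for cid in wanted}
```

The cost of a query is proportional to the size of the charts it touches, not to the size of the whole mesh.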
Geometry compression of tetrahedral meshes using optimized prediction
 In Proc. European Conference on Signal Processing
, 2005
Cited by 7 (3 self)
In this paper we propose a novel geometry compression technique for volumetric datasets represented as tetrahedral meshes. We focus on a commonly used technique for predicting vertex geometries via a flipping operation using an extension of the parallelogram rule. We demonstrate that the efficiency of the flipping operation depends on the order in which tetrahedra are traversed and vertices are predicted accordingly. We formulate the problem of optimally traversing tetrahedra and predicting the vertices via flippings as a combinatorial optimization problem of constructing a constrained minimum spanning tree. We give heuristic solutions for this problem and show that we can achieve prediction efficiency very close to that of the unconstrained minimum spanning tree, which is an unachievable lower bound. We also show significant improvements of our new geometry compression over the state-of-the-art flipping approach, whose traversal order does not take the geometry of the mesh into account.
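One common form of the "flipping" extension of the parallelogram rule predicts the apex of the neighboring tetrahedron by reflecting the opposite vertex through the centroid of the shared face. This is a sketch of that variant; the paper's exact predictor may differ:

```python
def flip_predict(a, b, c, d):
    """Predict the apex of the tetrahedron across face (b, c, d)
    by reflecting the opposite vertex a through the face centroid:
    p = 2 * centroid(b, c, d) - a."""
    centroid = tuple((bi + ci + di) / 3.0 for bi, ci, di in zip(b, c, d))
    return tuple(2.0 * gi - ai for gi, ai in zip(centroid, a))
```

The traversal order matters because each prediction reuses an already-decoded tetrahedron; a well-chosen spanning tree of flips keeps the residuals small, which is exactly the optimization the abstract formulates.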