### Table 7. Compression results (b/v) with no quantization (128 b/v in raw data), compared with lossless flipping ILS-flip. For Blunt and Post, we show both of our results without (left of /) and with (right of /) the special treatment of the z coordinate.

"... In PAGE 9: ... Therefore, lossless Flipping is not a competitive predictor for our volume meshes. Moreover, we compare our lossless compression results with those of ILS-flip, as shown in Table 7, where we also show our results after adding the raw cost ( +lg n ) and the encoding cost ( +perm ) of vertex permutation when integrated with the connectivity coder [40]. It is interesting to see that ILS-flip compressed better than the 8-bit entropies shown in Table 6.... In PAGE 9: ... This is due to the fact that the ILS code does not encode by units of bytes and is able to capture the dependencies between bytes, and therefore is not lower bounded by the 8-bit entropies. More importantly, from Table 7 we see that our final compression results (the entries in +perm ) are always better than those of ILS-flip, showing the efficacy of our approach. Now, we want to see the compression performance of our methods when an initial quantization (i.... ..."
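The snippets above repeatedly use the 8-bit (order-0) entropy of a byte stream as a compression baseline in bits per vertex (b/v). As a minimal sketch of that baseline (the function names and the vertex count below are illustrative, not taken from the paper):

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical 8-bit (order-0) entropy of a byte stream, in bits per byte."""
    n = len(data)
    counts = Counter(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def bits_per_vertex(data: bytes, num_vertices: int) -> float:
    """Estimated entropy-coded size expressed in bits per vertex (b/v)."""
    return entropy_bits_per_byte(data) * len(data) / num_vertices
```

For example, 16 vertices stored as 16 raw bytes each with a uniform byte distribution give 8 bits/byte, i.e. the 128 b/v raw rate quoted in the caption.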

### Table 6. The 8-bit entropy (b/v) of the prediction errors of lossless Flipping and the original input data. The raw-data bit rates are: 128 b/v for the first four datasets, 416 b/v for Tpost10, and 736 b/v for Tpost20.

"... In PAGE 9: ... Moreover, we compare our lossless compression results with those of ILS-flip, as shown in Table 7, where we also show our results after adding the raw cost ( +lg n ) and the encoding cost ( +perm ) of vertex permutation when integrated with the connectivity coder [40]. It is interesting to see that ILS-flip compressed better than the 8-bit entropies shown in Table 6. This is due to the fact that the ILS code does not encode by units of bytes and is able to capture the dependencies between bytes, and therefore is not lower bounded by the 8-bit entropies.... In PAGE 10: ... For our method, we also list the results of adding the extra cost of encoding the permutation sequence ( +perm ), which are our final results. As before, ILS-flip compressed much better than the 8-bit entropies shown in Table 6, showing that the ILS code is a nice coding technique. More importantly, we see from Table 11 that our final compression results (those in +perm ) are always much better than those of ILS-flip, with the best one (A-Cg10 on Tpost20) 104.... In PAGE 10: ...he best one (A-Cg10 on Tpost20) 104.14 b/v (28.09%) more efficient, despite paying the additional cost of encoding the vertex permutation. This shows that while lossless Flipping does not predict well as seen in Table 6, our technique, in particular the vertex re-ordering approach, is quite effective in achieving compression efficiencies. In Table 12, we compare the results of applying our method as well as Flipping after an initial 32-bit quantization was performed.... ..."

### Table 8. 8-bit entropy results (b/v) with 32-bit quantization (128 b/v in raw data).

"... In PAGE 10: ... Observe that at this time AC(S) is always better than AC(A) for encoding the flipping errors. Moreover, it is interesting to see that our encoding results (A-Cg) are better than their 8-bit entropies (TSP-MST) listed in Table 8 in all cases except for two (Comb 216 and Comb 512). This is due to the fact that our two-layer arithmetic coding technique is able to capture the dependencies between bytes whereas a simple 8-bit entropy calculation is not, therefore our encoding results are not lower bounded by the 8-bit entropies.... ..."
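The claim that a coder capturing inter-byte dependencies is not lower bounded by the 8-bit entropy can be illustrated with a first-order (one-byte-context) conditional entropy. This sketch only demonstrates the general principle; it is not the paper's two-layer arithmetic coder:

```python
import math
from collections import Counter, defaultdict

def order0_entropy(data: bytes) -> float:
    """Plain 8-bit entropy: ignores all inter-byte dependencies."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def order1_entropy(data: bytes) -> float:
    """Conditional entropy H(X_i | X_{i-1}): models adjacent-byte dependencies."""
    contexts = defaultdict(Counter)
    for prev, cur in zip(data, data[1:]):
        contexts[prev][cur] += 1
    total = len(data) - 1
    h = 0.0
    for counts in contexts.values():
        m = sum(counts.values())
        h += m / total * -sum(c / m * math.log2(c / m) for c in counts.values())
    return h
```

On a strictly alternating stream such as `bytes([0, 1] * 1000)`, the order-0 entropy is 1 bit/byte while the conditional entropy is 0: a context-aware coder can go far below the 8-bit figure, exactly as observed for the ILS code and the A-Cg results above.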

### Table 1. Columns: W0, V = B(V), Iterations, CPU Time

2004

"... In PAGE 14: ...) For different initial intervals W0 we computed V = B(V ) and in all cases convergence to V was achieved relatively quickly. Information regarding the computation of V is presented in Table 1. All the computations of V take 100 equally spaced nodes at every iteration.... ..."

### Table 5: Compression results (b/v) with no quantization and the special treatment of the z coordinate.

2005

"... In PAGE 9: ... For this, we slightly modify our algorithm and give the z-values a special treatment: all the partitioning/clustering and the TSP-MST vertex re-ordering steps are the same; only at the final encoding step, we code the entire z-values by gzip, while the x, y and scalar values are encoded as before (A-Cg for the mantissa differences and gzip for the signed exponents). We show the results in Table 5. Observe that gzip compressed the z-values from 32 b/v to 0.... In PAGE 9: ...07 and 0.05 b/v! With our special treatment for z, our results are now better than Gzip for these two special datasets (see Table 5). In general, at the final encoding step, we can first try to gzip each of the entire x, y, z and scalar values to see if any of them deserves a special treatment, and then proceed to compress the remaining portions with our normal technique as above.... ..."
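The effect described above — gzip collapsing highly redundant z coordinates from 32 b/v to a small fraction of a bit — can be reproduced with a toy example. The vertex count and the constant z value here are hypothetical, chosen only to mimic a planar-slice dataset:

```python
import struct
import zlib

# Hypothetical dataset: 100,000 vertices whose z coordinates are all identical
# (as in a planar slice), stored as raw little-endian 32-bit floats (32 b/v).
num_vertices = 100_000
z_raw = struct.pack(f"<{num_vertices}f", *([1.5] * num_vertices))

# zlib uses the same DEFLATE stream as gzip, so the ratio is representative.
z_gz = zlib.compress(z_raw, level=9)

bpv_raw = 8 * len(z_raw) / num_vertices  # 32.0 b/v before compression
bpv_gz = 8 * len(z_gz) / num_vertices    # a small fraction of a bit per vertex
```

This is also why the "first try gzip on each coordinate stream" heuristic in the snippet is cheap to apply: one compression pass per stream immediately reveals whether a coordinate deserves the special treatment.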

Cited by 2

### Table 6: The 8-bit entropy (b/v) of the prediction errors of lossless Flipping and the original input data.

2005

"... In PAGE 10: ... Flipping is widely considered the state of the art when applied after quantization, and floating-point flipping methods (with no quantization) were recently given in [ILS04] for polygonal meshes. We computed the 8-bit entropy of the prediction errors of lossless Flipping, as well as the 8-bit entropy of the original input data, as shown in Table 6. Interestingly, lossless Flipping actually increases the entropy for all our datasets, including steady-state and time-varying ones (such events have been observed in [ILS04] for polygonal meshes, but only for very few datasets).... In PAGE 10: ... We also compared with Gzip, 8-bit adaptive and static arithmetic coding (AC(A) and AC(S)), on the original input data. (As seen in Table 6, lossless Flipping did not predict well and hence we did not compare with lossless Flipping.) We can see that Gzip is the best among the three, and our results are always significantly better than Gzip, with the best one... ..."

Cited by 2

### Table 7: 8-bit entropy results with 32-bit quantization (128 b/v in raw data).

2005

"... In PAGE 10: ... Note that now our method does not use gzip at all (since there is no exponent). To compare the prediction performance, we computed the 8-bit entropy of the prediction errors using our TSP-MST method and Flipping; the results are shown in Table 7. To make the comparison fair we also show our results after adding to the entropy the raw cost ( +lg n ) and the encoding cost ( +perm ) of vertex permutation when integrated with the... [footnote: Surprisingly, Spx has only 2896 vertices (14....] ..."
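The "+lg n" raw cost of the vertex permutation mentioned above can be made concrete: naively, naming one of n vertices at each step costs lg n bits per vertex, while the information-theoretic minimum for a full permutation is lg(n!)/n ≈ lg n − lg e b/v. A sketch (the helper names are ours, not the paper's):

```python
import math

def perm_cost_naive_bpv(n: int) -> float:
    """Naive raw cost: lg n bits to name each of the n vertices."""
    return math.log2(n)

def perm_cost_optimal_bpv(n: int) -> float:
    """Information-theoretic cost of one of n! permutations, per vertex:
    lg(n!)/n, computed stably via lgamma instead of factorial."""
    return math.lgamma(n + 1) / (n * math.log(2))
```

For n = 2896 (the Spx vertex count quoted in the footnote), the naive cost is about 11.5 b/v, which shows why the +lg n column is a non-trivial addition to the entropy figures.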

Cited by 2

### Table 6.1: The B(v, k, λ) design families classified.

2001

Cited by 2