### Table 1: Unsupervised distance metric learning methods. This group of methods essentially learns a low-dimensional embedding of the original feature space, and can be categorized along two dimensions: preserving global structure vs. preserving local structure, and linear vs. nonlinear.

2007

### Table 2. The proportion of square modular matrices of low-dimensional kernel.

1999

Cited by 19

### Table 1: Algorithm for planning in low-dimensional belief space.

in Abstract

"... In PAGE 4: ... Our conversion algorithm is a variant of the Augmented MDP, or Coastal Navigation algorithm [9], using belief features instead of entropy. Table 1 outlines the steps of this... ..."

### Table 4: Communication cost for low-dimensional hypercube matrices with domain partitioning for p = 100

1994

"... In PAGE 22: ... c ≤ 4/(2 nz(A)/p) = 2p/nz(A). (21) This implies that problems with more than 200,000 nonzeros can be solved efficiently on a 100-processor BSP computer with l ≤ 1000. 7 Results for structure dependent distributions Table 4 shows the normalised communication cost for hypercube matrices of distance one and dimension d = 2, 3, 4, distributed by domain partitioning of the corresponding hypercube graph. The radix r is the number of points in each dimension, and P_k, 0 ≤ k < d, is the number of subdomains into which dimension k is split.... In PAGE 22: ... The distribution of the grid points, and hence of the vector components, uniquely determines the distribution of the matrix. The results of Table 4 show that the lowest communication cost for separate dimension splitting is achieved if the resulting blocks are cubic. This is an immediate consequence of the surface-to-volume effect: the communication across the block boundaries grows as the number of points near the surface, while the computation grows as the number of points within the volume of the block.... In PAGE 22: ... By symmetry, the same argument holds for sending. Therefore, the normalised communication cost for cubic partitioning is b = 2d p^{1/d} / ((4d + 1) r) ≈ p^{1/d} / (2r). (22) This formula explains the results for d = 2 and P_0 = P_1 = 10 in Table 4. It implies, for instance, that two-dimensional grid problems with more than 45 grid points per direction can be solved efficiently on 100-processor BSP computers with g ≤ 10.... ..."

Cited by 81
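The cubic-partitioning cost in eq. (22) of the excerpt above can be checked numerically. Reading the formula as b = 2d·p^{1/d}/((4d+1)·r) (my interpretation of the extracted text, not code from the paper), a short sketch tabulates how the cost falls with the grid radix r and confirms the p^{1/d}/(2r) approximation:

```python
# Hypothetical sketch: tabulate the normalised communication cost of
# eq. (22), read as b = 2*d*p**(1/d) / ((4*d + 1)*r), for cubic
# domain partitioning of a d-dimensional grid over p processors.

def comm_cost(d: int, p: int, r: int) -> float:
    """Normalised communication cost for cubic partitioning (surface-to-volume)."""
    return 2 * d * p ** (1 / d) / ((4 * d + 1) * r)

if __name__ == "__main__":
    p = 100  # processors, as in Table 4
    for d in (2, 3, 4):
        for r in (10, 45, 100):
            b = comm_cost(d, p, r)
            approx = p ** (1 / d) / (2 * r)  # the p^{1/d}/(2r) approximation
            print(f"d={d} r={r:3d} b={b:.4f} approx={approx:.4f}")
```

For d = 2, p = 100, r = 45 this gives b ≈ 0.099, i.e. roughly one communicated word per ten flops, consistent with the excerpt's claim that such problems run efficiently when g is modest.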

### Table 1. Upper bounds on the coding gain of low-dimensional lattices

1999

"... In PAGE 8: ...2 can be converted into an upper bound on the highest possible coding gain that may be achieved, at specific symbol error probabilities, using any n-dimensional lattice. The resulting bounds for n = 1, 2, ..., 32 are summarized in Table 1, and compared with the nominal coding gains of the best known lattices in the corresponding dimensions. Coding gain is usually defined in terms of the signal-to-noise ratios required by the coded and uncoded systems to achieve a given probability of error.... In PAGE 8: ... For the sake of brevity, we only consider the case where n is even. The development for n odd is similar [27], and the results are summarized in Table 1 for all odd n ≤ 31. For even n, let k = n/2 and define the function g_k(x) := e^{-x} (1 + x/1! + x^2/2! + ... + x^{k-1}/(k-1)!) (14) The lower bound (9) of Theorem 2.... In PAGE 10: ...heorem 2.4. Let Λ be an n-dimensional lattice, and let n = 2k. Then the coding gain of Λ over Z^n is upper bounded by γ_e(Λ) ≤ (k, P_e)^2 z(k, P_e) / (4 (k!)^{1/k}) (19) The coding gain γ_e(Λ) is defined in terms of lattice SNR, and the foregoing bound is parametrized by both the dimension and the probability of symbol error. The bound of (19) is tabulated for normalized error probabilities P_e = 10^{-5}, 10^{-6}, 10^{-7} and dimensions n ≤ 32 in Table 1. All the entries in Table 1 are given in dB.... In PAGE 10: ...4 is not asymptotic for P_e → 0; it is reasonably tight for symbol error rates of practical interest. As can be seen from Table 1, it is generally much tighter than the results obtained by computing the nominal (asymptotic for P_e → 0) coding gains of the densest known n-dimensional lattices, and/or the upper bounds thereupon.... ..."

Cited by 5
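The function g_k of eq. (14) in the excerpt above is a truncated exponential series, equivalently the probability that a Poisson(x) variable is at most k−1. A minimal sketch (my own illustration, not the paper's code) evaluates it stably by accumulating terms:

```python
# Hypothetical sketch of g_k from eq. (14):
# g_k(x) = e^{-x} (1 + x/1! + x^2/2! + ... + x^{k-1}/(k-1)!),
# i.e. P[Poisson(x) <= k-1].
import math

def g(k: int, x: float) -> float:
    """Truncated exponential series of eq. (14), accumulated term by term."""
    term = 1.0    # current term x^i / i!, starting at i = 0
    total = 1.0
    for i in range(1, k):
        term *= x / i        # x^i / i! from x^{i-1} / (i-1)!
        total += term
    return math.exp(-x) * total
```

For k = 1 this reduces to g_1(x) = e^{-x}, and g_k(0) = 1 for every k, which is a quick sanity check on the series.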

### Table 1. Average MAEs for both neighborhood dimensions (high-dimensional vs. low-dimensional)

"... In PAGE 9: ... Figure 3 includes the Mean Absolute Errors for high (ib) and low (svd-ib) dimensions, as observed for each of the 5 data splits of the data set. These error values are then averaged, and Table 1 records the final results for both implementations. From both the preceding figure and table, we can conclude that applying Item-based Filtering on the low-rank neighborhood provides a clear improvement over the higher-dimension neighborhood.... ..."

Cited by 1
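The svd-ib idea in the excerpt above, forming item neighborhoods in an SVD-reduced space rather than on raw ratings, can be sketched as follows. This is a generic illustration under my own assumptions (dense rating matrix, cosine similarity, a weighted average over the k nearest items), not the paper's implementation:

```python
# Hypothetical sketch: item-based neighborhood prediction computed in an
# SVD-reduced item space (the low-rank, "svd-ib"-style variant).
import numpy as np

def svd_item_predict(R: np.ndarray, rank: int, user: int, item: int, k: int = 2) -> float:
    """Predict R[user, item] from the k most similar items in rank-r factor space."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    item_factors = (np.diag(s[:rank]) @ Vt[:rank]).T   # one low-rank row per item
    target = item_factors[item]
    # cosine similarity of the target item to every item
    norms = np.linalg.norm(item_factors, axis=1) * np.linalg.norm(target)
    sims = item_factors @ target / np.maximum(norms, 1e-12)
    sims[item] = -np.inf                               # exclude the item itself
    neighbors = np.argsort(sims)[-k:]                  # k nearest items
    weights = sims[neighbors]
    return float(weights @ R[user, neighbors] / weights.sum())
```

Comparing items by their low-rank factors is what distinguishes this from plain item-based filtering: the similarity is computed on the rank-r representation, so noise in individual ratings is smoothed out before the neighborhood is formed.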

### Table 4.1: Low-dimensional nonstiff ODE system: classical approach.

2005

### Table 2. The proportion of square modular matrices of low-dimensional kernel. Prime modulus p = 2, 3, 5
