### Table 2. Transductive Setting: Error Rates (100 − PRBEP for WebKB) on unlabeled examples. Results on which Laplacian SVMs (LapSVM) and Laplacian RLS (LapRLS) outperform all other methods are shown in bold. LapSVMjoint and LapRLSjoint use the sum of the graph Laplacians of the WebKB representations. Results for Graph-Trans, TSVM, ∇TSVM, Graph-density, and LDS are taken from (Chapelle & Zien, 2005)

2005

"... In PAGE 6: ... Transductive Setting In the transductive setting, the training set comprises n examples, l of which are labeled (n and l are specified in Table 1). In Table 2, we lay out a performance comparison of several algorithms in predicting the labels of the n − l unlabeled examples. The experimental protocol is based on (Joachims, 2003) for the WebKB dataset and (Chapelle & Zien, 2005) for other datasets.... ..."

Cited by 30

### Table 1. Precisions for four different approaches: content-feature-based transduction; link-structure-based transduction; linearly combining graph Laplacians; and the Markov mixture model. The numbers in the first line denote the proportion of labeled instances. Each precision result is averaged over 100 trials.

### Table 1: This table compares the performance of RPI using diffusion wavelets and Laplacian eigenfunctions with LSPI using hand-coded polynomial and radial basis functions on a 50-state chain-graph MDP.

in Abstract

2006

"... In PAGE 31: ... Table 1: This table compares the performance of RPI using diffusion wavelets and Laplacian eigenfunctions with LSPI using hand-coded polynomial and radial basis functions on a 50-state chain-graph MDP. Table 1 compares the performance of RPI using diffusion wavelets and Laplacian eigenfunctions, along with LSPI using two hand-coded parametric basis functions: polynomials and radial basis functions (RBFs). Each row reflects the performance of either RPI using learned basis functions or LSPI with a hand-coded basis function (values in parentheses indicate the number of basis functions used for each architecture).... ..."

### Table 1: This table compares the performance of RPI using diffusion wavelets and Laplacian eigenfunctions with LSPI using hand-coded polynomial and radial basis functions on a 50-state chain-graph MDP.

in Abstract

2005

"... In PAGE 10: ... Table 1: This table compares the performance of RPI using diffusion wavelets and Laplacian eigenfunctions with LSPI using hand-coded polynomial and radial basis functions on a 50-state chain-graph MDP. Table 1 compares the performance of RPI using diffusion wavelets and Laplacian eigenfunctions, along with LSPI using two hand-coded parametric basis functions: polynomials and radial basis functions (RBFs). Each row reflects the performance of either RPI using learned basis functions or LSPI with a hand-coded basis function (values in parentheses indicate the number of basis functions used for each architecture).... ..."

### Table 3. Recursive spectral bisection algorithm
1. Form the Laplacian matrix of the dual graph of the mesh.
2. Calculate the Fiedler vector by the Lanczos method.
3. Sort vertices according to the size of their entries in the Fiedler vector.
4. Assign half of the vertices to each subdomain.
5. Repeat recursively on each subdomain.
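A minimal sketch of one level of these steps in Python: a power iteration on a shifted Laplacian (with the constant eigenvector projected out) stands in for the Lanczos method of step 2, the full recursion of step 5 is omitted, and the graph, adjacency-list encoding, and `fiedler_bisect` name are illustrative assumptions rather than the paper's implementation.

```python
def fiedler_bisect(adj):
    """One level of spectral bisection: split vertices by the Fiedler vector.

    `adj` is a symmetric 0/1 adjacency matrix given as a list of lists.
    Power iteration on c*I - L (deflating the constant eigenvector) is a
    simple stand-in for the Lanczos method.
    """
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = 2 * max(deg) + 1  # shift so c - lambda is maximized at the Fiedler value

    def shifted_apply(v):
        # (c*I - L) v, where L = D - A is the graph Laplacian
        return [c * v[i] - deg[i] * v[i] + sum(adj[i][j] * v[j] for j in range(n))
                for i in range(n)]

    v = [float(i) for i in range(n)]  # generic start vector
    for _ in range(500):
        w = shifted_apply(v)
        mean = sum(w) / n
        w = [x - mean for x in w]              # project out the constant eigenvector
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]

    order = sorted(range(n), key=lambda i: v[i])  # step 3: sort by Fiedler entries
    half = n // 2
    return set(order[:half]), set(order[half:])   # step 4: halve the vertex set

# Path graph 0-1-2-3: the natural bisection cuts the middle edge.
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
left, right = fiedler_bisect(adj)
```

On the path graph the Fiedler vector varies monotonically along the chain, so sorting by its entries recovers the natural {0, 1} versus {2, 3} split.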

1995

Cited by 6

### Table 1. The set of test graphs.

"... In PAGE 5: ... The quotient graphs were obtained using Unbalanced Recursive Bisection (URB) in Par2 [11]. Table 1 summarizes the test suite of graphs and their properties. In the table, λ1 and λn−1 are the largest and second smallest eigenvalues of the Laplacian matrix L = D − A associated with graph G (here A is the adjacency matrix of G and D is the diagonal matrix of the degrees of the nodes).... In PAGE 7: ... In fact, for regular graphs, λ1(L) = 2d, and, usually, for irregular graphs, λ1(L) ≈ 2d. For the graphs in our test suite (Table 1), we observed that λ̂1(L) = Δ + d̄, where Δ is the maximum degree and d̄ is the average degree of the graph, and we used this as an estimate of λ1(L). Estimating λn−1(L): Our approach to estimate λn−1(L) is via the isoperimetric constant of the graph.... In PAGE 7: ... Our experiments on the test suite (cf. Table 1) suggest that ½h0(G) is a good estimate for λn−1. In summary, we use λ̂1 = Δ + d̄ and λ̂n−1 = ½h0(G), and choose the two parameters as... ..."
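The excerpt's regular-graph identity λ1(L) = 2d can be checked numerically. A pure-Python sketch on an assumed example graph (the cycle C8, which is 2-regular and bipartite, so the identity holds with equality; the `lap_apply` helper and iteration counts are my own choices):

```python
# Check lambda_1(L) = 2d on a regular bipartite graph: the cycle C_8 (d = 2).
n = 8
adj = [[1 if abs(i - j) % n in (1, n - 1) else 0 for j in range(n)] for i in range(n)]
deg = [sum(row) for row in adj]  # every vertex has degree 2

def lap_apply(v):
    # (L v)(i) with L = D - A, as defined in the excerpt
    return [deg[i] * v[i] - sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]

# Power iteration: the dominant eigenvalue of the positive semidefinite L is lambda_1
v = [float(i + 1) for i in range(n)]
for _ in range(2000):
    w = lap_apply(v)
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

lam1 = sum(vi * wi for vi, wi in zip(v, lap_apply(v)))  # Rayleigh quotient
```

For this graph `lam1` converges to 2d = 4, which also coincides with the maximum-plus-average-degree estimate since the graph is regular.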

### Table 2. Classification error rates (%) and standard deviations when GLKS-SSKM is compared with GLKS-SSL. Columns: Euclidean, Transformation, Tangent, All (distance metrics).

2007

"... In PAGE 7: ..., 2006), we run GLKS-SSKM and GLKS-SSL on four different convex sets of graph Laplacian kernels, with three sets based on the three distance metrics and the last one based on all collected metrics. Table 2 summarizes the experimental results. GLKS-SSKM outperforms GLKS-SSL in different cases, showing that integrating the cluster assumption and the manifold assumption does help.... ..."

Cited by 1

### Table 2. Elementary Landscapes. Columns: Problem, Move Set, K.

"... In PAGE 44: ... Definition. Let Γ be a graph, Δ the Laplacian of Γ, and f a landscape on Γ. We say f is elementary if Δf + K0(f − f̄) = 0, (43) where f̄ = ⟨f⟩ = (1/N) Σ_{x∈Γ} f(x) is the average value of f, and K0 is a constant. Grover (1992) observed that the landscapes of a number of classical combinatorial optimization problems are of this form; see Table 2. In order to keep the notation consistent with Grover's work for regular graphs, we introduce K = K0/D, where D is as usual the vertex degree.... ..."
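A small illustration of the definition (my own example, not from the paper): writing the condition with the graph Laplacian L = D − A (so that Δ = −L), it reads Lf = K0(f − f̄), and on the cycle Cn a Laplacian eigenmode plus a constant offset satisfies it exactly, with K0 equal to the corresponding eigenvalue. The graph, mode index, and offset below are assumptions chosen for the check.

```python
import math

# On the cycle C_n, f(x) = const + cos(2*pi*k*x/n) is a Laplacian eigenmode
# plus an offset, so it should satisfy L f = K0 (f - fbar) with
# K0 = 2 - 2*cos(2*pi*k/n): the elementary-landscape condition (43)
# written with the graph Laplacian L = D - A.
n, k = 8, 1
f = [3.0 + math.cos(2 * math.pi * k * x / n) for x in range(n)]
fbar = sum(f) / n                        # average value of f; equals the offset 3.0
K0 = 2 - 2 * math.cos(2 * math.pi * k / n)

# Laplacian of the cycle acting on f: (L f)(x) = 2 f(x) - f(x-1) - f(x+1)
Lf = [2 * f[x] - f[(x - 1) % n] - f[(x + 1) % n] for x in range(n)]

residual = max(abs(Lf[x] - K0 * (f[x] - fbar)) for x in range(n))
```

The residual is zero up to floating-point error, confirming that this landscape is elementary in the sense of (43).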