
## Semi-Supervised Learning Using Gaussian Fields and Harmonic Functions (2003)


### Download Links

- [www.gatsby.ucl.ac.uk]
- [www.aaai.org]
- [mlg.eng.cam.ac.uk]

### Other Repositories/Bibliography

- CiteULike
- DBLP

Venue: ICML

Citations: 726 (14 self)

### Citations

3724 | Normalized cuts and image segmentation - Shi, Malik - 2000 |

2098 | Fast Approximate Energy Minimization via Graph Cuts - Boykov, Veksler, et al.

Citation Context ... are given in terms of a similarity function between instances. Unlike other recent work based on energy minimization and random fields in machine learning (Blum & Chawla, 2001) and image processing (Boykov et al., 2001), we adopt Gaussian fields over a continuous state space rather than random fields over the discrete label set. This “relaxation” to a continuous rather than discrete sample space results in many att... |

1673 | On spectral clustering: Analysis and an algorithm - Ng, Jordan, et al. - 2001

Citation Context ...o semi-supervised learning as well. In general, when W is close to block diagonal, it can be shown that data points are tightly clustered in the eigenspace spanned by the first few eigenvectors of the Laplacian (Ng et al., 2001a; Meila & Shi, 2001), leading to various spectral clustering algorithms. Perhaps the most interesting and substantial connection to the methods we propose here is the graph mincut approach proposed b... |
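The block-diagonal intuition in this excerpt can be illustrated with a short sketch (the graph weights below are invented for illustration): when the weight matrix W is nearly block diagonal, the eigenvector of the graph Laplacian with the second-smallest eigenvalue separates the blocks by sign.

```python
import numpy as np

# Toy weight matrix: two dense blocks joined by one weak edge.
n = 20
W = np.zeros((n, n))
W[:10, :10] = 1.0           # block 1: fully connected
W[10:, 10:] = 1.0           # block 2: fully connected
np.fill_diagonal(W, 0.0)    # no self-edges
W[0, 10] = W[10, 0] = 0.01  # weak link between the blocks

D = np.diag(W.sum(axis=1))      # degree matrix
L = D - W                       # combinatorial Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]         # eigenvector of the second-smallest eigenvalue
clusters = (fiedler > 0).astype(int)
print(clusters)                 # nodes 0-9 get one label, nodes 10-19 the other
```

Thresholding the sign of this eigenvector is the simplest spectral bipartitioning; the spectral clustering algorithms cited above embed points into several such eigenvectors before clustering.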

512 | Large margin classification using the perceptron algorithm - Freund, Schapire - 1999

Citation Context ...one node is among the other's 10 nearest neighbors, as measured by cosine similarity. We use a weight function on the edges derived from the cosine similarity. We use one-nearest neighbor and the voted perceptron algorithm (Freund & Schapire, 1999) (10 epochs with a linear kernel) as baselines–our results with support vector machines are comparable. The results are shown in Figure 4. As before, each point is the average of 1... |
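A graph construction of this kind can be sketched as follows; the exact weight function in the excerpt is garbled in the extraction, so the exponential-of-cosine-similarity form and the values of `k` and `sigma` here are assumptions, not the paper's recipe.

```python
import numpy as np

def knn_cosine_graph(X, k=10, sigma=0.3):
    """Symmetrized k-nearest-neighbor graph with weights that decay
    exponentially in (1 - cosine similarity). k and sigma are illustrative."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = Xn @ Xn.T                     # pairwise cosine similarities
    np.fill_diagonal(C, -np.inf)      # exclude self-edges
    W = np.zeros_like(C)
    for i in range(len(X)):
        nbrs = np.argsort(C[i])[-k:]  # indices of the k most similar points
        W[i, nbrs] = np.exp(-(1.0 - C[i, nbrs]) / sigma)
    return np.maximum(W, W.T)         # connect i, j if either is a k-NN of the other

rng = np.random.default_rng(0)
W = knn_cosine_graph(rng.normal(size=(30, 5)), k=5)
```

Symmetrizing with `np.maximum` keeps the weight matrix valid for the Laplacian-based methods discussed throughout this listing.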

471 | Random Walks and Electric Networks - Doyle, Snell - 1984

Citation Context ...which is consistent with our prior notion of the smoothness of f with respect to the graph. Expressed slightly differently, f = Pf, where P = D⁻¹W. Because of the maximum principle of harmonic functions (Doyle & Snell, 1984), f is unique and is either a constant or it satisfies 0 < f(j) < 1 for unlabeled j. To compute the harmonic solution explicitly in terms of matrix operations, we split the weight matrix W (and similarly D, P... |
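The matrix computation described in this excerpt can be sketched in a few lines (the 5-node graph and its weights are invented for illustration): fix f on the labeled nodes and solve the harmonic condition on the unlabeled block.

```python
import numpy as np

# Toy graph: nodes 0 and 1 are labeled (f = 0 and f = 1), nodes 2-4 unlabeled.
W = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)
f_l = np.array([0.0, 1.0])   # values clamped on the labeled nodes
l = 2                        # number of labeled nodes

D = np.diag(W.sum(axis=1))   # degree matrix
L = D - W                    # combinatorial Laplacian
# Harmonic condition on the unlabeled block: L_uu f_u = W_ul f_l.
f_u = np.linalg.solve(L[l:, l:], W[l:, :l] @ f_l)
print(f_u)                   # [0.375 0.625 0.5] -- strictly between 0 and 1,
                             # as the maximum principle guarantees
```

Each unlabeled value comes out as the weighted average of its neighbors' values, which is exactly the harmonic property.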

327 | Learning from Labeled and Unlabeled Data Using Graph Mincuts - Blum, Chawla - 2001

Citation Context ...labeled and labeled data, where the weights are given in terms of a similarity function between instances. Unlike other recent work based on energy minimization and random fields in machine learning (Blum & Chawla, 2001) and image processing (Boykov et al., 2001), we adopt Gaussian fields over a continuous state space rather than random fields over the discrete label set. This “relaxation” to a continuous rather tha... |

292 | Correctness of belief propagation in Gaussian graphical models of arbitrary topology - Weiss, Freeman |

281 | Handwritten digit recognition with a back-propagation network - LeCun - 1989 |

269 | A database for handwritten text recognition research - Hull - 1994

Citation Context ...ward extension of the above analysis. 7. Experimental Results We first evaluate harmonic energy minimization on a handwritten digits dataset, originally from the Cedar Buffalo binary digits database (Hull, 1994). The digits were preprocessed to reduce the size of each image down to a 16 × 16 grid by down-sampling and Gaussian smoothing, with pixel values ranging from 0 to 255 (Le Cun et al., 1990). Each i... |
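The preprocessing pipeline described here (Gaussian smoothing followed by down-sampling to a small grid) can be sketched in plain NumPy; the kernel width and the block-average down-sampling scheme below are assumptions, not the paper's exact recipe.

```python
import numpy as np

def gaussian_kernel(sigma=1.0, radius=2):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def preprocess_digit(img, out=16, sigma=1.0):
    """Smooth with a separable Gaussian, then block-average down to out x out."""
    img = img.astype(float)
    k = gaussian_kernel(sigma)
    # separable convolution: rows, then columns (zero-padded 'same' size)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
    f = img.shape[0] // out              # down-sampling factor
    img = img[:f * out, :f * out]
    return img.reshape(out, f, out, f).mean(axis=(1, 3))

small = preprocess_digit(np.full((32, 32), 200.0))
```

Smoothing before down-sampling suppresses aliasing, so nearby digit images stay nearby under the similarity measures used to build the graph.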

245 | Partially labeled classification with Markov random walks - Szummer, Jaakkola - 2001

Citation Context ... (2001), however there are two major differences. First, we fix the value of f on the labeled points, and second, our solution is an equilibrium state, expressed in terms of a hitting time, while in (Szummer & Jaakkola, 2001) the walk crucially depends on the time parameter t. We will return to this point when discussing heat kernels. An electrical network interpretation is given in (Doyle & Snell, 1984). Imagine the edge... |
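The equilibrium random-walk picture contrasted here (an absorption probability with no time parameter) can be checked by simulation; the small graph below is invented for illustration, and the Monte Carlo estimate converges to the harmonic value at the start node.

```python
import numpy as np

# Invented toy graph; nodes 0 and 1 are labeled with values 0 and 1.
W = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)
labels = {0: 0.0, 1: 1.0}

def absorption_value(start, n_walks=10000, seed=0):
    """Average label at which a weight-proportional random walk from
    `start` is absorbed; this equals the harmonic solution at `start`."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        node = start
        while node not in labels:
            p = W[node] / W[node].sum()      # step to a neighbor w.p. proportional to weight
            node = rng.choice(len(W), p=p)
        total += labels[node]
    return total / n_walks

print(absorption_value(2))   # close to 0.375, the harmonic value at node 2
```

No time parameter appears anywhere: the walk runs until absorption, which is the difference from the finite-time walks of Szummer & Jaakkola noted in the excerpt.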

218 | Diffusion kernels on graphs and other discrete input spaces - Kondor, Lafferty - 2002 |

213 | A random walks view of spectral segmentation - Meila, Shi - 2001

Citation Context ...learning as well. In general, when W is close to block diagonal, it can be shown that data points are tightly clustered in the eigenspace spanned by the first few eigenvectors of the Laplacian (Ng et al., 2001a; Meila & Shi, 2001), leading to various spectral clustering algorithms. Perhaps the most interesting and substantial connection to the methods we propose here is the graph mincut approach proposed by Blum and Chawla (2... |

187 | Cluster kernels for semi-supervised learning - Chapelle, Weston, et al. - 2003 |

96 | Using manifold structure for partially labelled classification - Belkin, Niyogi |

58 | Discrete Green’s functions - Chung, Yau - 2000

Citation Context ... Expression (7) shows that this approach can be viewed as a kernel classifier with a specific form of kernel machine. (See also (Chung & Yau, 2000), where a normalized Laplacian is used instead of the combinatorial Laplacian.) From (6) we also see that the spectrum of the Laplacian provides a connection to the work of Chapelle et al. (2002), who manipulate the eig... |

42 | PAC-Bayesian generalization error bounds for gaussian process classification. Informatics report series EDI-INF-RR-0094 - Seeger - 2002 |

33 | Learning with labeled and unlabeled data (Technical Report) - Seeger - 2001

Citation Context ... of central importance in machine learning. The semi-supervised learning problem has attracted an increasing amount of interest recently, and several novel approaches have been proposed; we refer to (Seeger, 2001) for an overview. Among these methods is a promising family of techniques that exploit the “manifold structure” of the data; such methods are generally based upon an assumption that similar unlabeled...