Results 1–10 of 5,252
Exact Matrix Completion via Convex Optimization
, 2008
"... We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfe ..."
Cited by 873 (26 self)
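The setup this abstract describes is easy to state concretely: a low-rank matrix M of which only m entries, chosen uniformly at random, are observed, and a convex surrogate (the nuclear norm, i.e. the sum of singular values) in place of rank. A minimal NumPy sketch of that setup, with arbitrary illustrative dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-r matrix M, of which we observe m entries chosen uniformly at random.
n, r, m = 20, 2, 200
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r by construction

idx = rng.choice(n * n, size=m, replace=False)  # m distinct entry positions
mask = np.zeros(n * n, dtype=bool)
mask[idx] = True
mask = mask.reshape(n, n)                       # True where M_ij is observed

# The convex program minimizes the nuclear norm ||X||_* (sum of singular
# values) subject to X agreeing with M on the observed entries.
nuclear_norm = np.linalg.svd(M, compute_uv=False).sum()
print(np.linalg.matrix_rank(M), int(mask.sum()))
```

The paper's result is that, with high probability, the nuclear-norm minimizer recovers M exactly once m is large enough relative to n and r.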
A Singular Value Thresholding Algorithm for Matrix Completion
, 2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of reco ..."
Cited by 555 (22 self)
of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind with over a million unknown entries. This paper develops a simple first-order and easy
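The core step of the algorithm this abstract describes can be sketched in a few lines of NumPy: the singular value thresholding operator, which shrinks every singular value toward zero by a fixed amount. This is an illustrative implementation of the shrinkage operator only, not the paper's full iteration:

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: shrink each singular value of Y by tau
    (flooring at zero). This shrinkage is the proximal map of the nuclear
    norm and is the core step of the SVT algorithm."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
Y = rng.standard_normal((8, 6))
X = svt(Y, tau=1.0)
# Singular values of X are those of Y, each reduced by tau (floored at 0),
# so thresholding can only lower the rank.
print(np.linalg.matrix_rank(X) <= np.linalg.matrix_rank(Y))  # True
```

Because small singular values are zeroed out, each iterate stays low-rank, which is what makes the method cheap on large matrices.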
OpenFlow: Enabling Innovation in Campus Networks
, 2008
"... This whitepaper proposes OpenFlow: a way for researchers to run experimental protocols in the networks they use every day. OpenFlow is based on an Ethernet switch, with an internal flow table, and a standardized interface to add and remove flow entries. Our goal is to encourage networking vendors ..."
Cited by 718 (84 self)
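As a rough illustration of the flow-table idea, an entry can be modeled as a mapping from packet header fields to an action, with a table miss sent to the controller. The field names and class below are hypothetical, not the actual OpenFlow match/action structures, which the specification defines:

```python
# Toy model of a switch flow table: entries map header fields to actions,
# and a small interface adds and removes entries, as the abstract describes.
class FlowTable:
    def __init__(self):
        self.entries = {}

    def add_flow(self, match, action):
        self.entries[match] = action

    def remove_flow(self, match):
        self.entries.pop(match, None)

    def lookup(self, packet):
        # Exact-match lookup; real switches also support wildcards and priorities.
        return self.entries.get(packet, "send-to-controller")

table = FlowTable()
table.add_flow(("10.0.0.1", "10.0.0.2", 80), "forward:port3")
print(table.lookup(("10.0.0.1", "10.0.0.2", 80)))  # forward:port3
print(table.lookup(("10.0.0.9", "10.0.0.2", 22)))  # send-to-controller
```

The "send to controller" default on a miss is what lets researchers run experimental forwarding logic without modifying the switch itself.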
ImageNet classification with deep convolutional neural networks.
 In Advances in Neural Information Processing Systems
, 2012
"... We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, which is considerably better than the pr ..."
Cited by 1010 (11 self)
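The top-1 and top-5 error rates quoted here have a simple definition: an example counts as correct if its true label appears among the k highest-scoring classes. A NumPy sketch with tiny synthetic scores (not ImageNet data):

```python
import numpy as np

def topk_error(scores, labels, k):
    """Fraction of examples whose true label is NOT among the k
    highest-scoring classes, i.e. the top-k error rate."""
    topk = np.argsort(scores, axis=1)[:, -k:]        # k best classes per row
    hit = (topk == labels[:, None]).any(axis=1)
    return 1.0 - hit.mean()

# Four synthetic examples over six classes.
scores = np.array([[0.1, 0.9, 0.0, 0.0, 0.0, 0.0],
                   [0.2, 0.1, 0.6, 0.05, 0.03, 0.02],
                   [0.4, 0.3, 0.1, 0.1, 0.05, 0.05],
                   [0.0, 0.0, 0.0, 0.0, 0.0, 1.0]])
labels = np.array([1, 3, 0, 5])
print(topk_error(scores, labels, 1))  # 0.25 (only example 1 misses at k=1)
print(topk_error(scores, labels, 5))  # 0.0
```

Top-5 error is the standard ImageNet metric because many images contain several plausible objects.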
Summary cache: A scalable wide-area web cache sharing protocol
, 1998
"... The sharing of caches among Web proxies is an important technique to reduce Web traffic and alleviate network bottlenecks. Nevertheless it is not widely deployed due to the overhead of existing protocols. In this paper we propose a new protocol called "Summary Cache"; each proxy keeps a su ..."
Cited by 894 (3 self)
entry. Using trace-driven simulations and a prototype implementation, we show that compared to the existing Internet Cache Protocol (ICP), Summary Cache reduces the number of inter-cache messages by a factor of 25 to 60, reduces the bandwidth consumption by over 50%, and eliminates between 30% and 95
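The summary each proxy keeps is a Bloom filter: a compact bit array that answers "definitely not cached" or "possibly cached", trading a small false-positive rate for large space savings and hence cheap inter-proxy updates. A minimal sketch (the sizes and hash construction below are illustrative choices, not the paper's parameters):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter of the kind used to summarize a proxy's cache
    directory: no false negatives, occasional false positives."""
    def __init__(self, size=1024, hashes=4):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, key):
        # Derive several bit positions per key by salting a single hash.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def might_contain(self, key):
        return all(self.bits[p] for p in self._positions(key))

summary = BloomFilter()
summary.add("http://example.com/page.html")
print(summary.might_contain("http://example.com/page.html"))  # True
```

A false positive only costs one wasted inter-cache request, which is why the protocol tolerates it in exchange for summaries small enough to broadcast.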
Loopy belief propagation for approximate inference: An empirical study. In:
 Proceedings of Uncertainty in AI,
, 1999
"... Recently, researchers have demonstrated that "loopy belief propagation", the use of Pearl's polytree algorithm in a Bayesian network with loops, can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performanc ..."
Cited by 676 (15 self)
likelihood weighting. 3.1 The PYRAMID network. All nodes were binary and the conditional probabilities were represented by tables; entries in the conditional probability tables (CPTs) were chosen uniformly in the range (0, 1]. 3.2 The toyQMR network. All nodes were binary and the conditional probabilities
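The CPT construction described in this snippet is straightforward to reproduce: for a binary node, draw P(child = 1 | parent configuration) uniformly from (0, 1] for each parent configuration. A hypothetical NumPy sketch (the function name is mine, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_binary_cpt(n_parents):
    """CPT for a binary node: P(child = 1 | parent config) is drawn uniformly
    from (0, 1] for each of the 2**n_parents parent configurations, matching
    the experimental setup described in the snippet."""
    # rng.random() is in [0, 1), so 1 - x lies in (0, 1].
    p1 = 1.0 - rng.random(2 ** n_parents)
    return np.stack([1.0 - p1, p1], axis=1)  # columns: P(child=0), P(child=1)

cpt = random_binary_cpt(2)
print(cpt.shape)                           # (4, 2): four parent configs, two states
print(np.allclose(cpt.sum(axis=1), 1.0))   # True: rows are proper distributions
```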
The design and use of algorithms for permuting large entries to the diagonal of sparse matrices
 SIAM J. MATRIX ANAL. APPL
, 1999
and Performance on Multiple-Choice Exams in Large Entry-level Courses
"... Scores on a vocabulary test given at the beginning of two semesters in a large entry-level course predicted performance on multiple-choice exams more strongly than pre-course knowledge and critical thinking. Words on the vocabulary instrument were derived from multiple-choice exam items in the cours ..."
On the distribution of the largest eigenvalue in principal components analysis
 ANN. STATIST
, 2001
"... Let x(1) denote the square of the largest singular value of an n × p matrix X, all of whose entries are independent standard Gaussian variates. Equivalently, x(1) is the largest principal component variance of the covariance matrix X′X, or the largest eigenvalue of a p-variate Wishart distribu ..."
Cited by 422 (4 self)
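The equivalence the abstract states, between the largest eigenvalue of X′X and the square of the largest singular value of X, is easy to check numerically (the dimensions below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# The quantity studied in the abstract: x(1), the largest eigenvalue of X'X
# for an n x p matrix X with i.i.d. standard Gaussian entries, which equals
# the square of the largest singular value of X.
n, p = 200, 50
X = rng.standard_normal((n, p))
largest_eig = np.linalg.eigvalsh(X.T @ X).max()
largest_sv = np.linalg.svd(X, compute_uv=False).max()
print(np.isclose(largest_eig, largest_sv ** 2))  # True: the two definitions agree
```

The paper's contribution is the limiting distribution of this quantity (after centering and scaling) as n and p grow, not the identity itself, which is immediate from the SVD.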