Results 11–20 of 359
Recovering Sparse Signals Using Sparse Measurement Matrices in Compressed DNA Microarrays
2008
Cited by 43 (2 self)
Microarrays (DNA, protein, etc.) are massively parallel affinity-based biosensors capable of detecting and quantifying a large number of different genomic particles simultaneously. Among them, DNA microarrays comprising tens of thousands of probe spots are currently being employed to test a multitude of targets in a single experiment. In conventional microarrays, each spot contains a large number of copies of a single probe designed to capture a single target and, hence, collects only a single data point. This is a wasteful use of the sensing resources in comparative DNA microarray experiments, where a test sample is measured relative to a reference sample. Typically, only a fraction of the total number of genes represented by the two samples is differentially expressed, and thus a vast number of probe spots may not provide any useful information. To this end, we propose an alternative design, the so-called compressed microarrays, wherein each spot contains copies of several different probes and the total number of spots is potentially much smaller than the number of targets being tested. Fewer spots translate directly to significantly lower costs due to cheaper array manufacturing, simpler image acquisition and processing, and the smaller amount of genomic material needed for experiments. To recover signals from compressed microarray measurements, we leverage ideas from compressive sampling. For sparse measurement matrices, we propose an algorithm that has significantly lower computational complexity than the widely used linear-programming-based methods and can also recover signals with less sparsity.
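The measurement model behind compressed microarrays can be illustrated with a toy example: each spot pools several probes, giving a sparse 0/1 measurement matrix, and a sparse expression vector is recovered from fewer measurements than targets. The matrix, sizes, and the brute-force 1-sparse decoder below are illustrative assumptions for exposition, not the paper's algorithm (which handles far less sparse signals efficiently).

```python
# Toy compressed-microarray model: m spots, n targets; each spot pools a
# few probes, so the measurement matrix A is sparse and 0/1. All values
# here are made up for illustration.
n, m = 8, 4
A = [
    [1, 0, 1, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1, 0],
    [1, 1, 0, 0, 0, 0, 1, 1],
]

x_true = [0.0] * n
x_true[2] = 3.0          # a single differentially expressed target
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

# Naive recovery of a 1-sparse signal: try each column of A and check
# whether one scaled column explains all m measurements exactly.
def recover_1_sparse(A, y):
    for j in range(n):
        col = [A[i][j] for i in range(m)]
        nz = [i for i in range(m) if col[i]]
        if not nz:
            continue
        c = y[nz[0]] / col[nz[0]]
        if all(abs(col[i] * c - y[i]) < 1e-9 for i in range(m)):
            x = [0.0] * n
            x[j] = c
            return x
    return None

print(recover_1_sparse(A, y))
```

With only m = 4 spots the single active target among n = 8 is still pinned down, because the pooling pattern distinguishes the columns.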
Quasirandom Rumor Spreading
In Proc. of SODA’08
2008
Cited by 37 (12 self)
We propose and analyse a quasirandom analogue to the classical push model for disseminating information in networks ("randomized rumor spreading"). In the classical model, in each round each informed node chooses a neighbor at random and informs it. Results of Frieze and Grimmett (Discrete Appl. Math. 1985) show that this simple protocol succeeds in spreading a rumor from one node of a complete graph to all others within O(log n) rounds. For the network being a hypercube or a random graph G(n, p) with p ≥ (1+ε)(log n)/n, also O(log n) rounds suffice (Feige, Peleg, Raghavan, and Upfal, Random Struct. Algorithms 1990). In the quasirandom model, we assume that each node has a (cyclic) list of its neighbors. Once informed, it starts at a random position of the list, but from then on informs its neighbors in the order of the list. Surprisingly, irrespective of the orders of the lists, the above-mentioned bounds still hold. In addition, we also show an O(log n) bound for sparsely connected random graphs G(n, p) with p = (log n + f(n))/n, where f(n) → ∞ and f(n) = O(log log n). Here, the classical model needs Θ(log² n) rounds. Hence the quasirandom model achieves similar or better broadcasting times with a greatly reduced use of random bits.
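The quasirandom push model described above is easy to simulate: each node keeps a cyclic neighbor list, draws one random starting position when it becomes informed, and thereafter contacts neighbors in list order. The sketch below simulates the protocol on the complete graph K_n; the graph size and seed are illustrative choices, not from the paper.

```python
import random

# Quasirandom rumor spreading on the complete graph K_n (a sketch of the
# model in the abstract, not the authors' code). Only the starting
# position in each node's cyclic neighbor list is random.
def quasirandom_spread(n, seed=0):
    rng = random.Random(seed)
    lists = {v: [u for u in range(n) if u != v] for v in range(n)}
    pos = {}                     # next list position for each informed node
    informed = {0}
    pos[0] = rng.randrange(n - 1)
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = []
        for v in informed:
            target = lists[v][pos[v] % (n - 1)]
            pos[v] += 1
            if target not in informed:
                newly.append(target)
        for u in newly:
            informed.add(u)
            pos[u] = rng.randrange(n - 1)
    return rounds

# For n = 64 one typically observes a round count on the order of log2(n),
# matching the O(log n) bound; it can never exceed n - 1 rounds.
print(quasirandom_spread(64))
```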
SINGULARITIES, EXPANDERS AND TOPOLOGY OF MAPS. PART 2: FROM COMBINATORICS TO TOPOLOGY VIA ALGEBRAIC ISOPERIMETRY
GEOMETRIC AND FUNCTIONAL ANALYSIS
2010
Cited by 37 (3 self)
We find lower bounds on the topology of the fibers F⁻¹(y) ⊂ X of continuous maps F: X → Y in terms of combinatorial invariants of certain polyhedra and/or of the cohomology algebras H*(X). Our exposition is conceptually related to, but essentially independent of, Part 1 of the paper.
Sampling community structure
In WWW
2010
Cited by 34 (2 self)
We propose a novel method, based on concepts from expander graphs, to sample communities in networks. We show that our sampling method, unlike previous techniques, produces subgraphs representative of community structure in the original network. These generated subgraphs may be viewed as stratified samples in that they consist of members from most or all communities in the network. Using samples produced by our method, we show that the problem of community detection may be recast as a case of statistical relational learning. We empirically evaluate our approach against several real-world datasets and demonstrate that our sampling method can effectively be used to infer and approximate community affiliation in the larger network.
Dynamics of Large Networks
2008
Cited by 33 (0 self)
A basic premise behind the study of large networks is that interaction leads to complex collective behavior. In our work we found very interesting and counterintuitive patterns for time-evolving networks, which change some of the basic assumptions that were made in the past. We then develop models that explain processes which govern the network evolution, fit such models to real networks, and use them to generate realistic graphs or give formal explanations about their properties. In addition, our work has a wide range of applications: it can help us spot anomalous graphs and outliers, forecast future graph structure, and run simulations of network evolution. Another important aspect of our research is the study of "local" patterns and structures of propagation in networks. We aim to identify building blocks of the networks and find the patterns of influence that these blocks have on information or virus propagation over the network. Our recent work included the study of the spread of influence in a large person-to-person ...
Algorithms, Graph Theory, and Linear Equations in Laplacian Matrices
"... Abstract. The Laplacian matrices of graphs are fundamental. In addition to facilitating the application of linear algebra to graph theory, they arise in many practical problems. In this talk we survey recent progress on the design of provably fast algorithms for solving linear equations in the Lapla ..."
Abstract

Cited by 33 (0 self)
 Add to MetaCart
The Laplacian matrices of graphs are fundamental. In addition to facilitating the application of linear algebra to graph theory, they arise in many practical problems. In this talk we survey recent progress on the design of provably fast algorithms for solving linear equations in the Laplacian matrices of graphs. These algorithms motivate and rely upon fascinating primitives in graph theory, including low-stretch spanning trees, graph sparsifiers, ultrasparsifiers, and local graph clustering. These are all connected by a definition of what it means for one graph to approximate another. While this definition is dictated by numerical linear algebra, it proves useful and natural from a graph-theoretic perspective.
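The object at the center of this survey is easy to construct: the Laplacian is L = D − A (degree matrix minus adjacency matrix), and its quadratic form xᵀLx equals the sum of squared differences across edges, which is why L is positive semidefinite. A minimal pure-Python sketch on a small made-up graph (real solvers use the fast algorithms surveyed above):

```python
# Build the graph Laplacian L = D - A of an unweighted graph and verify
# the quadratic-form identity  x^T L x = sum over edges (x_u - x_v)^2.
def laplacian(n_vertices, edges):
    L = [[0] * n_vertices for _ in range(n_vertices)]
    for u, v in edges:
        L[u][u] += 1          # degree contributions (D)
        L[v][v] += 1
        L[u][v] -= 1          # adjacency contributions (-A)
        L[v][u] -= 1
    return L

def quadratic_form(L, x):
    n = len(L)
    return sum(L[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

edges = [(0, 1), (1, 2), (2, 3)]     # a path on 4 vertices (toy example)
L = laplacian(4, edges)
x = [3.0, 1.0, 4.0, 1.0]

by_edges = sum((x[u] - x[v]) ** 2 for u, v in edges)
print(quadratic_form(L, x), by_edges)   # the two quantities agree
```

The identity is the starting point for the approximation notion in the talk: one graph approximates another when their Laplacian quadratic forms are close on every vector x.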
The complexity of propositional proofs
BULLETIN OF SYMBOLIC LOGIC
Cited by 31 (0 self)
Propositional proof complexity is the study of the sizes of propositional proofs and, more generally, the resources necessary to certify propositional tautologies. Questions about proof sizes have connections with computational complexity, theories of arithmetic, and satisfiability algorithms. This article includes a broad survey of the field and a technical exposition of some recently developed techniques for proving lower bounds on proof sizes.
Sparse and Low-Rank Matrix Decompositions
2009
Cited by 31 (2 self)
Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, but obtaining an exact solution is NP-hard in general. In this paper we consider a convex optimization formulation for splitting the specified matrix into its components; in fact our approach reduces to solving a semidefinite program. We provide sufficient conditions that guarantee exact recovery of the components by solving the semidefinite program. We also show that when the sparse and low-rank matrices are drawn from certain natural random ensembles, these sufficient conditions are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.
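A toy instance makes the observation model concrete: M = L0 + S0 with L0 = u vᵀ of rank one and S0 having a single nonzero entry. The sketch below builds such an instance and checks a rank-1 certificate (all 2×2 minors vanish); the vectors and the corrupted entry are made up, and this illustrates only the model, not the paper's semidefinite-programming recovery method.

```python
# Sparse-plus-low-rank model: M = L0 + S0, where L0 = u v^T is rank one
# and S0 is sparse. Toy data chosen for illustration only.
u = [1.0, 2.0, 3.0]
v = [4.0, 5.0, 6.0]
L0 = [[ui * vj for vj in v] for ui in u]     # rank-1 component
S0 = [[0.0] * 3 for _ in range(3)]
S0[1][2] = 7.0                               # single sparse corruption
M = [[L0[i][j] + S0[i][j] for j in range(3)] for i in range(3)]

# A matrix has rank <= 1 exactly when every 2x2 minor vanishes.
def is_rank_one(A, tol=1e-9):
    n, m = len(A), len(A[0])
    return all(abs(A[i][j] * A[k][l] - A[i][l] * A[k][j]) < tol
               for i in range(n) for k in range(i + 1, n)
               for j in range(m) for l in range(j + 1, m))

print(is_rank_one(L0), is_rank_one(M))   # L0 is rank one; M is not
```

Even one corrupted entry destroys the rank-1 structure, which is what makes the decomposition problem nontrivial: the solver must decide which entries to attribute to S0.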
Expander graphs in pure and applied mathematics
Bull. Amer. Math. Soc. (N.S.)
Cited by 30 (3 self)
Expander graphs are highly connected sparse finite graphs. They play an important role in computer science as basic building blocks for network constructions, error-correcting codes, algorithms, and more. In recent years they have started to play an increasing role also in pure mathematics: number theory, group theory, geometry, and more. This expository article describes their constructions and various applications in pure and applied mathematics. This paper is based on notes prepared for the Colloquium Lectures at the ...
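One standard way to quantify "highly connected and sparse" is the edge expansion h(G): the minimum, over vertex sets S with |S| ≤ n/2, of the number of edges leaving S divided by |S|. For tiny graphs it can be computed by brute force; the sketch below does so for the 3-dimensional hypercube, an illustrative example graph chosen here (not one from the article).

```python
from itertools import combinations

# Brute-force edge expansion: h(G) = min over S, |S| <= n/2, of
# |boundary edges of S| / |S|.  Exponential time -- tiny graphs only.
def edge_expansion(n, edges):
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for subset in combinations(range(n), size):
            S = set(subset)
            boundary = sum(1 for u, v in edges if (u in S) != (v in S))
            best = min(best, boundary / len(S))
    return best

# 3-cube: vertices 0..7, edges between words differing in exactly one bit.
cube_edges = [(u, u ^ (1 << b))
              for u in range(8) for b in range(3)
              if u < (u ^ (1 << b))]
print(edge_expansion(8, cube_edges))
```

For the d-dimensional hypercube the minimum is achieved by a subcube of half the vertices, giving edge expansion exactly 1; expander families are graphs of bounded degree whose expansion stays bounded away from 0 as n grows.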
Efficient Algorithms Using The Multiplicative Weights Update Method
2006
Cited by 28 (1 self)
Abstract Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more efficient algorithms is important for practical impact. In this thesis, we explore applications of the Multiplicative Weights method in the design of efficient algorithms for various optimization problems. This method, which was repeatedly discovered in quite diverse fields, is an algorithmic technique which maintains a distribution on a certain set of interest, and updates it iteratively by multiplying the probability mass of elements by suitably chosen factors based on feedback obtained by running another algorithm on the distribution. We present a single metaalgorithm which unifies all known applications of this method in a common framework. Next, we generalize the method to the setting of symmetric matrices rather than real numbers. We derive the following applications of the resulting Matrix Multiplicative Weights algorithm: 1. The first truly general, combinatorial, primaldual method for designing efficient algorithms for semidefinite programming. Using these techniques, we obtain significantly faster algorithms for obtaining O(plog n) approximations to various graph partitioning problems, such as Sparsest Cut, Balanced Separator in both directed and undirected weighted graphs, and constraint satisfaction problems such as Min UnCut and Min 2CNF Deletion. 2. An ~O(n3) time derandomization of the AlonRoichman construction of expanders using Cayley graphs. The algorithm yields a set of O(log n) elements which generates an expanding Cayley graph in any group of n elements. 3. An ~O(n3) time deterministic O(log n) approximation algorithm for the quantum hypergraph covering problem. 4. 
An alternative proof of a result of Aaronson that the flfatshattering dimension of quantum states on n qubits is O ( nfl2).
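The scalar Multiplicative Weights update at the heart of this framework fits in a few lines: maintain one weight per "expert", play the induced distribution, then multiply each weight by (1 − η · cost). The experts and cost sequence below are made-up toy data, a sketch of the meta-algorithm rather than any application from the thesis.

```python
# Multiplicative Weights over n "experts": play the normalized weight
# distribution, observe costs in [0, 1], and update w_i <- w_i * (1 - eta * c_i).
def multiplicative_weights(cost_rounds, n_experts, eta=0.5):
    w = [1.0] * n_experts
    total_cost = 0.0
    for costs in cost_rounds:
        total = sum(w)
        p = [wi / total for wi in w]       # current distribution
        total_cost += sum(pi * ci for pi, ci in zip(p, costs))
        w = [wi * (1 - eta * ci) for wi, ci in zip(w, costs)]
    return w, total_cost

# Toy data: expert 0 is always right (cost 0), the other two always wrong.
rounds = [[0.0, 1.0, 1.0] for _ in range(20)]
w, cost = multiplicative_weights(rounds, 3)
print(max(range(3), key=lambda i: w[i]))   # -> 0: the good expert dominates
```

The mass of the always-wrong experts shrinks geometrically, so the algorithm's cumulative cost stays within a small additive-plus-multiplicative factor of the best expert's, which is the guarantee the meta-algorithm exploits in each application.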