Results 1–10 of 22
Aggregating inconsistent information: ranking and clustering
2005
Cited by 226 (17 self)
We address optimization problems in which we are given contradictory pieces of input information and the goal is to find a globally consistent solution that minimizes the number of disagreements with the respective inputs. Specifically, the problems we address are rank aggregation, the feedback arc set problem on tournaments, and correlation and consensus clustering. We show that for all these problems (and various weighted versions of them), we can obtain improved approximation factors using essentially the same remarkably simple algorithm. Additionally, we almost settle a long-standing conjecture of Bang-Jensen and Thomassen and show that unless NP ⊆ BPP, there is no polynomial-time algorithm for the problem of minimum feedback arc set in tournaments.
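The "remarkably simple algorithm" the abstract refers to is a randomized, quicksort-style pivoting scheme: pick a random pivot element, place every other element before or after it according to the pairwise majority, and recurse on both sides. A minimal sketch of the idea (the names `pivot_rank` and `majority_prefers` are ours, not the paper's):

```python
import random
from typing import Callable, List

def pivot_rank(items: List[int], prefers: Callable[[int, int], bool]) -> List[int]:
    """Quicksort-style pivoting: pick a random pivot, split the remaining
    items by whether they should precede it, and recurse on both sides."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    left = [x for x in items if x != pivot and prefers(x, pivot)]
    right = [x for x in items if x != pivot and not prefers(x, pivot)]
    return pivot_rank(left, prefers) + [pivot] + pivot_rank(right, prefers)

def majority_prefers(rankings: List[List[int]]) -> Callable[[int, int], bool]:
    """Pairwise majority tournament: a precedes b iff more than half of the
    input rankings place a before b."""
    def prefers(a: int, b: int) -> bool:
        votes = sum(r.index(a) < r.index(b) for r in rankings)
        return 2 * votes > len(rankings)
    return prefers

# Aggregate three full rankings of the items 1..4. The majority tournament
# here happens to be transitive, so every pivot choice yields the same order.
consensus = pivot_rank([3, 1, 4, 2], majority_prefers([[1, 2, 3, 4],
                                                       [1, 2, 3, 4],
                                                       [2, 1, 3, 4]]))
```

The same pivoting step, applied to the majority tournament, is what connects rank aggregation to feedback arc set in tournaments: each discordant pair the output keeps corresponds to a back edge of the tournament.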
Integrating microarray data by consensus clustering
In Proceedings of the International Conference on Tools with Artificial Intelligence (ICTAI), 2003
Cited by 45 (3 self)
With the exploding volume of microarray experiments comes increasing interest in mining repositories of such data. Meaningfully combining results from varied experiments on an equal basis is a challenging task. Here we propose a general method for integrating heterogeneous data sets based on the consensus clustering formalism. Our method analyzes source-specific clusterings and identifies a consensus set-partition which is as close as possible to all of them. We develop a general criterion to assess the potential benefit of integrating multiple heterogeneous data sets, i.e., whether the integrated data is more informative than the individual data sets. We apply our methods to two popular sets of microarray data, yielding gene classifications of potentially greater interest than could be derived from the analysis of each individual data set.
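The "as close as possible" objective in consensus clustering is commonly measured by the symmetric-difference (Mirkin) distance: the number of element pairs that one partition puts in the same cluster and the other separates. A small illustrative sketch (the function name is ours; the paper's exact distance may differ):

```python
from itertools import combinations

def mirkin_distance(p1, p2):
    """Symmetric-difference distance between two partitions of the same
    element set: count the pairs co-clustered in one but split in the other."""
    label1 = {x: i for i, cluster in enumerate(p1) for x in cluster}
    label2 = {x: i for i, cluster in enumerate(p2) for x in cluster}
    return sum(
        (label1[a] == label1[b]) != (label2[a] == label2[b])
        for a, b in combinations(sorted(label1), 2)
    )

# Pairs (1,2) and (2,3) are co-clustered in one partition but not the other.
d = mirkin_distance([{1, 2}, {3}], [{1}, {2, 3}])
```

A consensus set-partition is then one minimizing the sum of this distance to all the source-specific clusterings.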
Aggregation of partial rankings, p-ratings and top-m lists
In ACM-SIAM Symposium on Discrete Algorithms (SODA), 2007
Cited by 37 (5 self)
We study the problem of aggregating partial rankings. This problem is motivated by applications such as meta-searching and information retrieval, search engine spam fighting, e-commerce, learning from experts, analysis of population preference sampling, committee decision making, and more. We improve recent constant-factor approximation algorithms for aggregation of full rankings and generalize them to partial rankings. Our algorithms give improved constant-factor approximations with respect to all the metrics discussed in Fagin et al.'s recent important work on comparing partial rankings. We pay special attention to two important types of partial rankings: the well-known top-m lists and the more general p-ratings, which we define. We provide the first evidence of hardness of aggregating them for constant m, p.
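Metrics for comparing a full ranking against a top-m list typically build on the Kendall tau distance; one simple ingredient is counting discordant pairs restricted to the items the top-m list actually ranks. A sketch of that ingredient (illustrative only; it is not one of the paper's metrics, which must also handle items missing from the top-m list):

```python
from itertools import combinations

def discordant_on_topm(full_ranking, topm):
    """Count pairs of items, both present in the top-m list, whose relative
    order in the full ranking disagrees with the top-m list."""
    pos = {x: i for i, x in enumerate(full_ranking)}
    return sum(
        (pos[a] < pos[b]) != (topm.index(a) < topm.index(b))
        for a, b in combinations(topm, 2)
    )

# The pair (1, 2) is ordered differently by the two inputs.
d = discordant_on_topm([1, 2, 3, 4], [2, 1])
```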
Deterministic algorithms for rank aggregation and other ranking and clustering problems
In Proceedings of the Fifth International Workshop on Approximation and Online Algorithms, 2007
Cited by 28 (2 self)
We consider ranking and clustering problems related to the aggregation of inconsistent information. Ailon, Charikar, and Newman [1] proposed randomized constant-factor approximation algorithms for these problems. Together with Hegde and Jain, we recently proposed deterministic versions of some of these randomized algorithms [2]. With one exception, these algorithms required the solution of a linear programming relaxation. In this paper, we introduce a purely combinatorial deterministic pivoting algorithm for weighted ranking problems with weights that satisfy the triangle inequality; our analysis is quite simple. We then show how to use this algorithm to obtain the first deterministic combinatorial approximation algorithm for the partial rank aggregation problem with a performance guarantee better than 2. In addition, we extend our approach to the linear-programming-based algorithms in Ailon et al. [1] and Ailon [3]. Finally, we show that constrained rank aggregation is not harder than unconstrained rank aggregation.
Consensus Clustering Algorithms: Comparison and Refinement
Cited by 24 (0 self)
Consensus clustering is the problem of reconciling clustering information about the same data set coming from different sources or from different runs of the same algorithm. Cast as an optimization problem, consensus clustering is known as median partition and has been shown to be NP-complete. A number of heuristics have been proposed as approximate solutions, some with performance guarantees. In practice, the problem appears easy to approximate, but guidance is needed on which heuristic to use depending on the number of elements and clusterings given. We have implemented a number of heuristics for the consensus clustering problem, and here we compare their performance, independent of data size, in terms of efficacy and efficiency, on both simulated and real data sets. We find that, based on the underlying algorithms and their behavior in practice, the heuristics can be categorized into two distinct groups, with ramifications as to which one to use in a given situation, and that a hybrid solution is the best bet in general. We have also developed a refined consensus clustering heuristic for occasions when the given clusterings may be too disparate, and their consensus may not be representative of any one of them, and we show that in practice the refined consensus clusterings can be far superior to the general consensus clustering.
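One of the simplest heuristics with a performance guarantee is "best of k": return the input clustering whose total distance to all the inputs is smallest. Because the pairwise-disagreement distance satisfies the triangle inequality, this is a factor-2 approximation to the median partition. A sketch under those assumptions (function names are ours):

```python
from itertools import combinations

def pair_disagreements(p1, p2):
    """Pairs of elements co-clustered in one partition but not in the other."""
    l1 = {x: i for i, c in enumerate(p1) for x in c}
    l2 = {x: i for i, c in enumerate(p2) for x in c}
    return sum((l1[a] == l1[b]) != (l2[a] == l2[b])
               for a, b in combinations(sorted(l1), 2))

def best_of_k(clusterings):
    """'Best of k' heuristic: pick the input clustering that minimizes the
    total disagreement distance to all the inputs. By the triangle
    inequality, its cost is within a factor of 2 of the optimal median."""
    return min(clusterings,
               key=lambda c: sum(pair_disagreements(c, other)
                                 for other in clusterings))

inputs = [[{1, 2}, {3, 4}], [{1, 2}, {3, 4}], [{1}, {2}, {3, 4}]]
consensus = best_of_k(inputs)
```

More elaborate heuristics (local search, agglomerative merging) start from such a candidate and try to move elements between clusters while the objective improves.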
Consensus answers for queries over probabilistic databases
In PODS, 2009
Cited by 19 (3 self)
We address the problem of finding a "best" deterministic query answer to a query over a probabilistic database. For this purpose, we propose the notion of a consensus world (or a consensus answer), which is a deterministic world (answer) that minimizes the expected distance to the possible worlds (answers). This problem can be seen as a generalization of the well-studied inconsistent-information aggregation problems (e.g., rank aggregation) to probabilistic databases. We consider this problem for various types of queries, including SPJ queries, top-k ranking queries, group-by aggregate queries, and clustering. For different distance metrics, we obtain polynomial-time optimal or approximation algorithms for computing the consensus answers (or prove NP-hardness). Most of our results are for a general probabilistic database model, called the and/xor tree model, which significantly generalizes previous probabilistic database models like x-tuples and block-independent disjoint models, and is of independent interest.
Heterogeneous Data Integration with the Consensus Clustering Formalism
In Proceedings of Data Integration in the Life Sciences, 2004
Cited by 17 (3 self)
Meaningfully integrating massive multi-experimental genomic data sets is becoming critical for the understanding of gene function. We have recently proposed methodologies for integrating large numbers of microarray data sets based on consensus clustering. Our methods combine gene clusters into a unified representation, or consensus, that is insensitive to misclassifications in the individual experiments. Here we extend their utility to heterogeneous data sets and focus on their refinement and improvement. First, we compare our best heuristic to the popular majority-rule consensus clustering heuristic and show that the former yields tighter consensuses. We propose a refinement of our consensus algorithm that clusters the source-specific clusterings as a step before finding the consensus between them, thereby improving our original results and increasing their biological relevance. We demonstrate our methodology on three yeast data sets with biologically interesting results. Finally, we show that our methodology can deal successfully with missing experimental values.
Optimal Meta Search Results Clustering
In Proc. 33rd Int’l ACM SIGIR Conf. Research and Development in Information Retrieval, 2010
Cited by 14 (3 self)
By analogy with merging document rankings, the outputs from multiple search results clustering algorithms can be combined into a single output. In this paper we study the feasibility of meta search results clustering, which has unique features compared to the general meta clustering problem. After showing that the combination of multiple search results clusterings is empirically justified, we cast meta clustering as an optimization problem over an objective function measuring the probabilistic concordance between the clustering combination and the single clusterings. We then show, using an easily computable upper bound on this function, that a simple stochastic optimization algorithm delivers reasonable approximations of the optimal value very efficiently, and we also provide a method for labeling the generated clusters with the most agreed-upon cluster labels. Optimal meta clustering with meta labeling is applied to three description-centric, state-of-the-art search results clustering algorithms. The performance improvement is demonstrated through a range of evaluation techniques (i.e., internal, classification-oriented, and information-retrieval-oriented), using suitable test collections of search results with document-level relevance judgments per subtopic.
Average Parameterization and Partial Kernelization for Computing Medians
In Proc. 9th LATIN, 2010
Cited by 13 (9 self)
We propose an effective polynomial-time preprocessing strategy for intractable median problems. Developing a new methodological framework, we show that if the input instances of generally intractable problems exhibit a sufficiently high degree of similarity to each other on average, then there are efficient exact solving algorithms. In other words, we show that the median problems Swap Median Permutation, Consensus Clustering, Kemeny Score, and Kemeny Tie Score are all fixed-parameter tractable with respect to the parameter "average distance between input objects". To this end, we develop the new concept of "partial kernelization" and identify interesting polynomial-time solvable special cases for the considered problems.
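For Kemeny Score, the parameter in question is the average pairwise Kendall tau distance of the input rankings; computing it is straightforward. A sketch of the parameter computation (names are ours; the kernelization itself is beyond this snippet):

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Number of item pairs ordered differently by the two rankings."""
    pos = {x: i for i, x in enumerate(r2)}
    return sum(pos[a] > pos[b] for a, b in combinations(r1, 2))

def average_distance(rankings):
    """Average pairwise Kendall tau distance between the input rankings --
    the parameter with respect to which Kemeny Score is shown
    fixed-parameter tractable in the partial-kernelization framework."""
    pairs = list(combinations(rankings, 2))
    return sum(kendall_tau(a, b) for a, b in pairs) / len(pairs)

d = average_distance([[1, 2, 3], [1, 2, 3], [3, 2, 1]])
```

When this average is small, most item pairs are ordered the same way by a large majority of inputs, and such pairs can be fixed in advance, which is the intuition behind the partial kernel.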