Results 1–8 of 8
On the parameterized complexity of reconfiguration problems
, 2013
Abstract

Cited by 10 (6 self)
We present the first results on the parameterized complexity of reconfiguration problems, where a reconfiguration version of an optimization problem Q takes as input two feasible solutions S and T and determines if there is a sequence of reconfiguration steps that can be applied to transform S into T such that each step results in a feasible solution to Q. For most of the results in this paper, S and T are subsets of vertices of a given graph and a reconfiguration step adds or deletes a vertex. Our study is motivated by recent results establishing that for most NP-hard problems, the classical complexity of reconfiguration is PSPACE-complete. We address the question for several important graph properties under two natural parameterizations: k, the size of the solutions, and ℓ, the length of the sequence of steps. Our first general result is an algorithmic paradigm, the reconfiguration kernel, used to obtain fixed-parameter algorithms for the reconfiguration versions of Vertex Cover and, more generally, Bounded Hitting Set and Feedback Vertex Set, all parameterized by k. In contrast, we show that reconfiguring Unbounded Hitting Set is W[2]-hard when parameterized by k + ℓ. We also demonstrate the W[1]-hardness of the reconfiguration versions of a large class of maximization problems parameterized by k + ℓ, and of their corresponding deletion problems parameterized by ℓ; in doing so, we show that there exist problems in FPT when parameterized by k, but whose reconfiguration versions are W[1]-hard when parameterized by k + ℓ.
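The reconfiguration setting described in this abstract can be illustrated with a brute-force sketch (all function names are ours, and the breadth-first search below is purely illustrative: the paper's contribution is precisely avoiding this exponential search via reconfiguration kernels):

```python
from collections import deque

def is_vertex_cover(edges, cover):
    # A set covers the graph when every edge has an endpoint in it.
    return all(u in cover or v in cover for u, v in edges)

def reconfigurable(edges, vertices, S, T, k):
    """BFS over the space of feasible solutions: each step adds or
    deletes one vertex, and every intermediate set must remain a
    vertex cover of size at most k."""
    start, goal = frozenset(S), frozenset(T)
    if not (is_vertex_cover(edges, start) and is_vertex_cover(edges, goal)):
        return False
    queue, seen = deque([start]), {start}
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return True
        steps = [cur | {v} for v in vertices if v not in cur]
        steps += [cur - {v} for v in cur]
        for nxt in steps:
            if nxt not in seen and len(nxt) <= k and is_vertex_cover(edges, nxt):
                seen.add(nxt)
                queue.append(nxt)
    return False
```

On a triangle, the covers {0, 1} and {1, 2} are reconfigurable with the slack k = 3 (go up through {0, 1, 2}) but not with k = 2, which matches the role of the size bound k in the abstract.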
On the Hardness of Approximating Stopping and Trapping Sets
, 704
Abstract

Cited by 10 (0 self)
We prove that approximating the size of stopping and trapping sets in Tanner graphs of linear block codes, and more restrictively, the class of low-density parity-check (LDPC) codes, is NP-hard. The ramifications of our findings are that methods used for estimating the height of the error-floor of moderate- and long-length LDPC codes based on stopping and trapping set enumeration cannot provide accurate worst-case performance predictions.
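For context, a stopping set in a Tanner graph is a set of variable nodes such that every check node with a neighbour in the set has at least two neighbours in it. Verifying the property (as opposed to approximating the minimum size, which the paper shows is NP-hard) is easy; a minimal sketch, with our own representation of the graph as one neighbourhood set per check node:

```python
def is_stopping_set(check_neighbourhoods, candidate):
    """candidate is a stopping set iff no check node sees exactly one
    of its variable nodes: every touched check is touched >= twice."""
    s = set(candidate)
    return all(len(chk & s) != 1 for chk in check_neighbourhoods)
```

Finding the smallest non-empty such set is the hard part; this check only certifies a given candidate.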
Some fixed-parameter tractable classes of hypergraph duality and related problems
 3rd Int. Workshop on Parameterized and Exact Computation IWPEC 2008, LNCS 5018
, 2005
Bounded-degree techniques accelerate some parameterized graph algorithms
 Fomin (eds.), Int. Workshop on Parameterized and Exact Comp., IWPEC 2009, LNCS 5917
, 2009
Abstract

Cited by 3 (1 self)
Many parameterized algorithms for NP-hard graph problems are search tree algorithms with sophisticated local branching rules. But it has also been noticed that the global structure of input graphs can help improve the time bounds. Here we present some new results based on the global structure of bounded-degree graphs after branching away the high-degree vertices. Some techniques and structural results are generic and should find more applications. First, we decompose a graph by branchings along a separator into cheap or small components where we can further branch separately. We apply this technique to accelerate the O∗(1.3803^k) time algorithm for counting the vertex covers of size k (Mölle, Richter, and Rossmanith, 2006) to O∗(1.3740^k). Next we give a complete characterization of graphs where every edge is in at most two conflict triples, i.e., triples of vertices with exactly two edges. This enables us to improve to O∗(1.47^k) the previous O∗(1.53^k) time algorithm (Gramm, Guo, Hüffner, Niedermeier, 2004) for Cluster Deletion, i.e., for deleting at most k edges so that the remaining graph is a disjoint union of cliques.
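The notion of a conflict triple used in the abstract (three vertices spanning exactly two edges, i.e., an induced path on three vertices) can be made concrete with a small enumeration sketch; the function name and brute-force enumeration are ours, for illustration only:

```python
from itertools import combinations

def conflict_triples_per_edge(vertices, edges):
    """Return, for each edge, the number of conflict triples
    (vertex triples with exactly two edges) that contain it."""
    edge_set = {frozenset(e) for e in edges}
    count = {e: 0 for e in edge_set}
    for trio in combinations(vertices, 3):
        present = [frozenset(p) for p in combinations(trio, 2)
                   if frozenset(p) in edge_set]
        if len(present) == 2:          # exactly two edges: a conflict triple
            for e in present:
                count[e] += 1
    return count
```

On a path 0-1-2 each edge lies in one conflict triple; on a triangle (three edges among the trio) there are none, which is why cluster graphs, disjoint unions of cliques, are exactly the conflict-free graphs.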
Estimating Entity Importance via Counting Set Covers
Abstract

Cited by 2 (0 self)
The data-mining literature is rich in problems asking to assess the importance of entities in a given dataset. At a high level, existing work identifies important entities either by ranking or by selection. Ranking methods assign a score to every entity in the population, and then use the assigned scores to create a ranked list. The major shortcoming of such approaches is that they ignore the redundancy between high-ranked entities, which may in fact be very similar or even identical. Therefore, in scenarios where diversity is desirable, such methods perform poorly. Selection methods overcome this drawback by evaluating the importance of a group of entities collectively. To achieve this, they typically adopt a set-cover formulation, which identifies the entities in the minimum set cover as the important ones. However, this dichotomy of entities conceals the fact that, even though an entity may not be in the reported cover, it may still participate in many other optimal or near-optimal solutions. In this paper, we propose a framework that overcomes the above drawbacks by integrating the ranking and selection paradigms. Our approach assigns importance scores to entities based on both the number and the quality of set-cover solutions that they participate in. Our methodology applies to a wide range of applications. In a user study and an experimental evaluation on real data, we demonstrate that our framework is efficient and provides useful and intuitive results.
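The core idea, scoring an entity by how many optimal covers it appears in rather than by membership in one reported cover, can be sketched by brute force (our own simplification: we enumerate only minimum-size covers and score by participation fraction; the paper's framework is more general and efficient):

```python
from itertools import combinations

def entity_scores(universe, sets):
    """Enumerate all minimum-size set covers and score each entity
    (set) by the fraction of those optimal covers it appears in."""
    universe = set(universe)
    names = list(sets)
    for r in range(1, len(names) + 1):
        covers = [combo for combo in combinations(names, r)
                  if set().union(*(sets[n] for n in combo)) >= universe]
        if covers:                     # r is the minimum cover size
            return {n: sum(n in c for c in covers) / len(covers)
                    for n in names}
    return {}
```

With universe {1, 2, 3, 4} and sets A = {1, 2}, B = {3, 4}, C = {1, 3}, D = {2, 4}, the two minimum covers are {A, B} and {C, D}: a pure selection method would report one of them and discard the other two entities, while participation scoring rates all four equally.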
Parameterized Enumeration of Neighbour Strings and Kemeny Aggregations
, 2013
Abstract

Cited by 2 (0 self)
In this thesis, we consider approaches to enumeration problems in the parameterized complexity setting. We obtain competitive parameterized algorithms to enumerate all, as well as several of, the solutions for two related problems, Neighbour String and Kemeny Rank Aggregation. In both problems, the goal is to find a solution that is as close as possible to a set of inputs (strings and total orders, respectively) according to some distance measure. We also introduce a notion of enumerative kernels for which there is a bijection between solutions to the original instance and solutions to the kernel, and provide such a kernel for Kemeny Rank Aggregation, improving a previous kernel for the problem. We demonstrate how several of the algorithms and notions discussed in this thesis are extensible to a group of parameterized problems, improving published results for some other problems.
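The Kemeny Rank Aggregation objective mentioned here, finding a total order minimising the summed Kendall tau distance to the input orders, can be stated as a brute-force sketch (function names ours; the thesis develops parameterized algorithms, not this factorial-time search):

```python
from itertools import combinations, permutations

def kendall_tau(order_a, order_b):
    """Number of item pairs ranked in opposite order by the two orders."""
    pos_a = {x: i for i, x in enumerate(order_a)}
    pos_b = {x: i for i, x in enumerate(order_b)}
    return sum(1 for x, y in combinations(order_a, 2)
               if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0)

def kemeny_consensus(votes):
    """Brute force: the total order closest (in summed Kendall tau
    distance) to all input orders."""
    return min(permutations(votes[0]),
               key=lambda cand: sum(kendall_tau(cand, v) for v in votes))
```

Neighbour String is the analogous problem under Hamming distance on strings; both fit the "closest to a set of inputs" template the abstract describes.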
Parameterized Algorithms for Double Hypergraph Dualization with Rank Limitation and Maximum Minimal Vertex Cover
Abstract
Motivated by the need for succinct representations of all “small” transversals (or hitting sets) of a hypergraph of fixed rank, we study the complexity of computing such a representation. Next, the existence of a minimal hitting set of at least a given size arises as a subproblem. We give one algorithm for hypergraphs of any fixed rank, and we largely improve an earlier algorithm (by H. Fernau, 2005) for the rank-2 case, i.e., for computing a minimal vertex cover of at least a given size in a graph. We were led to these questions by combinatorial aspects of the protein inference problem in shotgun proteomics.
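The rank-2 subproblem, a minimal vertex cover of at least a given size, is easy to state by brute force (names and approach ours, for illustration; the paper gives genuinely faster parameterized algorithms):

```python
from itertools import combinations

def is_cover(edges, s):
    return all(u in s or v in s for u, v in edges)

def max_minimal_vertex_cover(vertices, edges):
    """Brute force: the largest vertex cover from which no single
    vertex can be removed without uncovering some edge, i.e. a
    maximum *minimal* vertex cover."""
    for r in range(len(vertices), 0, -1):
        for cand in combinations(vertices, r):
            s = set(cand)
            if is_cover(edges, s) and all(not is_cover(edges, s - {v}) for v in s):
                return s
    return None
```

The star K(1,3) shows why minimality matters: its minimal covers are the centre alone and the three leaves together, so the maximum minimal cover is the leaf set, not the whole vertex set.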
Beyond Itemsets: Mining Frequent Featuresets over Structured Items
Abstract
We assume a dataset of transactions generated by a set of users over structured items, where each item could be described through a set of features. In this paper, we are interested in identifying the frequent featuresets (sets of features) by mining item transactions. For example, in a news website, items correspond to news articles, the features are the named entities/topics in the articles, and an item transaction would be the set of news articles read by a user within the same session. We show that mining frequent featuresets over structured item transactions is a novel problem, and that straightforward extensions of existing frequent itemset mining techniques provide unsatisfactory results. This is due to the fact that while users are drawn to each item in the transaction due to a subset of its features, the transaction by itself does not provide any information about such underlying preferred features of users. In order to overcome this hurdle, we propose a featureset uncertainty model where each item transaction could have been generated by various featuresets with different probabilities. We describe a novel approach to transform item transactions into uncertain transactions over featuresets and estimate their probabilities using a constrained-least-squares approach. We propose diverse algorithms to mine frequent featuresets. Our experimental evaluation provides a comparative analysis of the different approaches proposed.
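A deliberately naive baseline (our own simplification, not the paper's probabilistic uncertainty model) counts a featureset as supported by a session only when every item the user touched carries all of its features:

```python
from itertools import chain, combinations

def frequent_featuresets(transactions, item_features, min_support):
    """Naive baseline: a candidate featureset is supported by a
    transaction when it is contained in the feature set of every
    item in that transaction. Candidates are capped at size 2."""
    all_features = sorted(set(chain.from_iterable(item_features.values())))
    candidates = [frozenset(c) for r in (1, 2)
                  for c in combinations(all_features, r)]
    result = {}
    for f in candidates:
        support = sum(all(f <= item_features[i] for i in t) for t in transactions)
        if support >= min_support:
            result[f] = support
    return result
```

The abstract's point is that this kind of hard containment test misses the fact that each item may have been chosen for only some of its features, which is what motivates the uncertain-transaction model.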