On the stopping distance and the stopping redundancy of codes
IEEE Trans. Inf. Theory, 2006
Cited by 56 (2 self)
Abstract — It is now well known that the performance of a linear code C under iterative decoding on a binary erasure channel (and other channels) is determined by the size of the smallest stopping set in the Tanner graph for C. Several recent papers refer to this parameter as the stopping distance s of C. This is somewhat of a misnomer, since the size of the smallest stopping set in the Tanner graph for C depends on the corresponding choice of a parity-check matrix. It is easy to see that s ≤ d, where d is the minimum Hamming distance of C, and we show that it is always possible to choose a parity-check matrix for C (with sufficiently many dependent rows) such that s = d. We thus introduce a new parameter, termed the stopping redundancy of C, defined as the minimum number of rows in a parity-check matrix H for C such that the corresponding stopping distance s(H) attains its largest possible value, namely s(H) = d. We then derive general bounds on the stopping redundancy of linear codes. We also examine several simple ways of constructing codes from other codes, and study the effect of these constructions on the stopping redundancy. Specifically, for the family of binary Reed-Muller codes (of all orders), we prove that their stopping redundancy is at most a constant times their conventional redundancy. We show that the stopping redundancies of the binary and ternary extended Golay codes are at most 34 and 22, respectively. Finally, we provide upper and lower bounds on the stopping redundancy of MDS codes.
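For a small parity-check matrix, the stopping distance described above can be checked by brute force: a stopping set is a set S of columns (variable nodes) such that no row has exactly one nonzero entry inside S. The following sketch (function names and structure invented for illustration; this is not the paper's method, and is exponential in the block length) finds the smallest nonempty stopping set:

```python
from itertools import combinations

def is_stopping_set(H, S):
    """S is a stopping set of H if no row of H has exactly one 1 in the columns of S."""
    return all(sum(row[j] for j in S) != 1 for row in H)

def stopping_distance(H):
    """Size of the smallest nonempty stopping set of parity-check matrix H (brute force)."""
    n = len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, S):
                return size
    return None
```

For instance, for the single-parity-check code on three bits, H = [[1, 1, 1]], this returns 2, which equals the minimum distance d of that code.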
New Constructions for Covering Designs
J. Combin. Designs, 1995
Cited by 27 (2 self)
A (v, k, t) covering design, or covering, is a family of k-subsets, called blocks, chosen from a v-set, such that each t-subset is contained in at least one of the blocks. The number of blocks is the covering’s size, and the minimum size of such a covering is denoted by C(v, k, t). This paper gives three new methods for constructing good coverings: a greedy algorithm similar to Conway and Sloane’s algorithm for lexicographic codes [6], and two methods that synthesize new coverings from preexisting ones. Using these new methods, together with results in the literature, we build tables of upper bounds on C(v, k, t) for v ≤ 32, k ≤ 16, and t ≤ 8.
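The greedy idea can be illustrated with a generic best-first heuristic (names and structure invented for this sketch; it is not necessarily the paper's lexicographic-style algorithm, and it enumerates all candidate blocks, so it only works for tiny parameters): repeatedly add the k-subset that covers the most still-uncovered t-subsets.

```python
from itertools import combinations

def greedy_covering(v, k, t):
    """Build a (v, k, t) covering greedily: each step adds the block
    that covers the most still-uncovered t-subsets."""
    uncovered = set(combinations(range(v), t))
    candidates = list(combinations(range(v), k))
    blocks = []
    while uncovered:
        best = max(candidates,
                   key=lambda b: sum(1 for s in combinations(b, t) if s in uncovered))
        blocks.append(best)
        uncovered -= set(combinations(best, t))
    return blocks
```

On the small instance (v, k, t) = (5, 3, 2) this heuristic happens to return 4 blocks, matching the known value C(5, 3, 2) = 4, though in general greedy coverings need not be optimal.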
New bounds on nearly perfect matchings in hypergraphs: higher codegrees do help
Random Struct. Alg., 2000
Cited by 24 (5 self)
Let H be a (k+1)-uniform, D-regular hypergraph on n vertices, and let U(H) be the minimum number of vertices left uncovered by a matching in H. C_j(H), the j-codegree of H, is the maximum number of edges sharing a set of j vertices in common. We prove a general upper bound on U(H), based on the codegree sequence C_2(H), C_3(H), .... Our bound improves and generalizes many results on the topic, including those of Grable [Gra], Alon-Kim-Spencer [AKS], and Kostochka-Rödl [KR]. It also leads to a substantial improvement in several applications. The key ingredient of the proof is the so-called polynomial technique, a new and useful tool for proving concentration results for functions with large Lipschitz coefficients. This technique is of independent interest.
Towards a Theory of Intrusion Detection
In Proc. of European Symposium on Research in Computer Security (ESORICS 2005), 2005
Cited by 4 (1 self)
Abstract. We embark on theoretical approaches to the investigation of intrusion detection schemes. Our main motivation is to provide rigorous security requirements for intrusion detection systems that can be used by designers of such systems. Our model captures and generalizes well-known methodologies in the intrusion detection area, such as anomaly-based and signature-based intrusion detection, and formulates security requirements based on both well-known complexity-theoretic notions and well-known notions in cryptography (such as computational indistinguishability). Under our model, we present two efficient paradigms for intrusion detection systems, one based on nearest-neighbor search algorithms, and one based on both the latter and clustering algorithms. Under formally specified assumptions on the representation of network traffic, we can prove that our two systems satisfy our main security requirement for an intrusion detection system. In both cases, while the potential truth of the assumption rests on heuristic properties of the representation of network traffic (which is hard to avoid due to the unpredictable nature of external attacks on a network), the proof that the systems satisfy desirable detection properties is rigorous and of a probabilistic and algorithmic nature. Additionally, our framework raises open questions on intrusion detection systems that can be rigorously studied. As an example, we study the problem of arbitrarily and efficiently extending the detection window of any intrusion detection system, which allows the latter to catch attack sequences interleaved with normal traffic packet sequences. We use combinatorial tools such as time- and space-efficient covering set systems to present provably correct solutions to this problem.
On Random Greedy Triangle Packing
Elec. J. Combinat., 1997
Cited by 4 (0 self)
The behaviour of the random greedy algorithm for constructing a maximal packing of edge-disjoint triangles on n points (a maximal partial triple system) is analysed, with particular emphasis on the final number of unused edges. It is shown that this number is at most n^{7/4+o(1)}, "halfway" from the previous best-known upper bound of o(n²) to the conjectured value n^{3/2+o(1)}. The more general problem of random greedy packing in hypergraphs is also considered.
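The process being analysed can be simulated directly for toy sizes (names invented for this sketch; this naive implementation re-enumerates all triangles at every step, so it is far too slow to observe the asymptotic leftover count the abstract is about):

```python
import random

def random_greedy_triangle_packing(n, seed=0):
    """Starting from the complete graph K_n, repeatedly remove the three
    edges of a uniformly random triangle among the remaining edges.
    Returns (number of triangles packed, number of leftover edges)."""
    rng = random.Random(seed)
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)}
    packed = 0
    while True:
        triangles = [(a, b, c)
                     for a in range(n)
                     for b in range(a + 1, n)
                     for c in range(b + 1, n)
                     if (a, b) in edges and (a, c) in edges and (b, c) in edges]
        if not triangles:
            break
        a, b, c = rng.choice(triangles)
        edges -= {(a, b), (a, c), (b, c)}
        packed += 1
    return packed, len(edges)
```

On K₄, for example, any run packs exactly one triangle and leaves 3 edges; the question studied above is how the leftover edge count grows with n.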
Random triangle removal
Advances in Mathematics, 280, 379–468, 2012
Cited by 3 (2 self)
Starting from a complete graph on n vertices, repeatedly delete the edges of a uniformly chosen triangle. This stochastic process terminates once it arrives at a triangle-free graph, and the fundamental question is to estimate the final number of edges (equivalently, the time it takes the process to finish, or how many edge-disjoint triangles are packed via the random greedy algorithm). Bollobás and Erdős (1990) conjectured that the expected final number of edges has order n^{3/2}, motivated by the study of the Ramsey number R(3, t). An upper bound of o(n²) was shown by Spencer (1995) and independently by Rödl and Thoma (1996). Several bounds were given for variants and generalizations (e.g., Alon, Kim and Spencer (1997) and Wormald (1999)), while the best known upper bound for the original question of Bollobás and Erdős was n^{7/4+o(1)}, due to Grable (1997). No nontrivial lower bound was available. Here we prove that with high probability the final number of edges in random triangle removal is equal to n^{3/2+o(1)}, thus confirming the 3/2 exponent conjectured by Bollobás and Erdős and matching the predictions of Spencer et al. For the upper bound, for any fixed ε > 0 we construct a family of exp(O(1/ε)) graphs by gluing O(1/ε) triangles sequentially in a prescribed manner, and dynamically track all homomorphisms from them, rooted at any two vertices, up to the point where ...
Generalized covering designs and clique coverings
2011
Cited by 2 (1 self)
Inspired by the “generalized t-designs” defined by Cameron [P. J. Cameron, A generalisation of t-designs, Discrete Math. 309 (2009), 4835–4842], we define a new class of combinatorial designs which simultaneously provide a generalization of both covering designs and covering arrays. We then obtain a number of bounds on the minimum sizes of these designs, and describe some methods of constructing them, which in some cases we prove are optimal. Many of our results are obtained from an interpretation of these designs in terms of clique coverings of graphs.
Constructing Designs Straightforwardly: Worst Arising Cases
2002
Cited by 2 (1 self)
Suppose that we consecutively remove edges from some k-graph of order n in which every t vertices are covered by at least λ edges, so as to obtain a minimal such k-graph. What can be said about the size of the eventual k-graph? While, by the result of Rödl [9], the minimum is λ·(n choose t)/(k choose t) + o(n^t), we show that the maximum is λ·(n choose t) + o(n^t). Also, some partial results are obtained about the possible size of a maximal k-graph covering every t-set by at most λ edges.