@MISC{Lee_learningtalagrand,
  author = {Homin K. Lee},
  title  = {Learning Talagrand DNF Formulas},
  year   = {}
}

Abstract

Consider the following version of Talagrand's probabilistic construction of a monotone function f: {0,1}^n → {0,1}. Let f be an n-term monotone DNF formula where each term is selected independently and uniformly at random (with replacement) from the set of all n^{Ω(log log n)} possible terms of length log(n) over the first log^2(n) variables. Let us call such a DNF formula a Talagrand DNF formula. This is a scaled-down version of the construction of Talagrand (1996), which is defined over all n variables and has subexponentially many terms.

I am interested in the following problem: does there exist a polynomial-time algorithm for learning the class of Talagrand DNF formulas under the uniform distribution on {0,1}^n, given uniform random examples? Ideally, I am looking for algorithms that learn the class of all possible Talagrand DNF formulas in the worst case. However, an average-case learning algorithm that succeeds with high probability over the choice of the Talagrand DNF formula, as described above, would be of significant interest as well.

Motivation: This problem is, of course, a special case of the corresponding learning problem for general polynomial-size DNF formulas. The problem of learning polynomial-size DNF formulas without queries has been open for almost twenty years (Valiant, 1984), and there was no significant progress on the question until the last couple of years.
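To make the construction concrete, here is a minimal sketch of sampling and evaluating a Talagrand DNF formula. The helper names (`sample_talagrand_dnf`, `eval_dnf`) are hypothetical, and taking logarithms base 2 with rounding is an assumption about the intended parameters:

```python
import math
import random

def sample_talagrand_dnf(n, rng=random):
    """Sample a Talagrand DNF: n monotone terms, each a conjunction of
    ~log(n) variables drawn from the first ~log^2(n) variables.
    Terms are chosen independently, with replacement across terms.
    (Hypothetical helper; base-2 logs and rounding are assumptions.)"""
    k = max(1, round(math.log2(n)))   # term length, roughly log(n)
    m = k * k                         # variable pool, roughly log^2(n)
    # Each term is a uniformly random size-k subset of the first m variables.
    return [rng.sample(range(m), k) for _ in range(n)]

def eval_dnf(terms, x):
    """Evaluate the monotone DNF on x in {0,1}^n; only the first
    ~log^2(n) coordinates can affect the output."""
    return int(any(all(x[i] for i in term) for term in terms))
```

Note that only log^2(n) of the n input variables are relevant, which is what makes this a "scaled-down" version of Talagrand's original construction over all n variables.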