
## Learning polynomials with queries: The highly noisy case (1995)

Citations: 97 (17 self)

### Citations

2661 | The Theory of Error-Correcting Codes.
- MacWilliams, Sloane
- 1977
Citation Context: ...and let w = (1/m)∑_{j=1}^m w_j. The fact that the C_j's are close to R implies that for all j, w_j ≥ (1 − δ)·N. Our proof generalizes a proof due to S. Johnson (cf. MacWilliams and Sloane [31]) for the case q = 2. The central quantity used to bound m in the binary case can be generalized in one of the two following ways: ...

520 | Fast probabilistic algorithms for verification of polynomial identities.
- Schwartz
- 1980
Citation Context: ...GF(q) form an [N, K, D]_q code, for N = q^n, K = q^(n+d choose d), and D = (q − d)·q^{n−1}. Proof: The parameters N and K follow by definition. The distance D is equivalent to the well-known fact [10, 38, 44] that two degree-d (multivariate) polynomials over GF(q) may agree on at most a d/q fraction of the inputs. Combining Theorem 15 with Proposition 16 (and using γ = d/q in the theorem), we get the follo...
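The agreement bound quoted in this context (two distinct degree-d polynomials over GF(q) agree on at most a d/q fraction of the inputs) can be verified exhaustively for tiny parameters. A minimal illustrative sketch for the univariate case with q prime; the helper names are not from the paper.

```python
# Exhaustively check: two DISTINCT polynomials of degree <= d over GF(q)
# agree on at most a d/q fraction of the q field elements (univariate case).
from itertools import product

def poly_eval(coeffs, x, q):
    """Evaluate a polynomial (coefficients low to high) at x, mod prime q."""
    result = 0
    for c in reversed(coeffs):
        result = (result * x + c) % q
    return result

def max_agreement_fraction(d, q):
    """Max fraction of points on which two distinct degree-<=d polys agree."""
    polys = list(product(range(q), repeat=d + 1))
    best = 0
    for i, p in enumerate(polys):
        for r in polys[i + 1:]:
            agree = sum(poly_eval(p, x, q) == poly_eval(r, x, q)
                        for x in range(q))
            best = max(best, agree)
    return best / q

# For d = 2, q = 5 the maximum agreement is exactly 2/5: the difference of
# the two polynomials is a nonzero polynomial of degree <= 2 and so has at
# most 2 roots, e.g. p and p + x(x-1) agree at exactly x = 0 and x = 1.
assert max_agreement_fraction(2, 5) == 2 / 5
```

The same exhaustive check works for any small (d, q); the bound is tight whenever the difference polynomial can be chosen with d distinct roots.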

440 | A hard-core predicate for all one-way functions.
- Goldreich, Levin
- 1989
Citation Context: ...agrees with the given program on a δ → 1/2 fraction of the inputs. Linear Polynomials: A special case of the (explicit) reconstruction problem for d = 1 and F = GF(2) was studied by Goldreich and Levin [17]. They solved the reconstruction problem in this case for every δ > 1/2. (Notice that the self-corrector of [15] mentioned above does not apply to this case, since here d/|F| = 1/2 and does not te...

361 | Self-testing/correcting with applications to numerical problems.
- Blum, Luby, et al.
- 1990
Citation Context: ...olation problem. Self-Correction: In the case when the noise rate is positive but small, one approach to solving the reconstruction problem is to use self-correctors, introduced independently in [8] and [28]. Self-correctors convert programs that are known to be correct on a fraction δ of inputs into programs that are correct on each input. Self-correctors for values of δ that are larger tha...

361 | Robust characterization of polynomials with applications to program testing. - Rubinfeld, Sudan - 1996 |

353 | Efficient noise-tolerant learning from statistical queries.
- Kearns
- 1998
Citation Context: ...in this direction either tolerate only small amounts of noise [2, 41, 21, 39] (i.e., that the function is modified only at a small fraction of all possible inputs) or assume that the noise is random [1, 26, 20, 25, 33, 13, 36] (i.e., that the decision of whether or not to modify the function at any given input is made by a random process). In contrast, we take the setting to an extreme, by considering a very large amount o...

345 | Improved decoding of Reed-Solomon and Algebraic-Geometric codes.
- Guruswami, Sudan
- 1999
Citation Context: ...but with polynomial dependence on n, the number of variables. Subsequently some new reconstruction algorithms for polynomials have been developed. In particular, Sudan [40], and Guruswami and Sudan [19] have provided new algorithms for reconstructing univariate polynomials from large amounts of noise. Their running time depends only polynomially on d and works for δ = Ω(√(d/|F|)). Notic...

340 | Probabilistic algorithms for sparse polynomials.
- Zippel
- 1979
Citation Context: ...GF(q) form an [N, K, D]_q code, for N = q^n, K = q^(n+d choose d), and D = (q − d)·q^{n−1}. Proof: The parameters N and K follow by definition. The distance D is equivalent to the well-known fact [10, 38, 44] that two degree-d (multivariate) polynomials over GF(q) may agree on at most a d/q fraction of the inputs. Combining Theorem 15 with Proposition 16 (and using γ = d/q in the theorem), we get the follo...

274 | Decoding of Reed-Solomon codes beyond the error-correction bound.
- Sudan
- 1997
Citation Context: ...th exponential dependence on d, but with polynomial dependence on n, the number of variables. Subsequently some new reconstruction algorithms for polynomials have been developed. In particular, Sudan [40], and Guruswami and Sudan [19] have provided new algorithms for reconstructing univariate polynomials from large amounts of noise. Their running time depends only polynomially in d and works for δ =...

255 | Learning from noisy examples.
- Angluin, Laird
- 1988
Citation Context: ...in this direction either tolerate only small amounts of noise [2, 41, 21, 39] (i.e., that the function is modified only at a small fraction of all possible inputs) or assume that the noise is random [1, 26, 20, 25, 33, 13, 36] (i.e., that the decision of whether or not to modify the function at any given input is made by a random process). In contrast, we take the setting to an extreme, by considering a very large amount o...

231 | Toward efficient agnostic learning.
- Kearns, Schapire, et al.
- 1992
Citation Context: ...tuations in which the noise disturbs the outputs for almost all inputs. A second interpretation of the reconstruction problem is within the paradigm of "agnostic learning" introduced by Kearns et al. [23] (see also [29, 30, 24]). In the setting of agnostic learning, the learner is to make no assumptions regarding the natural phenomena underlying the input/output relationship of the function, and the g...

207 | Learning decision trees using the Fourier spectrum.
- Kushilevitz, Mansour
- 1993
Citation Context: ...in this direction either tolerate only small amounts of noise [2, 41, 21, 39] (i.e., that the function is modified only at a small fraction of all possible inputs) or assume that the noise is random [1, 26, 20, 25, 33, 13, 36] (i.e., that the decision of whether or not to modify the function at any given input is made by a random process). In contrast, we take the setting to an extreme, by considering a very large amount o...

184 | Learning in the presence of malicious errors.
- Kearns, Li
- 1993
Citation Context: ...ersistent noise. Here one assumes that the function f is derived from some function in the class C by "adding" noise to it. Typical works in this direction either tolerate only small amounts of noise [2, 41, 21, 39] (i.e., that the function is modified only at a small fraction of all possible inputs) or assume that the noise is random [1, 26, 20, 25, 33, 13, 36] (i.e., that the decision of whether or not to modi...

159 | Hiding Instances in Multioracle Queries
- Beaver, Feigenbaum
- 1990
Citation Context: ...nown to be correct on a fraction δ of inputs into programs that are correct on each input. Self-correctors for values of δ that are larger than 3/4 have been constructed for several (algebraic) functions [5, 8, 9, 28, 34], and in one case this was done for δ > 1/2 [15]. We stress that self-correctors correct a given program using only the information that the program is supposed to be computing a function from a given...

144 | Foundations of Cryptography – Fragments of a Book
- Goldreich
- 1995
Citation Context: ...uch that for this suffix, P(σ) is at least 1/q + ε/2; thus the correct candidate... We refer to the original algorithm as in [17], not to a simpler algorithm which appears in later versions (cf. [27, 16]). Test-prefix(f, ε, n, (c_1, ..., c_i)): Repeat poly(n/ε) times: Pick s_{i+1}, ..., s_n ∈_R GF(q). Let t = poly(n/ε). For k = 1 to t: Pick r_1, ..., r_i ∈_R GF(q); σ^(k) ← f(r, s...

142 | Improved low-degree testing and its applications.
- Arora, Sudan
- 2003
Citation Context: ...le machines, such that for every multivariate polynomial that has agreement δ with the function f, one of these oracle machines, given access to f, computes that polynomial. Recently, Arora and Sudan [4] gave an algorithm for this implicit reconstruction problem. The running time of their algorithm is bounded by a polynomial in n and d, and it works correctly provided that δ ≥ (d^{O(1)})/|F|...; that...

138 | Pseudorandom generators without the xor lemma.
- Sudan, Trevisan, et al.
- 2001
Citation Context: ...(E.g., by applying noise-free polynomial interpolation to each of the oracle machines provided above, and testing the resulting polynomial for agreement with f.) Finally, Sudan, Trevisan, and Vadhan [41] have recently improved the result of [4], further reducing the requirement on δ to δ > 2√(d/|F|). The algorithm of [41] thus subsumes the algorithm of this paper for all choices of parameters, exce...

134 | Error correction of algebraic block codes.
- Berlekamp, Welch
- 1986
Citation Context: ...[5, 28]. For d/|F| → 0, the fraction of errors that a self-corrector could correct was improved to almost 1/4 by [14] and then to almost 1/2 by [15] (using a solution for the univariate case given by [43]). These self-correctors correct a given program using only the information that the program is supposed to be computing a low-degree polynomial. Thus, when the error is larger than 1/2 (or, alternati...

126 | List decoding for noisy channels.
- Elias
- 1957
Citation Context: ...e given word in a δ fraction of the coordinates (i.e., a 1 − δ fraction of the coordinates have been corrupted by errors). This task is referred to in the literature as the "list-decoding" problem [11]. For codes obtained by setting d such that d/|F| → 0, our list-decoding algorithm recovers from errors when the rate of errors approaches 1. We are not aware of any other case where an approach ot...

120 | Learning disjunctions of conjunctions.
- Valiant
- 1985
Citation Context: ...ersistent noise. Here one assumes that the function f is derived from some function in the class C by "adding" noise to it. Typical works in this direction either tolerate only small amounts of noise [2, 41, 21, 39] (i.e., that the function is modified only at a small fraction of all possible inputs) or assume that the noise is random [1, 26, 20, 25, 33, 13, 36] (i.e., that the decision of whether or not to modi...

110 | Self-testing/correcting with applications to numerical problems.
- Blum, Luby, Rubinfeld
- 1993
Citation Context: ...olation problem. Self-Correction: In the case when the noise rate is positive but small, one approach to solving the reconstruction problem is to use self-correctors, introduced independently in [8] and [28]. Self-correctors convert programs that are known to be correct on a fraction δ of inputs into programs that are correct on each input. Self-correctors for values of δ that are larger than 3/4...

105 | Cryptographic primitives based on hard learning problems
- Blum, Furst, et al.
- 1993
Citation Context: ...and d = 1 the problem reduces to the well-known problem of "learning parity with noise" [20], which is commonly believed to be hard when one is only allowed uniformly and independently chosen examples [20, 7, 22]. (Actually, learning parity with noise is considered hard even for random noise, whereas here the noise is adversarial.) 4. In Section 6, we give evidence that the reconstruction problem may be hard,...
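The "learning parity with noise" problem mentioned in this context (the d = 1, F = GF(2) case) is easy to state concretely: recover a secret vector from noisy inner products over GF(2). For small n a brute-force learner over all 2^n parities works; the conjectured hardness only kicks in at large n. A toy sketch with hypothetical names, not the paper's method.

```python
# Brute-force learning parity with noise: given samples (x, <secret,x> + e)
# over GF(2) with each label flipped independently w.p. 0.2, return the
# parity vector with the highest empirical agreement.
import random
from itertools import product

def parity(vec, x):
    """Inner product <vec, x> over GF(2)."""
    return sum(v * xi for v, xi in zip(vec, x)) % 2

def learn_parity(samples, n):
    return max(product((0, 1), repeat=n),
               key=lambda cand: sum(parity(cand, x) == y for x, y in samples))

random.seed(0)
n, secret, noise_rate = 6, (1, 0, 1, 1, 0, 1), 0.2
samples = []
for _ in range(300):
    x = tuple(random.randrange(2) for _ in range(n))
    y = parity(secret, x) ^ (random.random() < noise_rate)  # flip w.p. 0.2
    samples.append((x, y))
assert learn_parity(samples, n) == secret
```

With 300 samples the true secret agrees on roughly 80% of labels while every other candidate agrees on about 50%, so the maximizer is the secret with overwhelming probability; the brute-force search over 2^n candidates is what becomes infeasible as n grows.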

103 | New directions in testing. - Lipton - 1989 |

102 | Effective Polynomial Computation.
- Zippel
- 1993
Citation Context: ...ate of agreement δ. Polynomial interpolation: When the noise rate is 0, our problem is simply that of polynomial interpolation. In this case the problem is well analyzed and the reader is referred to [48], for instance, for a history of the polynomial interpolation problem. Self-Correction: In the case when the noise rate is positive but small, one approach to solving the reconstruction problem i...

94 | A probabilistic remark on algebraic program testing,"
- DeMillo, Lipton
- 1978
Citation Context: ...GF(q) form an [N, K, D]_q code, for N = q^n, K = q^(n+d choose d), and D = (q − d)·q^{n−1}. Proof: The parameters N and K follow by definition. The distance D is equivalent to the well-known fact [10, 38, 44] that two degree-d (multivariate) polynomials over GF(q) may agree on at most a d/q fraction of the inputs. Combining Theorem 15 with Proposition 16 (and using γ = d/q in the theorem), we get the follo...

69 | Self-testing/correcting for polynomials and for approximate functions - Gemmell, Lipton, et al. - 1991 |

60 | Interpolating polynomials from their values - Zippel - 1990 |

59 | Efficient and secure pseudo-random number generation.
- Vazirani, Vazirani
- 1984
Citation Context: ...ernative construction of generic hard-core functions was also presented in [17], where its security was reduced to the security of a specific hard-core predicate via a “computational XOR lemma” due to [43]. For further details, see [16]. Loosely speaking, a function h : {0, 1}* → {0, 1}* is called a hard-core of a function f : {0, 1}* → {0, 1}* if h is polynomial-time, but given f(x) it is infeasib...

56 | Learning from Good and Bad Data.
- Laird
- 1988
Citation Context: ...in this direction either tolerate only small amounts of noise [2, 42, 21, 39] (i.e., that the function is modified only at a small fraction of all possible inputs) or assume that the noise is random [1, 26, 20, 25, 33, 13, 36] (i.e., that the decision of whether or not to modify the function at any given input is made by a random process). In contrast, we take the setting to an extreme, by considering a very large amount o...

50 | Randomized interpolation and approximation of sparse polynomials. - Mansour - 1995 |

49 | Reconstructing Algebraic Functions from Mixed Data
- Ar, Lipton, et al.
- 1999
Citation Context: ...t least a δ fraction of the inputs but that for some (say 2δ) fraction of the inputs f does not agree with any of the g_i's. This setting is closely related to the setting investigated by Ar et al. [3], except that their techniques require that the fraction of inputs left unexplained by any g_i be smaller than the fraction of inputs on which each g_i agrees with f. We believe that our relaxation m...

49 | Highly resilient correctors for polynomials
- Gemmell, Sudan
- 1992
Citation Context: ...ions over a finite field F, |F| ≥ d + 2, were found by [5, 28]. For d/|F| → 0, the fraction of errors that a self-corrector could correct was improved to almost 1/4 by [14] and then to almost 1/2 by [15] (using a solution for the univariate case given by [43]). These self-correctors correct a given program using only the information that the program is supposed to be computing a low-degree polynomial...

48 | Types of noise in data for concept learning - Sloan - 1988 |

38 | Efficient agnostic pac-learning with simple hypotheses
- Maass
- 1994
Citation Context: ...ch the noise disturbs the outputs for almost all inputs. A second interpretation of the reconstruction problem is within the paradigm of "agnostic learning" introduced by Kearns et al. [23] (see also [29, 30, 24]). In the setting of agnostic learning, the learner is to make no assumptions regarding the natural phenomena underlying the input/output relationship of the function, and the goal of the learner is t...

30 | New directions in testing. Distributed computing and cryptography: proceedings of a DIMACS Workshop.
- Lipton
- 1989
Citation Context: ...problem. Self-Correction: In the case when the noise rate is positive but small, one approach to solving the reconstruction problem is to use self-correctors, introduced independently in [8] and [28]. Self-correctors convert programs that are known to be correct on a fraction δ of inputs into programs that are correct on each input. Self-correctors for values of δ that are larger than 3/4 hav...

25 | Self-testing/correcting for polynomials and for approximate functions.
- Gemmell, Lipton, et al.
- 1991
Citation Context: ...t are degree d polynomial functions over a finite field F, |F| ≥ d + 2, were found by [5, 28]. For d/|F| → 0, the fraction of errors that a self-corrector could correct was improved to almost 1/4 by [14] and then to almost 1/2 by [15] (using a solution for the univariate case given by [43]). These self-correctors correct a given program using only the information that the program is supposed to be co...

23 | Agnostic PAC-learning of functions on analog neural nets (extended abstract
- Maass
- 1994
Citation Context: ...ch the noise disturbs the outputs for almost all inputs. A second interpretation of the reconstruction problem is within the paradigm of "agnostic learning" introduced by Kearns et al. [23] (see also [29, 30, 24]). In the setting of agnostic learning, the learner is to make no assumptions regarding the natural phenomena underlying the input/output relationship of the function, and the goal of the learner is t...

22 | Randomness and Non-determinism.
- Levin
- 1993
Citation Context: ...ction f′(x_1, ..., x_i) ≝ f(x_1, ..., x_i, s_{i+1}, ..., s_n). It is possible to... We refer to the original algorithm as in [17], not to a simpler algorithm that appears in later versions (cf. [27, 16]). Test-prefix(f, ε, n, (c_1, ..., c_i)): Repeat poly(n/ε) times: Pick s̄ = s_{i+1}, ..., s_n ∈_R GF(q). Let t ≝ poly(n/ε). For k = 1 t...

20 | On the robustness of functional equations
- Rubinfeld
- 1994
Citation Context: ...re known to be correct on a fraction δ of inputs into programs that are correct on each input. Self-correctors for values of δ that are larger than 3/4 have been constructed for several functions [5, 8, 9, 28, 34]. Self-correctors correcting a 1/Θ(d) fraction of error for f that are degree d polynomial functions over a finite field F, |F| ≥ d + 2, were found by [5, 28]. For d/|F| → 0, the fraction of error...

20 | A noise model on learning sets of strings - Sakakibara, Siromoney - 1992 |

19 | On learning from queries and counterexamples in the presence of noise.
- Sakakibara
- 1991
Citation Context: ...in this direction either tolerate only small amounts of noise [2, 41, 21, 39] (i.e., that the function is modified only at a small fraction of all possible inputs) or assume that the noise is random [1, 26, 20, 25, 33, 13, 36] (i.e., that the decision of whether or not to modify the function at any given input is made by a random process). In contrast, we take the setting to an extreme, by considering a very large amount o...

19 | Robust characterizations of polynomials and their applications to program testing. IBM - Rubinfeld, Sudan - 1993 |

19 | Private communication - Coppersmith - 1987 |

18 | Learning switching concepts - Blum, Chalasani - 1992 |

17 | Robust functional equations and their applications to program testing - Rubinfeld - 1999 |

16 | Exact identification of read-once formulas using fixed points of amplification functions.
- Goldman, Kearns, et al.
- 1993

15 | Learning to model sequences generated by switching distributions - Freund, Ron - 1995 |

14 | Learning with malicious membership queries and exceptions.
- Angluin, Krikis
- 1994
Citation Context: ...ersistent noise. Here one assumes that the function f is derived from some function in the class C by "adding" noise to it. Typical works in this direction either tolerate only small amounts of noise [2, 41, 21, 39] (i.e., that the function is modified only at a small fraction of all possible inputs) or assume that the noise is random [1, 26, 20, 25, 33, 13, 36] (i.e., that the decision of whether or not to modif...

14 | Efficient learning of continuous neural networks.
- Koiran
- 1994
Citation Context: ...ch the noise disturbs the outputs for almost all inputs. A second interpretation of the reconstruction problem is within the paradigm of "agnostic learning" introduced by Kearns et al. [23] (see also [29, 30, 24]). In the setting of agnostic learning, the learner is to make no assumptions regarding the natural phenomena underlying the input/output relationship of the function, and the goal of the learner is t...

12 | Symbolic Logic.
- unknown authors
- 1971
Citation Context: ...uch that for this suffix, P(σ) is at least 1/q + ε/2; thus the correct candidate... We refer to the original algorithm as in [17], not to a simpler algorithm which appears in later versions (cf. [27, 16]). Test-prefix(f, ε, n, (c_1, ..., c_i)): Repeat poly(n/ε) times: Pick s_{i+1}, ..., s_n ∈_R GF(q). Let t = poly(n/ε). For k = 1 to t: Pick r_1, ..., r_i ∈_R GF(q); σ^(k) ← f(r, s...

10 | A note on self-testing/correcting methods for trigonometric functions.
- Cleve, Luby
- 1990
Citation Context: ...re known to be correct on a fraction δ of inputs into programs that are correct on each input. Self-correctors for values of δ that are larger than 3/4 have been constructed for several functions [5, 8, 9, 28, 34]. Self-correctors correcting a 1/Θ(d) fraction of error for f that are degree d polynomial functions over a finite field F, |F| ≥ d + 2, were found by [5, 28]. For d/|F| → 0, the fraction of error...

10 | Self-testing/correcting for polynomials and for approximate functions.
- Gemmell, Lipton, Rubinfeld, Sudan, Wigderson
- 1991
Citation Context: ...egree d polynomial functions over a finite field F, |F| ≥ d + 2, were found by [5, 28]. For d/|F| → 0, the fraction of errors that a self-corrector could correct was improved to almost 1/4 by [14] and then to almost 1/2 by [15] (using a solution for the univariate case given by [45]). ...polynomially in d and works for δ = Ω(√(d/|F|)). N...

8 | Reconstructing algebraic functions from erroneous data.
- Ar, Lipton, Rubinfeld, Sudan
- 1999
Citation Context: ...e that each g_i agrees with f on at least a δ fraction of the inputs but that for some (say 2δ) fraction of the inputs f does not agree with any of the g_i's. This setting was investigated by Ar et al. [3]. The reconstruction problem described above may be viewed as a (simpler) abstraction of the problem considered in [3]. As in the case of learning with noise, there is no explicit requirement in the s...

7 | Learning fallible finite state automata.
- Ron, Rubinfeld
- 1993

6 | Learning from good data and bad.
- Laird
- 1987

6 | Randomized approximation and interpolation of sparse polynomials - Mansour - 1995 |

5 | On the learnability of discrete distributions (extended abstract
- Kearns, Mansour, et al.
- 1994
Citation Context: ...and d = 1 the problem reduces to the well-known problem of "learning parity with noise" [20], which is commonly believed to be hard when one is only allowed uniformly and independently chosen examples [20, 7, 22]. (Actually, learning parity with noise is considered hard even for random noise, whereas here the noise is adversarial.) 4. In Section 6, we give evidence that the reconstruction problem may be hard,...

5 | Hiding Instances in Multioracle Queries - Beaver, Feigenbaum - 1990 |

5 | A hard-core predicate for ANY one-way function, STOC '89 - Goldreich, Levin |

4 | Reconstructing randomly sampled multivariate polynomials from highly noisy data
- Wasserman
- 1998
Citation Context: ...s of [40] also provide some reconstruction algorithms for multivariate polynomials, but not for as low an error as given here. Also in his case, the running time grows exponentially with n. Wasserman [42] gives an algorithm for reconstructing polynomials from noisy data that works without making queries. The running time here also grows exponentially in n and polynomially in d. As noted earlier (see R...

4 | Learning fallible finite state automata - Ron, Rubinfeld - 1995 |

4 | Learning fallible deterministic finite automata.
- Ron, Rubinfeld
- 1995
Citation Context: ...in this direction either tolerate only small amounts of noise [2, 42, 21, 39] (i.e., that the function is modified only at a small fraction of all possible inputs) or assume that the noise is random [1, 26, 20, 25, 33, 13, 36] (i.e., that the decision of whether or not to modify the function at any given input is made by a random process). In contrast, we take the setting to an extreme, by considering a very large amount o...

3 | Types of Noise in Data for Concept Learning (Extended Abstract).
- Sloan
- 1988
Citation Context: ...ersistent noise. Here one assumes that the function f is derived from some function in the class C by "adding" noise to it. Typical works in this direction either tolerate only small amounts of noise [2, 41, 21, 39] (i.e., that the function is modified only at a small fraction of all possible inputs) or assume that the noise is random [1, 26, 20, 25, 33, 13, 36] (i.e., that the decision of whether or not to modi...

3 | Learning polynomials with queries: The highly noisy case, preprint, September 13 - Goldreich, Rubinfeld, Sudan - 1998 |

2 | Toward efficient agnostic learning (extended abstract).
- Kearns, Schapire, et al.
- 1992
Citation Context: ...A second interpretation of the reconstruction problem is within the framework of “agnostic learning” introduced by Kearns et al. [23] (see also [29, 30, 24]). In the setting of agnostic learning, the learner is to make no assumptions regarding the natural phenomenon underlying the input/output relationship of the function, and the...

1 | Efficient Learning of Continuous Neural Nets - Koiran - 1994 |

1 | Learning with malicious membership queries and exceptions.
- Angluin, Krikis
- 1994
Citation Context: ...ersistent noise. Here one assumes that the function f is derived from some function in the class C by “adding” noise to it. Typical works in this direction either tolerate only small amounts of noise [2, 42, 21, 39] (i.e., that the function is modified only at a small fraction of all possible inputs) or assume that the noise is random [1, 26, 20, 25, 33, 13, 36] (i.e., that the decision of whether or not to modi...