
## Error Correcting Tournaments (2008)

Citations: 26 (4 self)

### Citations

719 | Solving multiclass learning problems via error-correcting output codes
- Dietterich, Bakiri
- 1995
Citation Context: ...rounds for every label [10], use more information than a single bit per pairing [2]. Perhaps the best known reduction result uses error-correcting output codes (ECOC) to predict the most likely label [7, 12]. There are two substantial drawbacks of the ECOC approach, which the tournament approach addresses. 1. Tournament reductions have a loss which is linear in the adversary’s per-round budget. The best ...
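The ECOC idea referenced in this context can be sketched minimally: assign each class a binary codeword, train one binary classifier per codeword bit, and predict the class whose codeword is nearest in Hamming distance to the classifiers' outputs. The code matrix and helper names below are illustrative assumptions, not the exact construction from [7, 12].

```python
# Hypothetical 4-class, 5-bit code matrix; each column defines one
# induced binary problem, each row is a class codeword.
CODE = [
    (0, 0, 1, 1, 0),  # class 0
    (0, 1, 0, 1, 1),  # class 1
    (1, 0, 0, 0, 1),  # class 2
    (1, 1, 1, 0, 0),  # class 3
]

def hamming(a, b):
    """Number of positions where two bit vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

def ecoc_predict(bits):
    """Decode: pick the class whose codeword is Hamming-closest
    to the vector of binary classifier outputs."""
    return min(range(len(CODE)), key=lambda c: hamming(CODE[c], bits))
```

With these codewords, a single flipped bit still decodes correctly: `ecoc_predict([1, 0, 1, 1, 0])` recovers class 0 from a one-bit corruption of class 0's codeword.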

556 | Reducing multiclass to binary: A unifying approach for margin classifiers
- Allwein, Schapire, et al.
- 2000
Citation Context: ...eduction should yield an optimal multiclass classifier. Known consistent methods are inadequate because they have n − 1 rounds for every label [10], use more information than a single bit per pairing [2]. Perhaps the best known reduction result uses error-correcting output codes (ECOC) to predict the most likely label [7, 12]. There are two substantial drawbacks of the ECOC approach, which the tourna...

240 | Applied regression analysis, linear models, and related methods
- Fox
- 1997
Citation Context: ... imply a PECOC multiclass regret of 0.4. 2. Is there a consistent reduction that requires just O(log k) computation, matching the information-theoretic lower bound? The well known tree reduction (see [9]) distinguishes between the labels using a balanced binary tree, where each non-leaf node predicts “Is the correct multiclass label to the left or right?”. As shown in Section 2, this method is incon...
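The tree reduction described in this context can be sketched as follows; this is a minimal sketch in which the trained binary classifier at each internal node is abstracted as a callback, with names assumed for illustration:

```python
def tree_predict(labels, node_predict):
    """Balanced binary tree over the labels: each internal node asks
    "is the correct label in the left half?" and we descend accordingly.
    Uses O(log k) binary predictions for k labels."""
    while len(labels) > 1:
        mid = len(labels) // 2
        left, right = labels[:mid], labels[mid:]
        labels = left if node_predict(left, right) else right
    return labels[0]
```

Note that, as the surrounding text says, this method is inconsistent in general: a single wrong answer anywhere on the root-to-leaf path loses the correct label.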

155 | Cost-sensitive learning by cost-proportionate example weighting
- Zadrozny, Langford, et al.
Citation Context: ...to the leaves. When reducing to importance-weighted classification, the theorem statement depends on importance weights. To remove the importances, we compose the reduction with the Costing reduction [19], which alters the underlying distribution using rejection sampling on the importance weights. This composition transforms DFT into a distribution D′ over binary examples. We use the folk theorem f...
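The rejection-sampling step of the Costing reduction mentioned in this context can be sketched as follows; the function name and the `(x, y, importance)` example format are assumptions for illustration:

```python
import random

def costing_resample(examples, max_importance, rng=None):
    """Convert an importance-weighted sample into an unweighted one by
    rejection sampling: keep each example with probability w / max_importance,
    so kept examples are distributed proportionally to their importances."""
    rng = rng or random.Random(0)
    kept = []
    for x, y, w in examples:
        if rng.random() < w / max_importance:
            kept.append((x, y))
    return kept
```

An example with zero importance is never kept, while one carrying the maximum importance is always kept; everything in between is kept proportionally.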

93 | Multi-label prediction via compressed sensing
- Hsu, Kakade, et al.
- 2009
Citation Context: ...of classes k. When only a constant number of labels have non-zero probability given x, the complexity can be reduced to O(log k) examples per multiclass example and O(k log k) computation per example [13]. This leads to several questions: 1. Is there a consistent reduction from multiclass to binary classification that does not have a square root dependence [17]? For example, an average binary regret o...

45 | TrueSkill(TM): A Bayesian skill rating system
- Herbrich, Minka, et al.
Citation Context: ...all of them are based on the assumption that the best player beats any other player with probability greater than 1/2 in any individual game, and all outcomes are independent. Herbrich and Graepel [11] model the performance of each player as a normally distributed random variable centered around the skill level; the outcome of each pairing is determined by the outcomes of the corresponding random v...
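The Gaussian performance model described in this context can be sketched in a few lines; the noise parameter and function name are assumptions for illustration, not TrueSkill's actual factor-graph inference:

```python
import random

def play_match(skill_a, skill_b, rng, noise=1.0):
    """Each player's performance in a single game is a normal draw
    centered on their skill; the higher draw wins the pairing."""
    return "a" if rng.gauss(skill_a, noise) > rng.gauss(skill_b, noise) else "b"
```

Under this model the better player wins each independent game with probability greater than 1/2, which is exactly the kind of assumption the surrounding text points out.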

37 | Computing with Unreliable Information
- Feige, Peleg, et al.
- 1990
Citation Context: ...they do not properly capture it. Previous Approaches: A common approach to analyzing tournaments is to use probabilistic assumptions about the outcomes of different pairings. For example, Feige et al. [8] assume that each outcome has a fixed probability of being erroneous, independently of other outcomes. Adler et al. [1] consider several probabilistic models (noise models), but all of them are based ...

36 | Sensitive error correcting output codes
- Langford, Beygelzimer
- 2005
Citation Context: ...rounds for every label [10], use more information than a single bit per pairing [2]. Perhaps the best known reduction result uses error-correcting output codes (ECOC) to predict the most likely label [7, 12]. There are two substantial drawbacks of the ECOC approach, which the tournament approach addresses. 1. Tournament reductions have a loss which is linear in the adversary’s per-round budget. The best ...

28 | Searching in the presence of linearly bounded errors
- Aslam, Dhagat
- 1991
Citation Context: ... can be used in one round, if the adversary desires, unlike in the prefix-bounded model, where the number of incorrect responses at no point can exceed some constant fraction of the number of queries [5, 3]. We also strengthen the adversary by allowing her to choose skill levels of players and charging her according to the difference in the skills for every missort (rather than charging the same amount ...

20 | Multiclass classification with filter trees
- Beygelzimer, Langford, et al.
- 2007
Citation Context: ...hyperplane, but subsets of the classes are not. Further details completely covering the single elimination tournament case are presented in a technical report (not intended as a separate publication) [4]. 5 Lower Bound: The first lower bound says that for any algorithm B, there exists an adversary A with the average per-round error r such that A can make B incur loss 2r even if B knows the error budge...

20 | On fault-tolerant networks for sorting
- Yao, Yao
- 1985
Citation Context: ...tcomes of the corresponding random variables. All these assumptions do not fit our understanding of what a bribing bookie or some other adversary is capable of. In comparator-based selection networks [6, 14], the adversary can only fail to sort an unsorted pair (passing it on without any processing), but she cannot missort an already sorted pair. The adversary we are concerned with here could defeat such...

20 | Estimating class membership probabilities using classifier learners
- Langford, Zadrozny
Citation Context: ...iclass regret may be as high as √(2kr), where r is the average squared-loss regret on the induced problems, which is upper bounded by the average binary classification regret via the Probing reduction [15]. The probabilistic error correcting output code approach (PECOC) [14] reduces k-class classification to learning O(k) regressors on the interval [0, 1], creating O(k) binary examples per multiclass ex...

16 | On selecting the largest element in spite of erroneous information
- Ravikumar, Ganesan, et al.
- 1987
Citation Context: ...winner is viable. A simple way to avoid this problem is to use repeated comparisons, as in the best four-of-seven playoff in the World Series baseball championship. Ravikumar, Ganesan, and Lakshmanan [13] present a strategy which reliably finds the largest element in an n-element set with at most (e+1)n−1 comparisons, where e (known to the algorithm) is the number of times the adversary can lie. This ...
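The repeated-comparison idea invoked in this context (as in a best four-of-seven series) can be sketched as a majority vote: repeating a comparison 2e + 1 times defeats an adversary who can lie at most e times on that pairing. The callback name is an assumption for illustration.

```python
def repeated_compare(compare, e):
    """Ask the same comparison 2e + 1 times and take the majority.
    An adversary limited to e lies can flip at most e of the answers,
    so the majority vote still reflects the true outcome."""
    votes = sum(1 for _ in range(2 * e + 1) if compare())
    return votes > e
```

For example, with e = 2 the comparison is asked five times, and even two lies leave three truthful answers in the majority.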

14 | Private communication
- Williamson
Citation Context: ...e and O(k log k) computation per example [13]. This leads to several questions: 1. Is there a consistent reduction from multiclass to binary classification that does not have a square root dependence [17]? For example, an average binary regret of just 0.01 may imply a PECOC multiclass regret of 0.4. 2. Is there a consistent reduction that requires just O(log k) computation, matching the information th...

13 | Selection in the presence of noise: The design of playoff systems
- Adler, Gemmell, et al.
- 1994
Citation Context: ...ry classification regret via the Probing reduction [15]. The probabilistic error correcting output code approach (PECOC) [14] reduces k-class classification to learning O(k) regressors on the interval [0, 1], creating O(k) binary examples per multiclass example at both training and test time, with a test-time computation of O(k^2). The resulting multiclass regret is bounded by 4√r, where r is the aver...

6 | Classification by pairwise coupling, NIPS
- Hastie, Tibshirani
- 1997
Citation Context: ...inary classification problems are solved optimally, the reduction should yield an optimal multiclass classifier. Known consistent methods are inadequate because they have n − 1 rounds for every label [10], use more information than a single bit per pairing [2]. Perhaps the best known reduction result uses error-correcting output codes (ECOC) to predict the most likely label [7, 12]. There are two subs...

5 | Reliable minimum finding comparator networks
- Denejko, Diks, et al.
- 2000
Citation Context: ...tcomes of the corresponding random variables. All these assumptions do not fit our understanding of what a bribing bookie or some other adversary is capable of. In comparator-based selection networks [6, 14], the adversary can only fail to sort an unsorted pair (passing it on without any processing), but she cannot missort an already sorted pair. The adversary we are concerned with here could defeat such...