### Table 1. Learning Algorithm

2006

"... In PAGE 7: ...Table1 summarizes the algorithm presented in this section. Note that, as presented here, the algorithm works in batch mode.... ..."

Cited by 1

### Table 1: RDN learning algorithm.

2007

"... In PAGE 12: ... The algorithm input consists of: (1) GD: a relational data graph, (2) R: a conditional relational learner, and (3) Qt: a set of queries12 that specify the relational neighborhood considered in R for each type T. Table1 outlines the learning algorithm in pseudocode. The algorithm cycles over each attribute of each item type and learns a separate CPD, conditioned on the other values in the training data.... ..."

Cited by 5
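
The RDN learning loop quoted above can be sketched in a few lines: cycle over each attribute of each item type and fit a separate conditional model from the values found in each item's relational neighborhood. The data-graph encoding, the `queries` summaries, and the toy `fit_cpd` estimator below are illustrative assumptions, not the paper's actual representation.

```python
from collections import Counter

def fit_cpd(examples):
    """Toy CPD: empirical distribution of the target value given a
    discretised neighbourhood summary (a stand-in for the conditional
    relational learner R)."""
    counts = {}
    for context, value in examples:
        counts.setdefault(context, Counter())[value] += 1
    return {ctx: {v: n / sum(c.values()) for v, n in c.items()}
            for ctx, c in counts.items()}

def learn_rdn(data_graph, queries):
    """Cycle over each attribute of each item type and learn a CPD
    conditioned on values in the item's relational neighbourhood."""
    cpds = {}
    for item_type, attributes in data_graph["schema"].items():
        for attr in attributes:
            examples = []
            for item in data_graph["items"][item_type]:
                # queries[T] plays the role of Qt: it summarises the
                # relational neighbourhood of the item for type T
                context = queries[item_type](data_graph, item)
                examples.append((context, item[attr]))
            cpds[(item_type, attr)] = fit_cpd(examples)
    return cpds
```

The key structural point from the snippet survives even in this sketch: each (type, attribute) pair gets its own independently learned CPD.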

### Table 3: Learning selector algorithms

"... In PAGE 7: ...ttribute domain) and measurement (i.e. data matrix attribute values), statistical methods are based on the selector frequency which is the number of instances that each selector describes, and information-theory methods are based on Shanon apos;s entropy function. Table3 shows some learning selector algorithms sorted according to the above criteria. Exact values and positive exact values generate a selector for each di erent value of each attribute in the data matrix.... ..."

### Table 1: Rule learning algorithms.

"... In PAGE 2: ... Since the compu- tational cost grows as the product of the number of generated rules and the number of training examples, IREP normally allows substantially larger training sets to be handled within a given amount of time compared to using separate-and-conquer with no pruning. In Table1 , two variants of incremental reduced error pruning are shown. The flrst, called IREP-O, generates order-dependent rules and is a variant of the algorithms presented in [15, 7], while the second, called IREP-U, generates order-independent rules and is taken from [3].... ..."

### Table 2: The MAXQ-0 learning algorithm.

2000

"... In PAGE 24: ... We will rst prove its convergence properties and then show how it can be extended to give the second algorithm, MAXQ-Q, which works with general pseudo-reward functions. Table2 gives pseudo-code for MAXQ-0. MAXQ-0 is a recursive function that executes the current exploration policy starting at Max node i in state s.... In PAGE 25: ...There are three things that must be speci ed in order to make this algorithm description complete. First, to keep the pseudo-code readable, Table2 does not show how \ancestor termi- nation quot; is handled. Recall that after each action, the termination predicates of all of the subroutines on the calling stack are checked.... ..."

Cited by 239
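
The recursive structure the snippet describes, a Max node that either executes a primitive action or repeatedly invokes child subtasks until its termination predicate holds, can be sketched as below. The class interface, the epsilon-greedy exploration policy, and the zero completion target at termination are illustrative assumptions, and, like the paper's own simplified listing, this sketch omits ancestor-termination checking.

```python
import random

class MaxQ0:
    """Sketch of a recursive MAXQ-0-style learner over a task hierarchy."""

    def __init__(self, env, children, is_primitive, terminated,
                 alpha=0.25, gamma=1.0, epsilon=0.1):
        self.env = env                    # exposes step(action, state) -> (reward, state')
        self.children = children          # Max node -> list of child nodes
        self.is_primitive = is_primitive  # Max node -> bool
        self.terminated = terminated      # (node, state) -> bool
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.V = {}   # (primitive node, state) -> value
        self.C = {}   # (node, state, child) -> completion value

    def value(self, i, s):
        """Decomposed value: V for primitives, max over children otherwise."""
        if self.is_primitive(i):
            return self.V.get((i, s), 0.0)
        return max(self.value(a, s) + self.C.get((i, s, a), 0.0)
                   for a in self.children[i])

    def run(self, i, s):
        """Execute Max node i from state s; return (steps taken, final state)."""
        if self.is_primitive(i):
            r, s2 = self.env.step(i, s)
            v = self.V.get((i, s), 0.0)
            self.V[(i, s)] = v + self.alpha * (r - v)
            return 1, s2
        steps = 0
        while not self.terminated(i, s):
            if random.random() < self.epsilon:
                a = random.choice(self.children[i])
            else:
                a = max(self.children[i],
                        key=lambda c: self.value(c, s) + self.C.get((i, s, c), 0.0))
            n, s2 = self.run(a, s)  # recursive call: child may take many steps
            target = 0.0 if self.terminated(i, s2) \
                else (self.gamma ** n) * self.value(i, s2)
            c = self.C.get((i, s, a), 0.0)
            self.C[(i, s, a)] = c + self.alpha * (target - c)
            steps += n
            s = s2
        return steps, s
```

The completion-function update discounted by `gamma ** n` is what lets a parent node credit a child subtask that ran for many primitive steps, which is the core of the MAXQ value decomposition.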