### Table 1: Experimental results with repeated market conditions and three variations of MinneTAC for order probability, price and price trend predictions. Mean profit and standard deviation results are based on 23 games. Regime-M uses the regime model with Markov prediction process, and Regime-E uses the regime model with exponential smoother lookup process.

2007

Cited by 1

### Table 1: Multiagent Q-Learning: Centralized.

"... In PAGE 9: ...1 Pseudocode: Centralized and Decentralized The generalization of Q-learning from MDPs to Markov games is intuitively straightforward. However, one important application-specific issue arises: can we assume the existence of a trusted third party who can act as a referee, or central coordinator? Or need we decentralize the implementation of multiagent Q-learning? We present two generic formulations of multiagent Q-learning: one centralized (Table 1), and one decentralized (Table 2). Given a Markov game, the multiagent Q-learning process is initialized at some state with some action profile, after which the game is played as follows: First (step 1), the current action profile is simulated in the current state.... ..."

Cited by 37
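
The centralized formulation quoted above (a coordinator that simulates the current joint action profile and updates shared Q-values) can be illustrated with a minimal sketch. This is not the paper's Table 1: it assumes a single-state (repeated) Markov game with a common reward, and the function names and the toy coordination game are invented for illustration.

```python
import random
from collections import defaultdict
from itertools import product

def centralized_multiagent_q(joint_actions, reward_fn, n_steps=5000,
                             alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    # A trusted coordinator keeps one Q-value per joint action profile,
    # simulates the chosen profile each step (the excerpt's "step 1"),
    # and updates from the observed shared reward.
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(n_steps):
        # epsilon-greedy choice of a joint action profile
        if rng.random() < epsilon:
            a = rng.choice(joint_actions)
        else:
            a = max(joint_actions, key=lambda ja: Q[ja])
        r = reward_fn(a)
        # single-state game: the "next state" is the same state,
        # so the bootstrap target is the best joint-action value
        best_next = max(Q[ja] for ja in joint_actions)
        Q[a] += alpha * (r + gamma * best_next - Q[a])
    return Q

# Toy coordination game: matching actions pay 1, except (1, 1) pays 2.
joint = list(product([0, 1], repeat=2))
def reward(ja):
    return 2.0 if ja == (1, 1) else (1.0 if ja[0] == ja[1] else 0.0)

Q = centralized_multiagent_q(joint, reward)
greedy = max(joint, key=lambda ja: Q[ja])  # coordinator's learned profile
```

In this toy run the coordinator's greedy profile converges to the highest-paying joint action; the decentralized variant (the paper's Table 2) would instead have each agent maintain its own Q-table without the referee.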

### Table 6.4: Experimental setup with controlled market conditions and different variations of MinneTAC for order probability, price and price trend predictions. Mean profit and standard deviation results are based on 23 games per set of experiments. Regime MP 1-day stands for regime prediction using a 1-day Markov transition matrix, Regime MP n-day uses the n-day interval Markov transition matrix, and Regime ExpS does regime prediction via an exponential smoother lookup process.

2007

### Table 11. Markov partitions.

"... In PAGE 32: ...Appendix: Dimension data Markov partitions. Table 11 shows, for certain individual calculations, the size of the Markov partition |P| used to determine . These partitions were constructed by adaptive refinement of a given partition until max diam P_i < r was achieved; the value of r is in column 3.... ..."

Cited by 9
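
The stopping rule in the excerpt (refine until max diam P_i < r) can be sketched in a toy form. This is only an illustrative 1-D version over intervals, with invented names; the paper's Markov partitions live on more general spaces and the refinement respects the dynamics, which is not modeled here.

```python
def refine_until(cells, r):
    # Adaptively refine a partition: bisect any cell whose diameter
    # (interval length) is >= r, until max diam P_i < r holds.
    out, stack = [], list(cells)
    while stack:
        a, b = stack.pop()
        if b - a < r:
            out.append((a, b))  # cell already fine enough
        else:
            m = (a + b) / 2.0   # bisect and re-examine both halves
            stack.extend([(a, m), (m, b)])
    return sorted(out)

# Refining [0, 1] with r = 0.3 halves each cell twice (1 -> 0.5 -> 0.25).
partition = refine_until([(0.0, 1.0)], 0.3)
```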

### Table 3. Dependency on the model: Markov orders 0, 1 and 2. (Columns: word, Markov 0, Markov 1, Markov 2)

### Table 1. Markov Localization Algorithm

"... In PAGE 3: ... 3.1 Markov Localization algorithm The basic Markov Localization algorithm is shown in Table 1. We use the same notation as in (D.... ..."

### Table 1: The Markov Blanket Algorithm.

1999

"... In PAGE 4: ...children and parents of children of X. An example Markov blanket is shown in Fig. 1. Note that any of the blanket nodes, say Y, is dependent with X given B(X) \ {Y}. In Table 1 we present an algorithm for the recovery of the Markov blanket of X based on pairwise independence tests. It consists of two phases, a growing and a shrinking one.... ..."

Cited by 32
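
The two-phase (growing, then shrinking) recovery described in the excerpt can be sketched as follows. The independence test `indep(A, B, S)` is a stand-in oracle and the toy example below is invented; the paper would run statistical pairwise independence tests on data, and this sketch is not its Table 1.

```python
def markov_blanket(X, variables, indep):
    # Grow-shrink recovery of the Markov blanket B(X) of X.
    # indep(A, B, S) is assumed to answer: is A independent of B given S?
    B = []
    # Growing phase: add any variable still dependent on X given B.
    changed = True
    while changed:
        changed = False
        for Y in variables:
            if Y != X and Y not in B and not indep(X, Y, B):
                B.append(Y)
                changed = True
    # Shrinking phase: drop variables made independent by the rest of B.
    for Y in list(B):
        if indep(X, Y, [Z for Z in B if Z != Y]):
            B.remove(Y)
    return B

# Toy oracle: X's true blanket is {'A', 'C'}; all other variables are
# independent of X whatever the conditioning set.
def toy_indep(X, Y, S):
    return Y not in {'A', 'C'}

blanket = markov_blanket('X', ['A', 'W', 'C', 'D'], toy_indep)
```

With a real statistical test the growing phase can admit false positives, which is exactly what the shrinking phase is there to remove.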