Results 1 - 10 of 17,810
Table 2: Non-cooperative and cooperative costs (×10⁻³) and relative bargaining power in the Nash Bargaining solutions
"... In PAGE 13: ... In the cooperative case, the common money supply and the fiscal deficits are expanded. In the first (I) row of Table 2 we tabulate the welfare losses that result under non-cooperative and cooperative policy design in the EMU, and the bargaining weights in the cooperative decision making between the monetary and fiscal authorities. ... In PAGE 14: ... The adjustment in country 1 is approximately the same as before. According to the second (II) line of Table 2, this lower labor market flexibility entails considerable welfare losses for country 2 compared to the base case (I), both under non-cooperative and cooperative macroeconomic policy design. In this perspective, it is interesting to note that various EU countries are currently undertaking labor market and institutional reforms aimed at increasing labor market flexibility. ... ..."
Table 3. Algorithm: Minimax-Q and Nash-Q. The difference between the algorithms is in the value function and the Q values: Minimax-Q uses the linear programming solution for zero-sum games, and Nash-Q uses the quadratic programming solution for general-sum games. Also, the Q values in Nash-Q are a vector of expected rewards, one entry per player.
2000
"... In PAGE 4: ... .2.1 MINIMAX-Q Littman (1994) extended the traditional Q-Learning algorithm for MDPs to zero-sum stochastic games. The algorithm is shown in Table 3. The notion of a Q function is extended to maintain the value of joint actions, and the backup operation computes the value of states differently. ... In PAGE 5: ... .2.2 NASH-Q Hu & Wellman (1998) extended the Minimax-Q algorithm to general-sum games. The algorithm is structurally identical and is also shown in Table 3. The extension requires that each agent maintain Q values for all the other agents. ... ..."
Cited by 17
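To make the Table 3 distinction concrete, the linear program that Minimax-Q solves at each backup can be written in a few lines. A minimal sketch in Python using scipy (the function name, Q-matrix layout, and the matching-pennies example are our assumptions for illustration, not code from the paper):

    import numpy as np
    from scipy.optimize import linprog

    def minimax_value(Q):
        """Value of a zero-sum stage game:
        max over mixed strategies pi of min over opponent actions o
        of sum_a pi[a] * Q[a, o].
        Q is an (n_actions, n_opponent_actions) payoff matrix
        (hypothetical layout)."""
        n_a, n_o = Q.shape
        # Variables: pi[0..n_a-1] and the game value v.
        # linprog minimizes, so minimize -v to maximize v.
        c = np.zeros(n_a + 1)
        c[-1] = -1.0
        # For each opponent action o: v - sum_a pi[a] * Q[a, o] <= 0
        A_ub = np.hstack([-Q.T, np.ones((n_o, 1))])
        b_ub = np.zeros(n_o)
        # Probabilities sum to 1 (v excluded from the constraint).
        A_eq = np.ones((1, n_a + 1))
        A_eq[0, -1] = 0.0
        b_eq = np.array([1.0])
        bounds = [(0.0, 1.0)] * n_a + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        return res.x[-1], res.x[:n_a]

    # Matching pennies: value 0 under the uniform mixed strategy.
    v, pi = minimax_value(np.array([[1.0, -1.0], [-1.0, 1.0]]))

The Q-learning backup then uses this value in place of the usual max, e.g. Q[s, a, o] += alpha * (r + gamma * minimax_value(Q_next)[0] - Q[s, a, o]); Nash-Q replaces the maxmin LP with a Nash equilibrium computation over all players' Q tables.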
Table 6: Simulation results of running the explicit-memory version of the algorithm on a 5-player, 10-action random game with six Nash equilibria
"... In PAGE 4: ...0, we allowed the algorithm to take non-improving moves, which helped it move into new territory when a best response was on the tabu list. Table 6 shows similar data collected for a random game with five players, ten actions, and six equilibria. Solution 49660 was by... ..."
Cited by 1
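The tabu-list idea in the excerpt above is easy to illustrate. A rough sketch under heavy assumptions (the game representation, function name, and random escape move are hypothetical; this is not the paper's explicit-memory algorithm):

    import numpy as np

    def tabu_best_response_search(payoffs, n_players, n_actions,
                                  steps=1000, tabu_len=20, seed=0):
        """Walk over joint pure-strategy profiles via best responses,
        keeping a tabu list of recently visited profiles; take a
        non-improving random move when the best response is tabu.
        payoffs[i] is a numpy array of shape (n_actions,)*n_players
        giving player i's payoff at each joint action (hypothetical)."""
        rng = np.random.default_rng(seed)
        profile = tuple(int(a) for a in rng.integers(n_actions, size=n_players))
        tabu = []
        for _ in range(steps):
            i = int(rng.integers(n_players))  # player chosen to deviate
            best = max(range(n_actions),
                       key=lambda a: payoffs[i][profile[:i] + (a,) + profile[i+1:]])
            candidate = profile[:i] + (best,) + profile[i+1:]
            if candidate in tabu:
                # Non-improving move: random action to escape the tabu region.
                candidate = profile[:i] + (int(rng.integers(n_actions)),) + profile[i+1:]
            tabu.append(profile)
            tabu = tabu[-tabu_len:]
            profile = candidate
            # A profile that is every player's own best response is a pure Nash equilibrium.
            if all(profile[j] == max(range(n_actions),
                                     key=lambda a: payoffs[j][profile[:j] + (a,) + profile[j+1:]])
                   for j in range(n_players)):
                return profile
        return None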
Table 5: A Nash Problem
1995
Cited by 100
Table 2: Nash Equilibria of the Games
2004
"... In PAGE 9: ... Eight two-person games were used. Table 1 gives a listing of the games, and Table2 provides a listing of the set of all Nash Equilibria for each game. We used only the experiments corresponding to full information repeated games.... ..."
Table 7: Nash and Sofer problems.
1998
"... In PAGE 31: ... Furthermore, the starting point x0 is de ned as follows x0 i = 8 gt; gt; gt; gt; lt; gt; gt; gt; gt; : li + 0:5 if y0 i lt; li li + 10?4 if y0 i = li y0 i if li lt; y0 i lt; ui ui ? 10?4 if y0 i = ui ui ? 0:5 if y0 i gt; ui: ; i = 1; : : :; n Twenty-two problems are proposed in [33], but we have only considered the fourteen that are not dense. The main characteristics of the problems are given in Table7 , where we report the information given in [32] together with the number p of Hessian-vector products needed to evaluate the Hessian. In Table 8 we report the ASNM results on these problems.... ..."
Cited by 4
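The starting-point rule quoted above is straightforward to implement. A minimal sketch in Python (the function name and vectorized layout are our assumptions; the constants 0.5 and 10^-4 come from the excerpt):

    import numpy as np

    def starting_point(y0, l, u, margin=0.5, eps=1e-4):
        """Move an initial guess y0 into the strict interior of the box [l, u]:
        components exactly on a bound are shifted eps inside, components
        outside a bound are shifted `margin` inside; interior components
        are left unchanged. Exact equality follows the rule as stated."""
        x0 = np.where(y0 < l, l + margin, y0).astype(float)
        x0 = np.where(y0 == l, l + eps, x0)
        x0 = np.where(y0 == u, u - eps, x0)
        x0 = np.where(y0 > u, u - margin, x0)
        return x0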
Table 8.6. Distances to Nash of Imbalanced Power Agreements. Balanced versus Distance to Nash t-test