### Table 5. Difference between the true Hilbert series and the anticipated Hilbert series for generic ideals generated by two quadratic forms

2002

"... In PAGE 10: ... In Table 5 we tabulate the difference between the true Hilbert series and (30).

| n    | 2 | 3 | 4 | 5   | 6 | 7   | 8   | 9   | 10    | 11        | 12    | 13          |
|------|---|---|---|-----|---|-----|-----|-----|-------|-----------|-------|-------------|
| Diff | 0 | 0 | 0 | t^3 | 0 | t^4 | t^4 | t^5 | 10t^5 | t^6 + t^5 | 64t^6 | t^7 + 13t^6 |

Table 5. Difference between the true Hilbert series and the anticipated Hilbert series for generic ideals generated by two quadratic forms. Appendix A. ... ..."
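The "Diff" row records the true series minus the anticipated series (30), written as a polynomial in t. The series themselves are not reproduced in the snippet, but the tabulation step is mechanical; a small illustrative helper (not code from the paper) that turns two coefficient lists into an entry of this table might look like:

```python
def series_diff(true_coeffs, anticipated_coeffs):
    """Return the difference of two Hilbert series (given as coefficient
    lists, index = degree) as a polynomial string such as 't^7 + 13t^6'.
    Returns '0' when the series agree on the tabulated range."""
    terms = []
    # Walk degrees from highest to lowest so the string reads like the table.
    for deg in range(max(len(true_coeffs), len(anticipated_coeffs)) - 1, -1, -1):
        a = true_coeffs[deg] if deg < len(true_coeffs) else 0
        b = anticipated_coeffs[deg] if deg < len(anticipated_coeffs) else 0
        c = a - b
        if c:
            coeff = "" if c == 1 else str(c)
            terms.append(f"{coeff}t^{deg}")
    return " + ".join(terms) if terms else "0"
```

For instance, a true series whose degree-3 coefficient exceeds the anticipated one by 1 yields `"t^3"`, matching the n = 5 entry.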

### Table 1. Hilbert Curve State Diagram for Mapping from One Dimension

"... In PAGE 5: ... This state diagram is then expressed in tabular form in Tables 1 and 2. Table 1 contains a row for each state and is used in the mapping from one to n dimensions. Within a row, points, expressed as n-points, are ordered by sequence number, or derived-key value.... In PAGE 14: ... Our rules result in a number of states given by n·2^(n-1). According to Bially, this is the minimum possible. The size of a single state is determined by the number of derived-key/next-state pairs in Table 1, for example, and is given by the expression 2^n. The complexity of the mapping algorithm given in Algorithm 1 can be seen to be O(kn), and this is the same as for the algorithm given by Butz, which relies on calculation alone, although the latter includes a higher constant factor.... ..."
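The paper's method is table-driven and n-dimensional; for intuition, the same derived-key-to-point mapping can be shown in two dimensions with the well-known bit-manipulation algorithm (an alternative to the state-table approach, not the paper's implementation):

```python
def d2xy(order, d):
    """Map a derived-key d to an (x, y) point on a 2-D Hilbert curve
    covering a 2^order x 2^order grid. Iterative bit-manipulation
    version; each loop level resolves one quadrant of the curve."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:            # rotate/reflect the quadrant as needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Consecutive derived keys map to grid-adjacent points, which is the locality property that motivates using Hilbert curves for multidimensional indexing.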

### Table 7. Reduced-Form Employment and Wage Equations with Lagged Value Added per Worker, Ordinary Least Squares

"... In PAGE 19: ... This equation can be interpreted as a reduced-form for wages that takes into account productivity changes. Results are shown in Table 7. As far as the employment and hours equations are concerned, adding value added per worker to the equation does not greatly alter the estimated coefficients for the protection variables.... ..."
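A reduced-form equation of this kind is estimated by ordinary least squares on employment (or wages) against the protection variables and lagged value added per worker. The sketch below is purely illustrative: the data are synthetic and the variable names are assumptions, not the paper's dataset or specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
protection = rng.integers(0, 2, n).astype(float)   # illustrative protection dummy
lag_vapw = rng.normal(10.0, 1.0, n)                # lagged value added per worker
# Synthetic "true" reduced form: log employment with i.i.d. noise.
log_emp = 0.5 - 0.2 * protection + 0.3 * lag_vapw + rng.normal(0.0, 0.1, n)

# OLS with regressors [constant, protection, lagged value added per worker].
X = np.column_stack([np.ones(n), protection, lag_vapw])
beta, *_ = np.linalg.lstsq(X, log_emp, rcond=None)
```

Adding the productivity regressor changes the protection coefficient only to the extent that the two are correlated, which is the comparison the text is making across Table 7 and the earlier tables.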

### Table 3: Coefficients d(n) of the modular form with N=5 and k=12

### Table 3: Comparison of gamma saddlepoint with ordinary saddlepoint and corrected saddlepoint approximations

"... In PAGE 15: ... When the gamma saddlepoint approximation out-performs the ordinary saddlepoint approximation, it can also out-perform even the corrected saddlepoint approximation. For example, Table 3 shows an example transition rate sequence where the gamma saddlepoint approximation out-performs both the saddlepoint approximation and the corrected saddlepoint approximation over n = 0 to n = 7. Generally speaking, however, computations suggest that the corrected normal approximation is more accurate for a range of varying n-dependent forms and the in-... ..."

### Table 5. Alternatives for setting up a Hilbert matrix (hilbert1.c, hilbert2.c)
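The two C alternatives are not shown in the source, but a Hilbert matrix has entries H[i][j] = 1/(i + j + 1), and the usual alternatives are an entry-by-entry loop versus a single vectorized expression. A hedged sketch of both (in Python rather than C, for brevity):

```python
import numpy as np

def hilbert_loops(n):
    """Entry-by-entry construction: H[i][j] = 1/(i + j + 1)."""
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            H[i, j] = 1.0 / (i + j + 1)
    return H

def hilbert_vectorized(n):
    """The same matrix built in one broadcasting expression."""
    idx = np.arange(n)
    return 1.0 / (idx[:, None] + idx[None, :] + 1)
```

Both produce the same notoriously ill-conditioned matrix; the difference is purely in memory-access pattern and code shape, which is presumably what the two C files contrast.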

### Table 1: Ordinary TD(λ) for linearly approximating the undiscounted value function of a fixed proper policy.

1999

"... In PAGE 2: ...Table 1: Ordinary TD(λ) for linearly approximating the undiscounted value function of a fixed proper policy. To what weights does TD(λ) converge? Examining the update rule for β in Table 1, it is not difficult to see that the coefficient changes made by TD(λ) after an observed trajectory (x_0, x_1, ..., x_L, END) have the form β := β + γ_n(d + Cβ + ω), where

d = E[ Σ_{i=0}^{L} z_i R_i ],   C = E[ Σ_{i=0}^{L} z_i (φ(x_{i+1}) − φ(x_i))^T ],   (1)

and ω = zero-mean noise. The expectations are taken with respect to the distribution of trajectories through the Markov chain.... In PAGE 3: ... /* Use SVD. */ Table 2: A least-squares version of TD(λ) (compare Table 1). Note that A has dimension K×K, and b, β, z, and φ(x) all have dimension K×1.... ..."

Cited by 35
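The least-squares variant referenced as Table 2 accumulates the statistics behind d and C directly instead of stepping β, then solves the linear system once. A minimal sketch of that idea (the chain and the one-hot features below are illustrative, not the paper's experiments):

```python
import numpy as np

def lstd(trajectories, phi, K, lam=0.9):
    """Least-squares TD(lambda): accumulate A = sum z (phi(x) - phi(x'))^T
    and b = sum z R over observed transitions, then solve A beta = b.
    Each trajectory is a list of (x, R, x_next) with x_next = None at END."""
    A = np.zeros((K, K))
    b = np.zeros(K)
    for traj in trajectories:
        z = np.zeros(K)                        # eligibility trace
        for x, R, x_next in traj:
            z = lam * z + phi(x)
            phi_next = phi(x_next) if x_next is not None else np.zeros(K)
            A += np.outer(z, phi(x) - phi_next)
            b += z * R
    return np.linalg.solve(A, b)

# Deterministic chain 0 -> 1 -> 2 -> END with reward -1 per step:
N = 3
phi = lambda s: np.eye(N)[s]                   # one-hot features
traj = [(s, -1.0, s + 1 if s + 1 < N else None) for s in range(N)]
beta = lstd([traj], phi, K=N)                  # recovers V = (-3, -2, -1)
```

With one-hot features on this deterministic chain, the solved weights equal the true undiscounted values V(s) = −(N − s) for any λ, which is the fixed point d + Cβ = 0 characterized by equation (1).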


### Table 3: Modularity

2007

"... In PAGE 4: ... A modularity of 0 means that there is no natural way to subdivide the network into groups, and a high modularity means that one can easily subdivide the network (maximum modularity is 1). As is shown in Table 3, for all three blog networks, we find relatively high modularity, but it is highest for DFW, which is the sparsest and most easily broken up network. In contrast, Kuwait and UAE, while displaying a degree of local interaction between subgroups of blogs, have a tighter cohesion ... ..."

Cited by 5
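The quantity being tabulated is the standard Newman-Girvan modularity Q = Σ_c (e_cc − a_c²), where e_cc is the fraction of edges inside community c and a_c the fraction of edge endpoints in c; Q = 0 for a single undivided community. A short illustrative implementation (the paper's exact variant and its blog data are not shown in the snippet):

```python
def modularity(edges, community):
    """Newman-Girvan modularity Q = sum_c (e_cc - a_c^2) for an
    undirected graph. edges: list of (u, v); community: node -> group."""
    m = len(edges)
    groups = set(community.values())
    e = {c: 0.0 for c in groups}   # fraction of edges inside group c
    a = {c: 0.0 for c in groups}   # fraction of edge endpoints in group c
    for u, v in edges:
        cu, cv = community[u], community[v]
        if cu == cv:
            e[cu] += 1.0 / m
        a[cu] += 0.5 / m
        a[cv] += 0.5 / m
    return sum(e[c] - a[c] ** 2 for c in groups)
```

For example, two triangles joined by a single bridge edge score Q = 6/7 − 1/2 ≈ 0.36 under the natural two-group split, while lumping every node into one group gives Q = 0, matching the interpretation in the quoted passage.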