### Table 2.1: Representation of non-contiguous subsets of the time axis in the dense-order constraint theory

2001

### Table 6: Results for the test problem CBRATU3D on the ALLIANT FX/80. As we have already observed for this very well conditioned problem, the number of iterations does not decrease significantly with the number of partitions. Thus, the initial decomposition with dense kernels remains the best compromise. For such a problem, we can use a large number of partitions, but the last two lines of the table, which compare using the initial p elements with a sparse or dense representation, show that our code would benefit from switching to a dense-storage mode for elements with many nonzeros. We now consider a problem with the same structure, but with the numerical values modified so that convergence is harder. The eigenvalues in each element are randomly chosen in
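The switch-over the excerpt describes, using dense storage for elements with many nonzeros, can be sketched as a simple fill-ratio test. This is a minimal illustration, not the paper's code; the 0.5 threshold is a hypothetical choice.

```java
// Sketch: decide dense vs. sparse storage per element block by its
// nonzero fraction. Threshold 0.5 is illustrative, not from the paper.
public class ElementStorage {
    // Returns true when a dense array is likely the better storage
    // mode for a block with the given shape and nonzero count.
    static boolean preferDense(int rows, int cols, int nonzeros) {
        double fill = (double) nonzeros / ((double) rows * cols);
        return fill > 0.5; // hypothetical switch-over point
    }

    public static void main(String[] args) {
        System.out.println(preferDense(10, 10, 80)); // mostly full  -> true
        System.out.println(preferDense(10, 10, 5));  // mostly empty -> false
    }
}
```

In a real solver the threshold would be tuned to the cost model of the dense and sparse kernels rather than fixed at 0.5.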

### Table 3. Abstract counters used to represent TVS at different execution points of the iterative procedure. For MA we sample every k = 2000 structures, for CA k = 5000, and for GC k = 10, 000. D, S, B, O, and F refer to the Dense, Sparse, Base, OBDD, and Functional representations respectively.

2002

"... In PAGE 14: ...

| Benchmark | Struct. | Dense | Sparse | Base | OBDD | Func. |
|-----------|---------|-------|--------|------|------|-------|
| CA | 40,000 | 7,473,737 | 11,053,352 | 22,516,937 | 2,302,140 | 1,749,384 |
| GC | 189,772 | 9,769,618 | 32,722,016 | 41,835,001 | 4,268,780 | 7,288,032 |
| JFE | 10,424 | 49,172,345 | 382,156 | 1,201,570 | 181,940 | 300,336 |
| Kernel | 6,079 | 64,768,292 | 799,604 | 2,292,436 | 420,520 | 315,168 |
| MA | 20,000 | 18,866,654 | 8,170,412 | 17,077,413 | 496,960 | 724,152 |

We computed the instrumentation-based space usage estimate periodically. Table 3 shows that as the analysis proceeds, the average size of the structure increases, as measured by both the dense metric and the sparse metric. This is consistent with our expectations, since the state space exploration typically starts examining more complex structures, with more individuals, as time proceeds.... ..."

Cited by 14

### Table 5: Enabling functions and transition function for variable x5. 9 Experimental Results The efficiency of the proposed encoding technique will be measured in terms of the BDD node count reduction to represent the reachability set of the PNs, and the speed-up for that computation. Two experimental scenarios will be analyzed. First, the number of variables, BDD sizes, and CPU times are compared between the conventional sparse encoding and the proposed dense encoding schemes. Second, the improvements achieved by using the denser code representation offered by ZDDs (as proposed by Yoneda et al. [27]) are compared against the dense encoding scheme. CPU times have been obtained by executing the algorithms using the BDD library developed by David Long [19] on a Sun SPARC 20 workstation (128MB).

1998

"... In PAGE 27: ... Therefore, two binary codes are assigned to the same marking. Finally, Table 5 presents the enabling functions for all transitions of the PN, and the transition functions for variable x5 according to the definitions given by Equations (9) and (10).... ..."

Cited by 11
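The sparse/dense contrast in the excerpt above comes down to how many boolean variables encode a place's marking. A rough variable-count sketch, under the common assumption that sparse means one variable per reachable value and dense means a binary code (these helpers are illustrative, not the paper's actual encodings):

```java
// Illustrative variable counts for encoding a place that can take
// `values` distinct markings. Not the paper's encoding functions.
public class Encoding {
    // Sparse (one-variable-per-value) encoding: one boolean per value.
    static int sparseVars(int values) {
        return values;
    }

    // Dense (binary) encoding: ceil(log2(values)) bits.
    static int denseVars(int values) {
        return 32 - Integer.numberOfLeadingZeros(values - 1);
    }

    public static void main(String[] args) {
        // A place with 8 reachable values: 8 variables vs. 3 bits.
        System.out.println(sparseVars(8)); // prints 8
        System.out.println(denseVars(8));  // prints 3
    }
}
```

Fewer BDD variables do not automatically mean fewer BDD nodes, which is why the excerpt measures node counts and CPU times rather than variable counts alone.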

### Table 1: The Domains for Distribution/Compression Scheme Selections non-zeros among processors, as long as it achieves overall balanced computation and communication. Likewise, (*, Block) partitions the matrix by column, while (Block, Block) partitions the matrix both by row and by column. To give an idea of what the generic matrix class looks like, we show part of its code below. public class Matrix {

"... In PAGE 2: ... The compression schemes being considered are CRS, CCS, and dense representations. Table1 lists the compression and distribution schemes for distributed sparse matrices. Note that by (Block, *) scheme, we mean distributing the sparse matrix by row.... ..."
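The CRS compression scheme mentioned in the excerpt stores only the nonzeros of each row plus their column positions. A self-contained sketch of the three standard CRS arrays; the class and field names here are illustrative, not the paper's generic `Matrix` class:

```java
import java.util.Arrays;

// Minimal CRS (Compressed Row Storage) sketch: values, column indices,
// and row pointers. Illustrative only, not the paper's Matrix class.
public class CrsMatrix {
    final double[] values;   // nonzero values, stored row by row
    final int[] colIndex;    // column of each stored value
    final int[] rowPtr;      // rowPtr[i]..rowPtr[i+1] spans row i

    CrsMatrix(double[][] dense) {
        int n = 0;
        for (double[] row : dense)
            for (double v : row) if (v != 0.0) n++;
        values = new double[n];
        colIndex = new int[n];
        rowPtr = new int[dense.length + 1];
        int k = 0;
        for (int i = 0; i < dense.length; i++) {
            rowPtr[i] = k;
            for (int j = 0; j < dense[i].length; j++) {
                if (dense[i][j] != 0.0) {
                    values[k] = dense[i][j];
                    colIndex[k] = j;
                    k++;
                }
            }
        }
        rowPtr[dense.length] = k;
    }

    public static void main(String[] args) {
        CrsMatrix m = new CrsMatrix(new double[][]{{0, 0, 5}, {3, 0, 0}});
        System.out.println(Arrays.toString(m.values));   // [5.0, 3.0]
        System.out.println(Arrays.toString(m.colIndex)); // [2, 0]
        System.out.println(Arrays.toString(m.rowPtr));   // [0, 1, 2]
    }
}
```

CCS is the same idea transposed (column-major), which is why a (Block, *) row distribution pairs naturally with CRS and a (*, Block) column distribution with CCS.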

### Table 3.2: Space counters for different representations. These counters indicate the number of bytes required to represent the structures and are computed as explained in Appendix C. The Struct. column denotes the total number of structures produced by the analysis. Benchmark Struct. Dense Sparse Base OBDD Func.

### Table 2. Compression and distribution schemes for two-dimensional arrays

2001

"... In PAGE 5: ... The compression schemes being considered are Compressed Row Storage (CRS), Compressed Column Storage (CCS), and dense representations. Table 2 lists the compression and distribution schemes for sparse matrices. Note that by the (*, Block) scheme, we mean distributing the sparse matrix by row in a two-dimensional array.... ..."

Cited by 3

### Table 2: Compression/Distribution Schemes for Two-dimensional Arrays

"... In PAGE 4: ... The compression schemes being considered are Compressed Row Storage (CRS), Compressed Column Storage (CCS), and dense representations. Table 2 lists the compression and distribution schemes for sparse matrices. Note that by the (Block, *) scheme, we mean distributing the sparse matrix by row in a two-dimensional array.... ..."