### Table 3. Effects of Wetland Storage in Reducing Peak Flows During Flood Events in the

"... In PAGE 26: ... diversions so that they would occur nearer the hydrograph peak only slightly decreased the peak flow. Table 3 summarizes reduction in peak flows and stages for 3 precipitation flood events under a baseline condition (no wetland storage) and wetland storage of 2,700 AF (1 foot bounce), 5,400 AF (2 foot bounce), and 10,800 AF (4 foot bounce). Reductions in peak flood stages increased (albeit at a nonproportional and diminishing rate) as wetland bounce increased, and decreased as flood events became larger.... In PAGE 31: ... Simple wetland restoration/construction costs for plugging a wetland drain with an earthen spillway are estimated to be $300 per acre for small-size wetlands (< 1 acre), $200 per acre for medium-size wetlands (1-5 acres), and $100 per acre for large-size wetlands (> 5 acres). Using wetland size classes for the two watersheds summarized in Table 3, total simple wetland restoration/construction costs for the upper Maple Watershed are $520,000 and for the WRW the cost is $1.8 million.... In PAGE 37: ... - Hydrological relationships from a study in the Maple Watershed (Bengtson and Padmanabahn, 1999). - From Table 3, storage with 1 foot of bounce reduces peak flood stage by 3.... In PAGE 47: ... - Hydrological relationships from a study in the Maple Watershed (Bengtson and Padmanabahn, 1999). - From Table 3, storage with 4 feet of bounce reduces peak flood stage by 8.5% (high-frequency floods), 4.... ..."

### Table 4: Number of DAU faults for Problem C, for 16, 32, and 64 PEs. The data storage is 11 Mbytes per node. To understand the differences among the three execution modes, a monitoring tool (PAT) was used that reports the time for node communication, system calls, and the wait time. This showed that in the case of the DN scheme, typically 95% or more of the elapsed time is spent by each node on computation, as opposed to about 60% for the DD. As expected, the DN mode leads to an increase in the number of DAU faults, caused by the fact that the amount of memory available for dynamic storage is reduced by the size of the static storage. For example, Table 4 presents the minimum, the maximum, and the average number of DAU faults per node for Problem C. The effect of the DAU size: all the measurements reported above use a DAU size B = 16 × 16 × 16 = 4096 grid points. The results presented in Table 5 are for Problem C running on 32 nodes on the Gamma system, with a replication factor of 4, and with 11 Mbytes of data storage.
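Table 4's minimum/maximum/average per-node fault counts are a straightforward reduction over raw per-node counters; a minimal sketch (the fault counts below are illustrative placeholders, not the paper's data):

```python
# Hypothetical per-node DAU fault counts for one run (illustrative values,
# not taken from the paper's Table 4).
faults_per_node = [112, 98, 131, 120, 105, 127, 99, 118]

def summarize(counts):
    """Return (min, max, average) over per-node fault counts, as Table 4 reports."""
    return min(counts), max(counts), sum(counts) / len(counts)

lo, hi, avg = summarize(faults_per_node)
print(f"min={lo} max={hi} avg={avg:.1f}")
```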

### Table 1: RMS error percentages between the true solution and previous forward solutions for six levels of mesh refinement. ... temporal demands of inverting and storing large sub-matrices on a single CPU workstation. As a result, we are currently pursuing several techniques to improve our sparse matrix storage and manipulation routines. We employed a sparse storage structure consistent with that used in Numerical Recipes in C [17] in order to use their LU decomposition and back-substitution code, as well as their SVD code. This storage technique required an N × K matrix, where K was equal to the full bandwidth of the matrix. While we were able to significantly reduce this number via a Reverse Cuthill-McKee [18] node reordering, K was generally > 10% of N. We plan to rewrite this code using a more sparse structure (e.g., the Yale sparse storage format), thereby further decreasing our storage costs. The number of volume nodes was also limited temporally by the time required to decompose AVV into its L and U components. An improved sparse matrix structure will significantly reduce this expense, as will employing a parallel block LU decomposition algorithm. Our other matrix operations, though a lesser percentage of our total execution time, were significant as well. We intend to improve this performance through distributed matrix multiplication and singular value decomposition algorithms.
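The storage trade-off this snippet describes (an N × K full-bandwidth banded store versus a Yale/CSR-style one-value-per-nonzero store) can be sketched as follows; the matrix, sizes, and helper names are illustrative, not from the paper:

```python
# Minimal sketch contrasting the two storage footprints discussed above:
# a full-bandwidth banded store (N x K values, K = full bandwidth) versus
# a compressed sparse row / "Yale" store (one value per nonzero, ignoring
# the small index arrays).

def half_bandwidth(rows):
    """Max |i - j| over nonzero entries (i, j); rows is {i: {j: value}}."""
    return max(abs(i - j) for i, cols in rows.items() for j in cols)

def banded_values(n, rows):
    """Entries stored by an N x K banded scheme, K = full bandwidth."""
    k = 2 * half_bandwidth(rows) + 1
    return n * k

def csr_values(rows):
    """Entries stored by a CSR/Yale scheme: one per nonzero."""
    return sum(len(cols) for cols in rows.values())

# A 6x6 matrix: tridiagonal plus one far-off-diagonal entry that inflates
# the bandwidth -- the situation a node reordering like Reverse
# Cuthill-McKee tries to avoid.
A = {i: {j: 1.0 for j in (i - 1, i, i + 1) if 0 <= j < 6} for i in range(6)}
A[0][5] = 1.0  # single entry at (0, 5) forces half-bandwidth 5

print(banded_values(6, A), csr_values(A))  # prints: 66 17
```

The banded layout pays for every entry inside the band, zero or not, which is why K > 10% of N remains costly even after reordering.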

1995

"... In PAGE 7: ... The vast majority of the computational time (over 80%) was spent in the LU decomposition of the AVV sub-matrix. The RMS errors contained in Table 1 indicate that the forward solution is converging to the true solution with increased mesh refinement. Furthermore, we observe in Table 2 that the inverse solution at each level of refinement varies from the forward solution by a constant of approximately 45%.... ..."
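A minimal sketch of an RMS error percentage of the kind Table 1 reports, assuming the common definition (RMS of the difference normalized by the RMS of the true solution, in percent; the paper's exact formula is not given in this snippet):

```python
import math

def rms_error_pct(computed, true):
    """RMS of (computed - true), as a percentage of the RMS of true."""
    num = math.sqrt(sum((c - t) ** 2 for c, t in zip(computed, true)))
    den = math.sqrt(sum(t ** 2 for t in true))
    return 100.0 * num / den

# Toy vectors standing in for nodal solution values.
print(rms_error_pct([1.1, 1.9, 3.2], [1.0, 2.0, 3.0]))
```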

Cited by 3

### Table 3. Simulation Results of Integrating Two Simple Heuristics

in Simple and Integrated Heuristic Algorithms for Scheduling Tasks with Time and Resource Constraints

1987

Cited by 21

### Table 1: Performance bounds for zero propagation delay algorithms (columns: class of scheduling algorithms, range of throughput, and ‖N‖ bounds under properties P3, P2, and P1)

1997

"... In PAGE 13: ... 3 For > 12, S 6, and n 3, no scheduling algorithm in the class CONTINUOUS STATIC has any property P1-P4. Table 1 summarizes the throughput and delay characteristics of the scheduling algorithms presented in this and the previous section. The last three columns list the upper bounds for ‖N‖,... ..."

Cited by 45

### Table 9: Number of floating point operations, compensator computations (columns: full order, reduced + comp., reduced order, N)

2000

"... In PAGE 25: ... same storage (if M=P as we have done here) and the same number of flops. In Table 9, we show the number of flops to compute the full order compensator, the number to compute the reduced order compensator including the grid selection algorithm, and the number required to compute the reduced order compensator if the grid is already known. In practice, the grid selection algorithm would be performed once.... ..."

Cited by 7

### Table 7. Number of floating point operations, compensator computations.

"... In PAGE 22: ... We consider only Method II, as Method I requires the same storage (if M=P as we have done here) and the same number of flops. In Table 7, we show the number of flops to compute the full order compensator, the number to compute the reduced order compensator including the grid selection algorithm, and the number required to compute the reduced order compensator if the grid is already known. In practice, the grid selection algorithm would be performed once.... ..."