


Static scheduling of synchronous data flow programs for digital signal processing (1987)

by E. A. Lee and D. G. Messerschmitt
Venue: IEEE Trans. Computers
Results 1 - 10 of 598

Synchronous data flow

by Edward A. Lee, et al., 1987
Cited by 622 (45 self)
Data flow is a natural paradigm for describing DSP applications for concurrent implementation on parallel hardware. Data flow programs for signal processing are directed graphs where each node represents a function and each arc represents a signal path. Synchronous data flow (SDF) is a special case of data flow (either atomic or large grain) in which the number of data samples produced or consumed by each node on each invocation is specified a priori. Nodes can be scheduled statically (at compile time) onto single or parallel programmable processors so the run-time overhead usually associated with data flow evaporates. Multiple sample rates within the same system are easily and naturally handled. Conditions for correctness of SDF graph are explained and scheduling algorithms are described for homogeneous parallel processors sharing memory. A preliminary SDF software system for automatically generating assembly language code for DSP microcomputers is described. Two new efficiency techniques are introduced, static buffering and an extension to SDF to efficiently implement conditionals.
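The scheduling property described in this abstract rests on the SDF balance equations: for every arc, the producer's firing count times its production rate must equal the consumer's firing count times its consumption rate. A minimal sketch of solving those equations, with a graph representation and function name of my own invention (not from the paper):

```python
from fractions import Fraction
from math import lcm

def repetition_vector(arcs):
    """Solve the SDF balance equations.  arcs is a list of
    (src, dst, produced, consumed); for every arc we require
    reps[src] * produced == reps[dst] * consumed.  Returns the
    smallest positive integer firing counts, or raises ValueError
    if the sample rates are inconsistent."""
    nodes = {n for src, dst, _, _ in arcs for n in (src, dst)}
    adj = {n: [] for n in nodes}
    for src, dst, p, c in arcs:
        adj[src].append((dst, Fraction(p, c)))  # reps[dst] = reps[src] * p/c
        adj[dst].append((src, Fraction(c, p)))
    rates = {}
    for start in nodes:                 # one pass per connected component
        if start in rates:
            continue
        rates[start] = Fraction(1)
        stack = [start]
        while stack:
            u = stack.pop()
            for v, ratio in adj[u]:
                r = rates[u] * ratio
                if v in rates:
                    if rates[v] != r:
                        raise ValueError("inconsistent sample rates")
                else:
                    rates[v] = r
                    stack.append(v)
    # Scale the rational solution to the smallest integer vector.
    scale = lcm(*(f.denominator for f in rates.values()))
    return {n: int(f * scale) for n, f in rates.items()}

# A produces 3 tokens per firing on an arc from which B consumes 2:
# the smallest balanced schedule fires A twice and B three times.
assert repetition_vector([("A", "B", 3, 2)]) == {"A": 2, "B": 3}
```

An inconsistent graph (one whose balance equations admit only the zero solution) corresponds to the sample-rate mismatch that the paper's correctness conditions rule out.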

The synchronous approach to reactive and real-time systems

by Albert Benveniste, Gerard Berry - Proceedings of the IEEE , 1991
Cited by 436 (15 self)
This special issue is devoted to the synchronous approach to reactive and real-time programming. This introductory paper presents and discusses the application fields and the principles of synchronous programming. The major concern of the synchronous approach is to base synchronous programming languages on mathematical models. This makes it possible to handle compilation, logical correctness proofs, and verifications of real-time programs in a formal way, leading to a clean and precise methodology for design and programming. 1. INTRODUCTION: REAL-TIME AND REACTIVE SYSTEMS It is commonly accepted to call real-time a program or system that receives external interrupts or reads sensors connected to the physical world and outputs commands to it. Real-time programming is an essential industrial activ-

Hierarchical Finite State Machines with Multiple Concurrency Models

by Alain Girault, Bilung Lee, Edward A. Lee - IEEE Transactions on Computer-aided Design of Integrated Circuits and Systems , 1999
Cited by 146 (43 self)
This paper studies the semantics of hierarchical finite state machines (FSM's) that are composed using various concurrency models, particularly dataflow, discrete-events, and synchronous/reactive modeling. It is argued that all three combinations are useful, and that the concurrency model can be selected independently of the decision to use hierarchical FSM's. In contrast, most formalisms that combine FSM's with concurrency models, such as Statecharts (and its variants) and hybrid systems, tightly integrate the FSM semantics with the concurrency semantics. An implementation that supports three combinations is described.

Citation Context

... (SDL) combines process networks with FSM’s [4]. The codesign finite state machine (CFSM) model [16] combines FSM’s with a discrete-event (DE) concurrency model. Pankert et al. combine synchronous DF [31] with FSM’s [40], [36]. Program-state machines (PSM) combine imperative semantics with FSM’s [39], [43]. Hybrid systems [1], [24] mix concurrent continuous-time systems (usually given as differential ...

Bounded Scheduling of Process Networks

by Thomas M. Parks, 1995
Cited by 137 (2 self)
Abstract not found

Citation Context

...ueue lengths to be bounded. Thus the questions of termination and boundedness are decidable for computation graphs, a restricted form of process network. 2.3 Synchronous Dataflow Synchronous dataflow [31, 32] is a special case of computation graphs where Tp = Wp for all arcs in the graph. Because the number of tokens consumed and produced by an actor is constant for each firing, we can statically construc...
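The static construction this snippet alludes to can be illustrated by simulated execution: repeatedly fire any actor whose input channels hold enough tokens until every actor has completed its repetition count, recording the firing order. A sketch under invented names and rates (assuming the repetition vector is already known):

```python
def static_schedule(arcs, reps, initial_tokens=None):
    """Build a single-processor SDF schedule by simulated execution.
    arcs: list of (src, dst, produced, consumed); reps: firing count
    per actor (the repetition vector); initial_tokens: optional delays
    on arcs, keyed by (src, dst).  Raises RuntimeError on deadlock."""
    tokens = {(s, d): (initial_tokens or {}).get((s, d), 0)
              for s, d, _, _ in arcs}
    remaining = dict(reps)
    schedule = []
    while any(remaining.values()):
        fired = False
        for actor in remaining:
            if remaining[actor] == 0:
                continue
            # Runnable only if every input arc holds enough tokens.
            if all(tokens[(s, d)] >= c for s, d, p, c in arcs if d == actor):
                for s, d, p, c in arcs:
                    if d == actor:
                        tokens[(s, d)] -= c
                    if s == actor:
                        tokens[(s, d)] += p
                remaining[actor] -= 1
                schedule.append(actor)
                fired = True
        if not fired:
            raise RuntimeError("deadlock: no actor is runnable")
    return schedule

# With A->B at rates 3:2 and repetitions {A: 2, B: 3}, one valid
# schedule interleaves B's firings as tokens become available.
print(static_schedule([("A", "B", 3, 2)], {"A": 2, "B": 3}))
```

If the loop ever makes no progress before all repetitions complete, the graph deadlocks (insufficient initial tokens on a cycle), which is one of the decidable correctness conditions mentioned above.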

Exploiting coarse-grained task, data, and pipeline parallelism in stream programs

by Michael I. Gordon, William Thies, Saman Amarasinghe - Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS-XII, 2006
Cited by 133 (6 self)
As multicore architectures enter the mainstream, there is a pressing demand for high-level programming models that can effectively map to them. Stream programming offers an attractive way to expose coarse-grained parallelism, as streaming applications (image, video, DSP, etc.) are naturally represented by independent filters that communicate over explicit data channels. In this paper, we demonstrate an end-to-end stream compiler that attains robust multicore performance in the face of varying application characteristics. As benchmarks exhibit different amounts of task, data, and pipeline parallelism, we exploit all types of parallelism in a unified manner in order to achieve this generality. Our compiler, which maps from the StreamIt language to the 16-core Raw architecture, attains an 11.2x mean speedup over a single-core baseline, and a 1.84x speedup over our previous work.

Citation Context

...re known at compile time. This enables the compiler to calculate a steady-state for the stream graph: a repetition of each filter that does not change the number of items buffered on any data channel [26, 19]. In combination with a simple program analysis that estimates the number of operations performed on each invocation of a given work function, the steady-state repetitions offer an estimate of the wor...

Advances in dataflow programming languages

by Wesley M. Johnston, J. R. Paul Hanna, Richard J. Millar - ACM COMPUT. SURV , 2004
Cited by 124 (0 self)
Many developments have taken place within dataflow programming languages in the past decade. In particular, there has been a great deal of activity and advancement in the field of dataflow visual programming languages. The motivation for this article is to review the content of these recent developments and how they came

Exploiting Task and Data Parallelism on a Multicomputer

by Jaspal Subhlok, James M. Stichnoth, David R. O'hallaron, Thomas Gross - In ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming , 1993
Cited by 103 (23 self)
For many applications, achieving good performance on a private memory parallel computer requires exploiting data parallelism as well as task parallelism. Depending on the size of the input data set and the number of nodes (i.e., processors), different tradeoffs between task and data parallelism are appropriate for a parallel system. Most existing compilers focus on only one of data parallelism and task parallelism. Therefore, to achieve the desired results, the programmer must separately program the data and task parallelism. We have taken a unified approach to exploiting both kinds of parallelism in a single framework with an existing language. This approach eases the task of programming and exposes the tradeoffs between data and task parallelism to the compiler. We have implemented a parallelizing Fortran compiler for the iWarp system based on this approach. We discuss the design of our compiler, and present performance results to validate our approach.

Citation Context

...by an FFT, thresholding, and other postprocessing operators [16]. Two widely used styles of parallelism for private memory multicomputers are data parallelism [9, 11, 20, 19, 10] and task parallelism [6, 13, 12, 14]. Data parallelism is typically expressed as a single thread of control operating on data sets distributed over all nodes. It is especially useful when the size of the data sets can be scaled to fit t...

Orchestrating the execution of stream programs on multicore platforms

by Manjunath Kudlur, Scott Mahlke - In Proc. of the SIGPLAN ’08 Conference on Programming Language Design and Implementation , 2008
Cited by 82 (9 self)
While multicore hardware has become ubiquitous, explicitly parallel programming models and compiler techniques for exploiting parallelism on these systems have noticeably lagged behind. Stream programming is one model that has wide applicability in the multimedia, graphics, and signal processing domains. Streaming models execute as a set of independent actors that explicitly communicate data through channels. This paper presents a compiler technique for planning and orchestrating the execution of streaming applications on multicore platforms. An integrated unfolding and partitioning step based on integer linear programming is presented that unfolds data parallel actors as needed and maximally packs actors onto cores. Next, the actors are assigned to pipeline stages in such a way that all communication is maximally overlapped with computation on the cores. To facilitate experimentation, a generalized code generation template for mapping the software pipeline onto the Cell architecture is presented. For a range of streaming applications, a geometric mean speedup of 14.7x is achieved on a 16-core Cell platform compared to a single core.

Citation Context

...t the parallelism expressed in stream graphs. Even though SDF is a powerful explicitly parallel programming model, its niche has been in the DSP domain for a long time. Early works from the Ptolemy group [17, 16, 15] have focused on expressing DSP algorithms as stream graphs. Some of their scheduling techniques [21, 9] have focused on scheduling stream graphs to multiprocessor systems. However, they focus on acycl...

Multiprocessor Scheduling to Account for Interprocessor Communication

by Gilbert Christopher Sih , 1991
Cited by 79 (11 self)
Interprocessor communication (IPC) overheads have emerged as the major performance limitation in parallel processing systems, due to the transmission delays, synchronization overheads, and conflicts for shared communication resources created by data exchange. Accounting for these overheads is essential for attaining efficient hardware utilization. This thesis introduces two new compile-time heuristics for scheduling precedence graphs onto multiprocessor architectures, which account for interprocessor communication overheads and interconnection constraints in the architecture. These algorithms perform scheduling and routing simultaneously to account for irregular interprocessor interconnections, and schedule all communications as well as all computations to eliminate shared resource contention. The first technique, called dynamic-level scheduling, modifies the classical HLFET list scheduling strategy to account for IPC and synchronization overheads. By using dynamically changing priorities to match nodes and processors at each step, this technique attains an equitable tradeoff between load balancing and interprocessor communication cost. This method is fast, flexible, widely targetable, and displays promising performance. The second technique, called declustering, establishes a parallelism hierarchy upon the precedence graph using graph-analysis techniques which explicitly address the tradeoff between exploiting parallelism and incurring communication cost. By systematically decomposing this hierarchy, the declustering process exposes parallelism instances in order of importance, assuring efficient use of the available processing resources. In contrast with traditional clustering schemes, this technique can adjust the level of cluster granularity to suit the characteristics of the specified architecture, leading to a more effective solution.
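The dynamic-level idea, as described, trades a node's static priority against an IPC-aware earliest start time on each candidate processor. A simplified sketch of that flavor of list scheduling (not Sih's exact algorithm; the task graph, cost model, and names here are invented for illustration):

```python
def static_levels(exec_time, succ):
    """Static level: longest sum of execution times from a task to a sink."""
    memo = {}
    def level(t):
        if t not in memo:
            memo[t] = exec_time[t] + max(
                (level(s) for s in succ.get(t, [])), default=0.0)
        return memo[t]
    for t in exec_time:
        level(t)
    return memo

def list_schedule(exec_time, succ, comm, n_procs):
    """Greedy list scheduling in the spirit of dynamic levels: at each
    step pick the ready (task, processor) pair maximizing
    static_level - earliest_start, where data produced on another
    processor pays the communication delay comm[(pred, task)]."""
    pred = {t: [] for t in exec_time}
    for t, ss in succ.items():
        for s in ss:
            pred[s].append(t)
    sl = static_levels(exec_time, succ)
    proc_free = [0.0] * n_procs
    placed = {}                        # task -> (proc, finish_time)
    schedule = []
    unscheduled = set(exec_time)
    while unscheduled:
        ready = [t for t in unscheduled
                 if all(q in placed for q in pred[t])]
        if not ready:
            raise RuntimeError("cycle in task graph")
        best = None
        for t in ready:
            for p in range(n_procs):
                arrival = max(
                    (placed[q][1]
                     + (comm.get((q, t), 0.0) if placed[q][0] != p else 0.0)
                     for q in pred[t]), default=0.0)
                start = max(proc_free[p], arrival)
                dl = sl[t] - start     # the "dynamic level" of (t, p)
                if best is None or dl > best[0]:
                    best = (dl, t, p, start)
        _, t, p, start = best
        placed[t] = (p, start + exec_time[t])
        proc_free[p] = placed[t][1]
        unscheduled.remove(t)
        schedule.append((t, p, start))
    return schedule
```

For example, on a two-predecessor join (`a` and `b` feeding `c` with heavy communication costs), the scheduler runs `a` and `b` in parallel and delays `c` until cross-processor data arrives; the real dynamic-level method additionally handles routing and resource contention, which this sketch omits.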

A formal approach to MpSoC performance verification

by Marek Jersak, Rolf Ernst, Kai Richter - IEEE Computer, 2003
Cited by 78 (9 self)
Communication-centric Multiprocessor Systems-on-Chip (MpSoC) will dominate future chip architectures. They will be built from heterogeneous HW and SW components integrated around a complex communication infrastructure. Already today, performance verification is a major challenge that simulation can hardly meet, and formal techniques have emerged as a serious alternative. The article presents a new technology that extends known approaches to real-time system analysis to heterogeneous MpSoC using event model interfaces and a novel event flow mechanism.

Biography: Kai Richter is a research staff member and a PhD candidate in the team of Rolf Ernst at the Technical University of Braunschweig, Germany. His research interests include real-time systems, performance analysis, and heterogeneous HW/SW platforms. He received a Diploma (Dipl.-Ing.) in electrical engineering from the University of Braunschweig, Germany. Contact him at

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University