Results 1–10 of 85
Exploring tradeoffs in buffer requirements and throughput constraints for synchronous dataflow graphs
 DESIGN AUTOMATION CONFERENCE, PROC. ACM
, 2006
Abstract

Cited by 50 (8 self)
Multimedia applications usually have throughput constraints. An implementation must meet these constraints, while it minimizes resource usage and energy consumption. The compute intensive kernels of these applications are often specified as Synchronous Dataflow Graphs. Communication between nodes in these graphs requires storage space which influences throughput. We present exact techniques to chart the Pareto space of throughput and storage tradeoffs, which can be used to determine the minimal storage space needed to execute a graph under a given throughput constraint. The feasibility of the approach is demonstrated with a number of examples.
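The tradeoff being charted can be illustrated with a minimal sketch. This is not the paper's exact technique (which explores the graph's state space); it only simulates self-timed execution of a hypothetical two-actor chain A -> B with invented execution times, modelling a bounded channel as back-pressure, and sweeps the buffer size to collect the buffer/throughput Pareto points.

```python
def throughput(buf, t_a=2.0, t_b=3.0, n=2000):
    """Steady-state throughput (firings of B per time unit) of a
    two-actor chain A -> B executed self-timed with a channel that
    can hold `buf` tokens.  Back-pressure: A's k-th firing must wait
    until B's (k - buf)-th firing has finished and freed a slot."""
    f_a = [0.0] * (n + 1)  # finish time of A's k-th firing
    f_b = [0.0] * (n + 1)  # finish time of B's k-th firing
    for k in range(1, n + 1):
        slot_free = f_b[k - buf] if k - buf >= 1 else 0.0
        f_a[k] = max(f_a[k - 1], slot_free) + t_a
        f_b[k] = max(f_a[k], f_b[k - 1]) + t_b
    return n / f_b[n]

# Sweep buffer sizes and keep only the Pareto points: a larger
# buffer survives only if it strictly improves throughput.
pareto, best = [], 0.0
for buf in range(1, 6):
    thr = throughput(buf)
    if thr > best:
        pareto.append((buf, thr))
        best = thr
print(pareto)  # buffers beyond size 2 no longer help for these times
```

With these execution times the pipeline is bound by B, so throughput saturates at 1/t_b once the buffer allows A and B to overlap.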
Throughput-Buffering Trade-Off Exploration for Cyclo-Static and Synchronous Dataflow Graphs
Abstract

Cited by 36 (11 self)
Multimedia applications usually have throughput constraints. An implementation must meet these constraints, while it minimizes resource usage and energy consumption. The compute intensive kernels of these applications are often specified as Cyclo-Static or Synchronous Dataflow Graphs. Communication between nodes in these graphs requires storage space which influences throughput. We present an exact technique to chart the Pareto space of throughput and storage tradeoffs, which can be used to determine the minimal buffer space needed to execute a graph under a given throughput constraint. The feasibility of the exact technique is demonstrated with experiments on a set of realistic DSP and multimedia applications. To increase the scalability of the approach, a fast approximation technique is developed that guarantees both the throughput and a tight bound on the maximal overestimation of buffer requirements. The approximation technique allows trading off worst-case overestimation against runtime.
Minimising buffer requirements of synchronous dataflow graphs with model checking
 IN PROCEEDINGS OF THE DESIGN AUTOMATION CONFERENCE
, 2005
Formal verification and simulation for performance analysis for probabilistic broadcast protocols
 In Proc. 5th Conf. on Ad-Hoc, Mobile, and Wireless Networks (ADHOC-NOW’06), volume 4104 of LNCS
, 2006
Abstract

Cited by 30 (5 self)
Abstract. This paper describes formal probabilistic models of flooding and gossiping protocols, and explores the influence of different modelling choices and assumptions on the results of performance analysis. We use Prism, a model checker for probabilistic systems, for the formal analysis of protocols and small network topologies, and in addition use Monte Carlo simulation, implemented in Matlab, to establish whether the results and effects found during formal analysis extend to larger networks. This combination of approaches has several advantages. The formal model has well-defined synchronization primitives with clear semantics for modelling synchronous and asynchronous communication between nodes. Model checking of the probabilistic model determines exact probabilities and performance bounds, results that cannot be obtained by simulation, even when the model is non-deterministic. The Monte Carlo simulation can then be used to study effects that only emerge in larger networks, such as phase transition.
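The simulation half of the approach can be sketched roughly as follows (the paper uses Matlab; this is an analogous stdlib-Python sketch, with the grid topology, parameters, and function names invented for illustration): a Monte Carlo estimate of the probability that probabilistic flooding delivers a message from a source to a target node.

```python
import random

def gossip_reach(adj, source, p, rng):
    """One run of probabilistic flooding: each node, on first
    receiving the message, forwards it to every neighbour
    independently with probability p.  Returns the nodes reached."""
    reached, frontier = {source}, [source]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in reached and rng.random() < p:
                    reached.add(v)
                    nxt.append(v)
        frontier = nxt
    return reached

def delivery_prob(adj, source, target, p, runs=10000, seed=1):
    """Monte Carlo estimate of P(target receives the message)."""
    rng = random.Random(seed)
    hits = sum(target in gossip_reach(adj, source, p, rng)
               for _ in range(runs))
    return hits / runs

def grid(n):
    """n x n grid topology, nodes numbered row by row."""
    adj = {i: [] for i in range(n * n)}
    for r in range(n):
        for c in range(n):
            u = r * n + c
            if c + 1 < n:
                adj[u].append(u + 1); adj[u + 1].append(u)
            if r + 1 < n:
                adj[u].append(u + n); adj[u + n].append(u)
    return adj

g = grid(3)
print(round(delivery_prob(g, 0, 8, p=0.8), 3))
```

Sweeping `p` over larger grids is where simulation shows the sharp phase transition in delivery probability that the abstract mentions, a regime beyond what exact model checking of small topologies can cover.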
From Verification to Implementation: A Model Translation Tool and a Pacemaker Case Study
 In 18th IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS)
, 2012
Abstract

Cited by 16 (7 self)
Model-Driven Design (MDD) of cyber-physical systems advocates design procedures that start with formal modeling of the real-time system, followed by the model’s verification at an early stage. The verified model must then be translated to a more detailed model for simulation-based testing and finally translated into executable code in a physical implementation. As later stages build on the same core model, it is essential that models used earlier in the pipeline are valid approximations of the more detailed models developed downstream. The focus of this effort is on the design and development of a model translation tool, UPP2SF, and how it integrates system modeling, verification, model-based WCET analysis, simulation, code generation and testing into an MDD-based framework. UPP2SF facilitates automatic conversion of verified timed automata-based models (in UPPAAL) to models that may be simulated and tested (in Simulink/Stateflow). We describe the design rules to ensure the conversion is correct, efficient and applicable to a large class of models. We show how the tool enables MDD of an implantable cardiac pacemaker. We demonstrate that UPP2SF preserves behaviors of the pacemaker model from UPPAAL to Stateflow. The resultant Stateflow ...
Assurance cases in modeldriven development of the pacemaker software
 In Proceedings of International Symposium on Leveraging Applications of Formal Methods, Verification, and Validation (ISoLA 2010), volume 6416 of LNCS
, 2010
Model Driven Scheduling Framework for Multiprocessor SoC Design
 In Workshop on Scheduling for Parallel Computing, SPC 2005
, 2005
Abstract

Cited by 7 (0 self)
Abstract. The evolution of technologies is enabling the integration of complex platforms in a single chip, called a System-on-Chip (SoC). Modern SoCs may include several CPU subsystems to execute software and sophisticated interconnect, in addition to specific hardware subsystems. Designing such mixed hardware and software systems requires new methodologies and tools, or enhancements of old tools. These design tools must be able to satisfy many competing tradeoffs (real-time behavior, performance, low power consumption, time to market, reusability, cost, area, etc.). It is recognized that the decisions taken for scheduling and mapping at a high level of abstraction have a major impact on the global design flow. They can help in satisfying different tradeoffs before proceeding to lower-level refinements. To give scheduling and mapping decisions good potential, we propose in this paper a static scheduling framework for MpSoC design. We show why it is necessary, and how, to integrate different scheduling techniques in such a framework in order to compare and to combine them. This framework is integrated in a model-driven approach in order to keep it open and extensible.
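One representative of the scheduling techniques such a framework would integrate can be sketched, for illustration only (the paper compares and combines several techniques; this is just a generic critical-path list scheduler, with the task graph, costs, and names invented, and communication cost ignored):

```python
def bottom_level(t, succs, cost, memo):
    """Length of the longest cost-weighted path from t to a sink."""
    if t not in memo:
        memo[t] = cost[t] + max(
            (bottom_level(s, succs, cost, memo) for s in succs[t]), default=0)
    return memo[t]

def list_schedule(cost, deps, n_procs):
    """Greedy critical-path list scheduling of a task DAG onto
    n_procs identical processors.  deps maps each task to its
    predecessors.  Returns (makespan, task -> processor mapping)."""
    tasks = list(cost)
    succs = {t: [] for t in tasks}
    for t in tasks:
        for p in deps.get(t, []):
            succs[p].append(t)
    memo = {}
    prio = {t: bottom_level(t, succs, cost, memo) for t in tasks}
    indeg = {t: len(deps.get(t, [])) for t in tasks}
    ready = [t for t in tasks if indeg[t] == 0]
    proc_free = [0.0] * n_procs
    finish, mapping = {}, {}
    while ready:
        ready.sort(key=lambda t: -prio[t])  # most critical task first
        t = ready.pop(0)
        p = min(range(n_procs), key=lambda i: proc_free[i])
        start = max([proc_free[p]] + [finish[d] for d in deps.get(t, [])])
        finish[t] = start + cost[t]
        proc_free[p] = finish[t]
        mapping[t] = p
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish.values()), mapping

# Diamond task graph: A feeds B and C, which both feed D.
cost = {"A": 1, "B": 2, "C": 2, "D": 1}
deps = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(list_schedule(cost, deps, n_procs=2))  # B and C run in parallel
```

Swapping the priority function or the processor-selection rule gives different members of the list-scheduling family, which is exactly the kind of variation point a comparison framework needs to expose.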
Resources in process algebra
 J. Logic and Algebraic Programming
, 2007
Abstract

Cited by 7 (3 self)
The Algebra of Communicating Shared Resources (ACSR) is a timed process algebra which extends classical process algebras with the notion of a resource. It takes the view that the timing behavior of a real-time system depends not only on delays due to process synchronization, but also on the availability of shared resources. Thus, ACSR employs resources as a basic primitive and it represents a real-time system as a collection of concurrent processes which may communicate with each other by means of instantaneous events and compete for the usage of shared resources. Resources are used to model physical devices such as processors, memory modules, communication links, or any other reusable resource of limited capacity. Additionally, they provide a convenient abstraction mechanism for capturing a variety of aspects of system behavior. In this paper we give an overview of ACSR and its probabilistic extension, PACSR, where resources can fail with associated failure probabilities. We present associated analysis techniques for performing qualitative analysis (such as schedulability analysis) and quantitative analysis (such as resource utilization analysis) of process-algebraic descriptions. We also discuss mappings between probabilistic and non-probabilistic models, ...