Results 1 - 2 of 2
Performance prediction of configurable software systems by Fourier learning
- in Proceedings of the International Conference on Automated Software Engineering (ASE). IEEE, 2015
"... Abstract—Understanding how performance varies across a large number of variants of a configurable software system is important for helping stakeholders to choose a desirable variant. Given a software system with n optional features, measuring all its 2n possible configurations to determine their per ..."
Cited by 1 (1 self)
Abstract
Abstract—Understanding how performance varies across a large number of variants of a configurable software system is important for helping stakeholders choose a desirable variant. Given a software system with n optional features, measuring all of its 2^n possible configurations to determine their performance is usually infeasible. Thus, various techniques have been proposed to predict software performance based on a small sample of measured configurations. We propose a novel algorithm based on the Fourier transform that can make predictions for any configurable software system with theoretical guarantees on the accuracy and confidence level specified by the user, while using a number of samples that is minimal up to a constant factor. Empirical results on case studies constructed from real-world configurable systems demonstrate the effectiveness of our algorithm.
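The abstract describes learning a performance function over Boolean feature configurations from a small random sample via its Fourier (Walsh) expansion. The following is a minimal sketch of that general idea, not the authors' exact algorithm: it estimates low-order Fourier coefficients of a pseudo-Boolean performance function from uniformly sampled configurations and uses them to predict unmeasured configurations. The function names and the toy performance model are illustrative assumptions.

```python
import itertools
import random

def chi(subset, config):
    """Walsh basis function: (-1) raised to the parity of the selected features."""
    return -1 if sum(config[i] for i in subset) % 2 else 1

def estimate_low_order_coefficients(perf, n, samples, max_order=2):
    """Estimate Fourier coefficients of order <= max_order from sampled configurations.

    perf    : callable mapping a 0/1 feature tuple to its measured performance
    n       : number of optional features
    samples : list of 0/1 feature tuples that were actually measured
    """
    coeffs = {}
    for order in range(max_order + 1):
        for subset in itertools.combinations(range(n), order):
            # Empirical estimate of E[perf(x) * chi_S(x)] over the sample.
            coeffs[subset] = sum(perf(x) * chi(subset, x) for x in samples) / len(samples)
    return coeffs

def predict(coeffs, config):
    """Predict performance of an unmeasured configuration from the learned coefficients."""
    return sum(c * chi(subset, config) for subset, c in coeffs.items())

if __name__ == "__main__":
    n = 8  # number of optional features (illustrative)

    def true_perf(x):
        # Toy ground-truth model: base cost plus feature and interaction effects.
        return 100 + 20 * x[0] + 15 * x[3] + 10 * x[0] * x[3] - 5 * x[6]

    random.seed(0)
    sample = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(64)]
    coeffs = estimate_low_order_coefficients(true_perf, n, sample, max_order=2)

    test = tuple(random.randint(0, 1) for _ in range(n))
    print("predicted:", round(predict(coeffs, test), 1), "actual:", true_perf(test))
```

With 64 of the 256 possible configurations measured, the low-order estimates approximate the true coefficients; the paper's contribution is the sample-complexity and confidence guarantees for this kind of estimation, which the sketch does not reproduce.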
Generating Performance Distributions via Probabilistic Symbolic Execution
"... ABSTRACT Analyzing performance and understanding the potential bestcase, worst-case and distribution of program execution times are very important software engineering tasks. There have been model-based and program analysis-based approaches for performance analysis. Model-based approaches rely on a ..."
Abstract
ABSTRACT Analyzing performance and understanding the potential best-case, worst-case, and distribution of program execution times are important software engineering tasks. There are model-based and program-analysis-based approaches to performance analysis. Model-based approaches rely on analytical or design models derived from mathematical theories or software architecture abstractions, which are typically coarse-grained and can be imprecise. Program-analysis-based approaches collect program profiles to identify performance bottlenecks, but they often fail to capture overall program performance. In this paper, we propose a performance analysis framework, PerfPlotter. It takes the program source code and a usage profile as inputs and generates a performance distribution that captures the probability distribution over execution times induced by the input distribution. It heuristically explores high-probability and low-probability paths through probabilistic symbolic execution. Once a path is explored, it generates and runs a set of test inputs to model the performance of that path. Finally, it constructs the performance distribution for the program. We have implemented PerfPlotter on the Symbolic PathFinder infrastructure and experimentally demonstrated that PerfPlotter can accurately capture the best-case, worst-case, and distribution of program execution times. We also show that performance distributions can be applied to important tasks such as performance understanding, bug validation, and algorithm selection.
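The abstract's pipeline is: enumerate program paths with their probabilities under a usage profile, run concrete inputs satisfying each path condition, and combine the per-path timings weighted by path probability into a performance distribution. Below is a minimal sketch of that pipeline under simplifying assumptions: the target program is a toy function, and the path probabilities are worked out by hand for a uniform input distribution rather than computed by probabilistic symbolic execution on Symbolic PathFinder. All names (`program_under_test`, `measure`, `performance_distribution`) are hypothetical.

```python
import random
import time

def program_under_test(x):
    """Toy program with two paths whose costs differ."""
    if x % 100 == 0:                                  # rare path
        return sum(i * i for i in range(200_000))     # expensive work
    return sum(range(2_000))                          # cheap work

# Path conditions and probabilities under a uniform usage profile over 1..1000.
# In PerfPlotter these would come from probabilistic symbolic execution; here
# they are derived by hand for the toy program above.
paths = [
    {"name": "rare",   "prob": 0.01,
     "sample": lambda: random.randrange(100, 1001, 100)},
    {"name": "common", "prob": 0.99,
     "sample": lambda: random.choice([x for x in range(1, 1001) if x % 100])},
]

def measure(path, runs):
    """Run concrete inputs that satisfy the path condition and record execution times."""
    times = []
    for _ in range(runs):
        x = path["sample"]()
        start = time.perf_counter()
        program_under_test(x)
        times.append(time.perf_counter() - start)
    return times

def performance_distribution(paths, runs=20):
    """Weight each path's measured times by its path probability."""
    dist = []
    for p in paths:
        for t in measure(p, runs):
            dist.append((t, p["prob"] / runs))
    return dist

if __name__ == "__main__":
    random.seed(0)
    dist = performance_distribution(paths)
    best = min(t for t, _ in dist)
    worst = max(t for t, _ in dist)
    mean = sum(t * w for t, w in dist)
    print(f"best-case: {best:.6f}s  worst-case: {worst:.6f}s  expected: {mean:.6f}s")
```

The weighted observations can be binned into a histogram to visualize the distribution; the rare-but-expensive path is what a plain profiler averaging over random inputs would tend to miss, which is the motivation for weighting by path probability.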