Taming Reflection -- Aiding Static Analysis in the Presence of Reflection and Custom Class Loaders. 2011. Cited by 47 (9 self).
Static program analyses and transformations for Java face many problems when analyzing programs that use reflection or custom class loaders: How can a static analysis know which reflective calls the program will execute? How can it get hold of classes that the program loads from remote locations or even generates on the fly? And if the analysis transforms classes, how can these classes be re-inserted into a program that uses custom class loaders? In this paper, we present TamiFlex, a tool chain that offers a partial but often effective solution to these problems. With TamiFlex, programmers can use existing static-analysis tools to produce results that are sound at least with respect to a set of recorded program runs. TamiFlex inserts runtime checks into the program that warn the user in case the program executes reflective calls that the analysis did not take into account. TamiFlex further allows programmers to re-insert offline-transformed classes into a program. We evaluate TamiFlex in two scenarios: benchmarking with the DaCapo benchmark suite and analysing large-scale interactive applications. For the latter, TamiFlex significantly improves code coverage of the static analyses, while for the former our approach even appears complete: the inserted runtime checks issue no warning. Hence, for the first time, TamiFlex enables sound static whole-program analyses on DaCapo. During this process, TamiFlex usually incurs less than 10% runtime overhead.
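To make the record-then-check mechanism concrete, here is a minimal sketch in Java, assuming reflective lookups are routed through a wrapper rather than through TamiFlex's actual agent-based interception; ReflectionLog and its log handling are hypothetical.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for agent-based interception: callers go through this
// wrapper instead of calling Class.forName directly.
public final class ReflectionLog {
    // Targets seen in recorded runs (a real tool would load these from a log file;
    // in "record" mode the set starts empty and grows).
    private static final Set<String> recorded = ConcurrentHashMap.newKeySet();

    public static Class<?> forName(String name) throws ClassNotFoundException {
        if (recorded.add(name)) {
            // Target the static analysis never saw: warn, as the inserted
            // runtime checks described in the abstract do.
            System.err.println("[warning] unrecorded reflective load: " + name);
        }
        return Class.forName(name);
    }
}
```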
Finding Programming Errors Earlier by Evaluating Runtime Monitors Ahead-of-Time. In FSE, 2008. Cited by 44 (19 self).
Runtime monitoring allows programmers to validate, for instance, the proper use of application interfaces. Given a property specification, a runtime monitor tracks appropriate runtime events to detect violations and possibly execute recovery code. Although powerful, runtime monitoring inspects only one program run at a time and so may require many program runs to find errors. Therefore, in this paper, we present ahead-of-time techniques that can (1) prove the absence of property violations on all program runs, or (2) flag locations where violations are likely to occur. Our work focuses on tracematches, an expressive runtime monitoring notation for reasoning about groups of correlated objects. We describe a novel flow-sensitive static analysis for analyzing monitor states. Our abstraction captures both positive information (a set of objects could be in a particular monitor state) and negative information (the set is known not to be in a state). The analysis resolves heap references by combining the results of three points-to and alias analyses. We also propose a machine learning phase to filter out likely false positives. We applied a set of 13 tracematches to the DaCapo benchmark suite and SciMark2. Our static analysis rules out all potential points of failure in 50% of the cases and eliminates 75% of false positives on average. Our machine learning algorithm correctly classifies the remaining potential points of failure in all but three of 461 cases. The approach revealed defects and suspicious code in three benchmark programs.
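As a point of reference, a minimal per-object runtime monitor for one classic correlated-object property might look like the sketch below; HasNextMonitor and its event hooks are illustrative, not the paper's tracematch machinery (whose static analysis aims to prove such instrumentation unnecessary at most sites).

```java
import java.util.Iterator;
import java.util.Map;
import java.util.WeakHashMap;

// Minimal, single-threaded sketch of a monitor for the property
// "call hasNext() before every next()" on each iterator object.
public final class HasNextMonitor {
    private enum State { SAFE, UNKNOWN }
    // Weak keys so monitoring does not keep dead iterators alive.
    private static final Map<Iterator<?>, State> states = new WeakHashMap<>();

    public static void onHasNext(Iterator<?> it) { states.put(it, State.SAFE); }

    public static void onNext(Iterator<?> it) {
        if (states.getOrDefault(it, State.UNKNOWN) != State.SAFE) {
            System.err.println("violation: next() without preceding hasNext()");
        }
        states.put(it, State.UNKNOWN); // next() consumes the hasNext() guarantee
    }
}
```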
Cork: Dynamic memory leak detection for garbage-collected languages. In POPL, 2007. Cited by 44 (2 self).
A memory leak in a garbage-collected program occurs when the program inadvertently maintains references to objects that it no longer needs. Memory leaks cause systematic heap growth, degrading performance and resulting in program crashes after perhaps days or weeks of execution. Prior approaches for detecting memory leaks rely on heap differencing or detailed object statistics which store state proportional to the number of objects in the heap. These overheads preclude their use on the same processor for deployed long-running applications. This paper introduces a dynamic heap-summarization technique based on type that accurately identifies leaks, is space efficient (adding less than 1% to the heap), and is time efficient (adding 2.3% on average to total execution time). We implement this approach in Cork which utilizes dynamic type information and garbage collection to summarize the live objects in a type points-from graph (TPFG) whose nodes (types) and edges (references between types) are annotated with volume. Cork compares TPFGs across multiple collections, identifies growing data structures, and computes a type slice for the user. Cork is accurate: it identifies systematic heap growth with no false positives in 4 of 15 benchmarks we tested. Cork’s slice report enabled us (non-experts) to quickly eliminate growing data structures in SPECjbb2000 and Eclipse, something their developers had not previously done. Cork is accurate, scalable, and efficient enough to consider using online.
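A rough sketch of the snapshot-and-compare idea follows; the names (TypeGraphSnapshot, reportGrowth) are hypothetical, and Cork itself builds this summary inside the garbage collector rather than in application code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of one type points-from graph (TPFG) snapshot: nodes are types,
// an edge "B->A" means objects of type B reference objects of type A, and
// both carry the volume (bytes) observed at one garbage collection.
final class TypeGraphSnapshot {
    final Map<String, Long> typeVolume = new HashMap<>(); // bytes per type
    final Map<String, Long> edgeVolume = new HashMap<>(); // bytes per "src->dst" edge

    void addObject(String type, long bytes) {
        typeVolume.merge(type, bytes, Long::sum);
    }

    void addReference(String fromType, String toType, long bytes) {
        edgeVolume.merge(fromType + "->" + toType, bytes, Long::sum);
    }

    // Report types whose volume grew since the previous collection; Cork ranks
    // persistent growers across many collections and emits a type slice.
    void reportGrowth(TypeGraphSnapshot previous) {
        typeVolume.forEach((type, bytes) -> {
            long before = previous.typeVolume.getOrDefault(type, 0L);
            if (bytes > before) {
                System.out.printf("%s grew by %d bytes%n", type, bytes - before);
            }
        });
    }
}
```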
Tracking bad apples: reporting the origin of null and undefined value errors. In Proc. of the ACM SIGPLAN Conference on Object-Oriented Programming Systems and Applications, 2007. Cited by 39 (5 self).
Despite extensive testing, deployed software still crashes. Other than a stack trace, these crashes offer little guidance to developers, making them hard to reproduce and fix. This work seeks to ease error correction by providing diagnostic information about the origins of null pointer exceptions and undefined variables. The key idea is to use value piggybacking to record and report useful debugging information in undefined memory. For example, instead of storing zero at a null store, store the origin program location, then correctly propagate this value through assignment statements and comparisons. If the program dereferences this value, report the origin. We describe, implement, and evaluate low-overhead value piggybacking for origin tracking of null pointer exceptions in deployed Java programs. We show that the reported origins add useful debugging information over a stack trace. We also describe, implement, and evaluate origin tracking of undefined values in C, C++, and Fortran programs in a memory error testing tool built with Valgrind. Together these implementations demonstrate that value piggybacking yields useful debugging information that can ease bug diagnosis and repair.
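A conceptual sketch of value piggybacking, assuming a tagged integer stands in for the modified null/undefined representation that a real VM or Valgrind tool would manipulate; OriginTracking and its site ids are invented for illustration.

```java
// Conceptual sketch: instead of storing a plain zero for an undefined value,
// store an origin tag, propagate it through copies, and report it on use.
final class OriginTracking {
    // High bit marks "this is not a real value, it is an origin tag".
    private static final int ORIGIN_TAG = 0x8000_0000;

    static int undefinedAt(int siteId) { return ORIGIN_TAG | siteId; } // the "null store"
    static boolean isUndefined(int v)  { return (v & ORIGIN_TAG) != 0; }

    static int use(int v) {
        if (isUndefined(v)) { // the "dereference": report the recorded origin
            throw new IllegalStateException(
                "undefined value originating at site " + (v & ~ORIGIN_TAG));
        }
        return v;
    }

    public static void main(String[] args) {
        int x = undefinedAt(42); // site 42 produced an undefined value
        int y = x;               // the origin piggybacks through the copy
        try {
            use(y);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // names site 42, not just "NPE here"
        }
    }
}
```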
Method-specific dynamic compilation using logistic regression. In Proceedings of the ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA'06), 2006. Cited by 38 (7 self).
Determining the best set of optimizations to apply to a program has been a long-standing problem for compiler writers. To reduce …
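Although the abstract is truncated in this listing, the mechanism the title names is clear enough to sketch: a per-optimization logistic model over cheap method features. The features and weights below are invented for illustration, not the paper's.

```java
// Sketch: for each candidate optimization, a trained logistic model predicts
// from method features whether applying it to this method is likely to pay off.
final class OptimizationPredictor {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // One weight vector (plus bias) per optimization, learned offline.
    static boolean shouldApply(double[] weights, double bias, double[] features) {
        double z = bias;
        for (int i = 0; i < features.length; i++) z += weights[i] * features[i];
        return sigmoid(z) > 0.5; // apply if P(benefit) exceeds one half
    }

    public static void main(String[] args) {
        // Hypothetical features: bytecode size, loop count, has-calls flag.
        double[] features = {120, 3, 1};
        double[] wInline  = {-0.01, 0.4, -0.2}; // illustrative weights
        System.out.println("apply aggressive inlining? "
                + shouldApply(wInline, 0.1, features));
    }
}
```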
Finding low-utility data structures. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2010. Cited by 37 (15 self).
Many opportunities for easy, big-win program optimizations are missed by compilers. This is especially true in highly layered Java applications. Often at the heart of these missed optimization opportunities lie computations that, with great expense, produce data values that have little impact on the program’s final output. Constructing a new date formatter to format every date, or populating a large set full of expensively constructed structures only to check its size: these involve costs that are out of line with the benefits gained. This disparity between the formation costs and accrued benefits of data structures is at the heart of much runtime bloat. We introduce a run-time analysis to discover these low-utility data structures. The analysis employs dynamic thin slicing, which naturally associates costs with value flows rather than raw data flows. It constructs a model of the incremental, hop-to-hop costs …
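A toy sketch of the cost/benefit bookkeeping this suggests, assuming a ledger that follows value flows in thin-slice style; UtilityLedger, its hooks, and its threshold are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Toy ledger: every value carries the cumulative cost of producing it (cost
// follows value flow, not all data flow); benefit accrues only when the value
// reaches output. High cost plus near-zero benefit flags low utility.
final class UtilityLedger {
    private final Map<Object, Long> cost = new HashMap<>();
    private final Map<Object, Long> benefit = new HashMap<>();

    void produced(Object value, long instructions) {
        cost.merge(value, instructions, Long::sum);
    }

    void flowed(Object from, Object to) { // value copied: its cost follows it
        cost.merge(to, cost.getOrDefault(from, 0L), Long::sum);
    }

    void reachedOutput(Object value) { benefit.merge(value, 1L, Long::sum); }

    boolean lowUtility(Object value) { // threshold chosen arbitrarily here
        return cost.getOrDefault(value, 0L) > 10_000
            && benefit.getOrDefault(value, 0L) == 0;
    }
}
```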
A scalable technique for characterizing the usage of temporaries in framework-intensive Java applications. In Proceedings of the ACM SIGSOFT Symposium and the European Conference on Foundations of Software Engineering (SIGSOFT '08/FSE-16), 2008. Cited by 36 (7 self).
Framework-intensive applications (e.g., Web applications) heavily use temporary data structures, often resulting in performance bottlenecks. This paper presents an optimized blended escape analysis to approximate object lifetimes and thus to identify these temporaries and their uses. Empirical results show that this optimized analysis on average prunes 37% of the basic blocks in our benchmarks, and achieves a speedup of up to 29 times compared to the original analysis. Newly defined metrics quantify key properties of temporary data structures and their uses. A detailed empirical evaluation offers the first characterization of temporaries in framework-intensive applications. The results show that temporary data structures can include up to 12 distinct object types and can traverse through as many as 14 method invocations before being captured.
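For readers unfamiliar with the terminology, a small example of the kind of temporary, and its capture point, that such an analysis reasons about; the code is illustrative, not from the paper.

```java
import java.util.ArrayList;
import java.util.List;

// 'copy' is a temporary: allocated inside the call, traversed once, and
// "captured" (last reachable) within sumLengths, never escaping to the
// wider heap -- exactly the lifetime an escape analysis can bound.
final class TemporariesDemo {
    static int sumLengths(List<String> parts) {
        List<String> copy = new ArrayList<>(parts);
        int total = 0;
        for (String s : copy) total += s.length();
        return total;
    }
}
```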
Component-Based Lock Allocation. Cited by 34 (1 self).
The allocation of lock objects to critical sections in concurrent programs affects both performance and correctness. Recent work explores automatic lock allocation, aiming primarily to minimize conflicts and maximize parallelism by allocating locks to individual critical-section interferences. We investigate component-based lock allocation, which allocates locks to entire groups of interfering critical sections. Our allocator depends on a thread-based side-effect analysis, and benefits from precise points-to and may-happen-in-parallel information. Thread-local object information has a small impact, and dynamic locks do not improve significantly on static locks. We experiment with a range of small and large Java benchmarks on 2-way, 4-way, and 8-way machines, and find that a single static lock is sufficient for mtrt, that performance degrades by 10% for hsqldb, that jbb2000 becomes mostly serialized, and that for lusearch, xalan, and jbb2005, component-based lock allocation recovers the performance of the original program.
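A minimal sketch of the component idea: union interfering critical sections, then hand every section in a connected component the same lock. ComponentLockAllocator and its integer section ids are hypothetical; the real allocator operates on static analysis results, not at run time.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

// Union-find over critical sections: interference edges merge components,
// and each connected component is assigned exactly one lock.
final class ComponentLockAllocator {
    private final Map<Integer, Integer> parent = new HashMap<>();
    private final Map<Integer, ReentrantLock> locks = new HashMap<>();

    private int find(int s) {
        parent.putIfAbsent(s, s);
        int p = parent.get(s);
        if (p != s) { p = find(p); parent.put(s, p); } // path compression
        return p;
    }

    void interfere(int a, int b) { parent.put(find(a), find(b)); } // union

    ReentrantLock lockFor(int section) {
        return locks.computeIfAbsent(find(section), k -> new ReentrantLock());
    }
}
```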
Dynamic Memory Balancing for Virtual Machines. In Proceedings of the 2009 ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, 2009. Cited by 34 (3 self).
Virtualization essentially enables multiple operating systems and applications to run on one physical computer by multiplexing hardware resources. A key motivation for applying virtualization is to improve hardware resource utilization while maintaining reasonable quality of service. However, such a goal cannot be achieved without efficient resource management. Though most physical resources, such as processor cores and I/O devices, are shared among virtual machines using time slicing and can be scheduled flexibly based on priority, allocating an appropriate amount of main memory to virtual machines is more challenging. Different applications have different memory requirements. Even a single application shows varied working set sizes during its execution. An optimal memory management strategy under a virtualized environment thus needs to dynamically adjust memory allocation for each virtual machine, which further requires a prediction model that forecasts its host physical memory needs on the fly. This paper introduces MEmory Balancer (MEB) which dynamically monitors the memory usage of each virtual machine, accurately predicts its memory needs, and periodically reallocates host memory. MEB uses two effective memory predictors which, respectively, estimate the amount of memory available for reclaiming without a notable performance drop, and additional memory required for reducing the virtual machine paging penalty. Our experimental results show that our prediction schemes yield high accuracy and low overhead. Furthermore, the overall system throughput can be significantly improved with MEB.
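A control-loop sketch of the balancer's overall shape, assuming the two predictors and a ballooning hook sit behind a Vm interface; every name here is a placeholder, not MEB's API.

```java
import java.util.List;

// Periodic rebalancing loop: shrink VMs holding reclaimable memory, grow VMs
// whose paging indicates a memory deficit. Predictors are placeholders.
final class MemoryBalancer implements Runnable {
    interface Vm {
        long currentAllocation();   // host memory currently assigned
        long reclaimableEstimate(); // memory the VM can give up without penalty
        long pagingDeficit();       // extra memory that would cut VM paging
        void setTarget(long bytes); // e.g. via a balloon driver (placeholder)
    }

    private final List<Vm> vms;
    MemoryBalancer(List<Vm> vms) { this.vms = vms; }

    @Override public void run() { // invoked periodically by a scheduler
        for (Vm vm : vms) {
            long target = vm.currentAllocation()
                        - vm.reclaimableEstimate() // shrink what is idle
                        + vm.pagingDeficit();      // grow what is thrashing
            vm.setTarget(target);
        }
    }
}
```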
CDx: A Family of Real-time Java Benchmarks. Cited by 34 (7 self).
Java is becoming a viable platform for hard real-time computing. There are production and research real-time Java VMs, as well as applications in both the military and civil sectors. Technological advances and increased adoption of Real-time Java contrast significantly with the lack of real-time benchmarks. The few benchmarks that exist are either low-level synthetic micro-benchmarks or benchmarks used internally by companies, making it difficult to independently verify and repeat reported results. This paper presents the CDx (Collision Detector) benchmark suite, an open-source application benchmark suite that targets different hard and soft real-time virtual machines. CDx is, at its core, a real-time benchmark with a single periodic task, which implements aircraft collision detection based on simulated radar frames. The benchmark can be configured to use different sets of real-time features and comes with a number of workloads. We describe the architecture of the benchmark and characterize the workload based on input parameters.
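The single-periodic-task structure is easy to picture. Below, a plain-Java stand-in sketches its shape; CDx itself targets real-time VMs (e.g. RTSJ periodic threads), and the period and workload body here are illustrative.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// One periodic task that ingests a simulated radar frame and runs collision
// detection; a scheduled executor stands in for a real-time periodic thread.
final class PeriodicDetector {
    public static void main(String[] args) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            long start = System.nanoTime();
            // detectCollisions(nextRadarFrame()); // hypothetical workload body
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.println("frame processed in " + micros + " us");
        }, 0, 10, TimeUnit.MILLISECONDS); // 10 ms period, chosen for illustration
    }
}
```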