Results 1 - 6 of 6
F.: Self-healing of operational workflow incidents on distributed computing infrastructures. CCGrid'12, 2012
Desprez, Self-healing of workflow activity incidents on distributed computing infrastructures, Future Generation Computer Systems 29 (8), 2013
Cited by 3 (2 self)
Abstract: Distributed computing infrastructures are commonly used through scientific gateways, but operating these gateways requires substantial human intervention to handle operational incidents. This paper presents a self-healing process that quantifies incident degrees of workflow activities from metrics measuring the long-tail effect, application efficiency, data transfer issues, and site-specific problems. These metrics are simple enough to be computed online, and they make few assumptions about the application or resource characteristics. Based on their degree, incidents are classified into levels and associated with sets of healing actions, which are selected based on association rules modeling correlations between incident levels. We specifically study the long-tail effect and propose a new algorithm to control task replication. The healing process is parameterized on real application traces acquired in production on the European Grid Infrastructure. Experimental results obtained in the Virtual Imaging Platform show that the proposed method speeds up execution by up to a factor of 4, consumes up to 26% less resource time than a control execution, and properly detects unrecoverable errors.
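The task-replication idea behind this long-tail mitigation can be sketched as a simple heuristic: replicate still-running tasks whose elapsed time far exceeds the typical completed runtime. This is a minimal illustration, not the paper's actual controller; the function name, interface, and the fixed `factor` threshold are hypothetical, and the paper's algorithm uses richer online metrics.

```python
from statistics import median

def tasks_to_replicate(completed_runtimes, running_elapsed, factor=2.0):
    """Flag long-tail tasks: still-running tasks whose elapsed time
    exceeds `factor` times the median runtime of completed tasks.
    A simplified sketch of replication control (hypothetical interface)."""
    if not completed_runtimes:
        return []  # no reference runtimes yet, nothing to compare against
    med = median(completed_runtimes)
    return [tid for tid, t in running_elapsed.items() if t > factor * med]

# Median completed runtime is 100 s, so tasks running longer than
# 200 s are flagged: here only "t3".
flagged = tasks_to_replicate([90, 100, 110], {"t2": 120, "t3": 250})
```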
S.: Reverse Engineering Utility Functions Using Genetic Programming to Detect Anomalous Behavior in Software. In: Proceedings of the 2010 17th Working Conference on Reverse Engineering, 2010
Cited by 2 (0 self)
Abstract: Recent studies have shown the promise of using utility functions to detect anomalous behavior in software systems at runtime. However, it remains a challenge for software engineers to hand-craft a utility function that achieves both high precision (i.e., few false alarms) and high recall (i.e., few undetected faults). This paper describes a technique that uses genetic programming to automatically evolve a utility function for a specific system, set of resource usage metrics, and precision/recall preference. These metrics are computed using sensor values that monitor a variety of system resources (e.g., memory usage, processor usage, thread count). The technique allows users to specify the relative importance of precision and recall, and builds a utility function to meet those requirements. We evaluated the technique on the open-source Jigsaw web server using ten resource usage metrics and five anomalous behaviors in the form of injected faults in the Jigsaw code and a security attack. To assess the effectiveness of the technique, the precision and recall of the evolved utility function were compared to those of a hand-crafted utility function that uses a simple thresholding scheme. The results show that the evolved function outperformed the hand-crafted function by 10 percent. Index Terms: autonomic computing, utility function, genetic programming, software fault tolerance
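The abstract's precision/recall preference can be encoded as a fitness score for the evolutionary search. One standard way to do this (an assumption here, not stated in the abstract) is the F-beta measure, where `beta` below 1 favors precision and above 1 favors recall:

```python
def fbeta(precision, recall, beta=1.0):
    """F-beta score: a standard single-number fitness that weights
    precision vs. recall. Using it as the GP fitness function is an
    illustrative assumption, not the paper's stated formulation."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# beta=0.5 penalizes false alarms more; beta=2 penalizes missed faults more.
```

A genetic-programming loop would then evolve candidate utility functions and rank each generation by this score on labeled training runs.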
Inoculation Against Malware Infection Using Kernel-level Software Sensors
Abstract: We present a technique for dynamic malware detection that relies on a set of sensors that monitor the interaction of applications with the underlying operating system. By monitoring the requests that each process makes to kernel-level operating system functions, we build a statistical model that describes both clean and infected systems in terms of the distribution of data collected from each sensor. The model parameters are learned from labeled training data gathered from machines infected with canonical samples of malware. We present a technique for detecting malware using the Neyman-Pearson test from classical detection theory. This technique classifies a system as either clean or infected at runtime as measurements are collected from the sensors. We provide experimental results that illustrate the effectiveness of this technique for a selection of malware samples. Additionally, we provide a performance analysis of our sensing and detection techniques in terms of the overhead they introduce to the system. Finally, we show this method to be effective in detecting previously unknown malware when trained to detect similar malware under similar load conditions.
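The core of a Neyman-Pearson test is a likelihood-ratio comparison against a threshold chosen for a target false-alarm rate. The sketch below shows that skeleton only; the probability-model interface (`p_clean`, `p_infected`) and the sensor encoding are hypothetical stand-ins for the paper's learned sensor distributions.

```python
import math

def np_classify(samples, p_clean, p_infected, threshold=0.0):
    """Neyman-Pearson style test: accumulate the log-likelihood ratio of
    sensor observations under the 'infected' vs. 'clean' models and
    compare it to a threshold (set from a desired false-alarm rate).
    p_clean / p_infected map an observation to its model probability
    (hypothetical interface)."""
    llr = sum(math.log(p_infected(x)) - math.log(p_clean(x)) for x in samples)
    return "infected" if llr > threshold else "clean"

# Toy models: a clean system rarely emits event 1; an infected one
# emits events 0 and 1 equally often.
p_clean = lambda x: 0.9 if x == 0 else 0.1
p_infected = lambda x: 0.5
```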
Quantum adiabatic machine learning. Quantum Inf Process, DOI 10.1007/s11128-012-0506-4, 2012
Abstract: We develop an approach to machine learning and anomaly detection via quantum adiabatic evolution. This approach consists of two quantum phases, with some amount of classical preprocessing to set up the quantum problems. In the training phase we identify an optimal set of weak classifiers, to form a single strong classifier. In the testing phase we adiabatically evolve one or more strong classifiers on a superposition of inputs in order to find certain anomalous elements in the classification space. Both the training and testing phases are executed via quantum adiabatic evolution. All quantum processing is strictly limited to two-qubit interactions so as to ensure physical feasibility. We apply and illustrate this approach in detail to the problem of software verification and validation, with a specific example of the learning phase applied to a problem of interest in flight control systems. Beyond this example, the algorithm can be used to attack a broad class of anomaly detection problems.
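The classical form of the strong classifier this training phase targets is a weighted vote over weak classifiers; the quantum adiabatic step optimizes the weight selection. A minimal classical sketch of that voting step (names and the +1/-1 label convention are assumptions for illustration):

```python
def strong_classify(x, weak_classifiers, weights):
    """Strong classifier as a weighted vote over weak classifiers,
    each returning +1 or -1. The quantum training phase described in
    the abstract selects the weights; here they are given directly."""
    s = sum(w * h(x) for h, w in zip(weak_classifiers, weights))
    return 1 if s >= 0 else -1

# Three toy weak classifiers on integer inputs.
weak = [lambda x: 1 if x > 0 else -1,
        lambda x: 1 if x % 2 == 0 else -1,
        lambda x: 1 if x < 10 else -1]
```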