Folding@Home and Genome@Home: Using distributed computing to tackle previously intractable problems in computational biology (2003)

by S M Larson, C Snow, V S Pande
Venue: Modern Methods in Computational Biology
Results 1 - 10 of 103

Measuring and Understanding User Comfort with Resource Borrowing

by Ashish Gupta, Bin Lin, Peter A. Dinda - In Proceedings of the 13th IEEE International Symposium on High Performance Distributed Computing, 2004
"... Resource borrowing is a common underlying approach in grid computing and thin-client computing. In both cases, external processes borrow resources that would otherwise be delivered to the interactive processes of end-users, creating contention that slows these processes and decreases the comfort of ..."
Abstract - Cited by 48 (23 self) - Add to MetaCart
Resource borrowing is a common underlying approach in grid computing and thin-client computing. In both cases, external processes borrow resources that would otherwise be delivered to the interactive processes of end-users, creating contention that slows these processes and decreases the comfort of the end-users. How resource borrowing and user comfort are related is not well understood and thus resource borrowing tends to be extremely conservative. To address this lack of understanding, we have developed a sophisticated distributed application for directly measuring user comfort with the borrowing of CPU time, memory space, and disk bandwidth. Using this tool, we have conducted a controlled user study with qualitative and quantitative results that are of direct interest to the designers of grid and thin-client systems. We have found that resource borrowing can be quite aggressive without creating user discomfort, particularly in the case of memory and disk. We also describe an on-going Internet-wide study using our tool.
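
To give a flavor of what such a measurement tool does, here is a minimal sketch, assuming a hypothetical protocol in which background processes ramp up CPU borrowing in steps and the user presses Enter at the first sign of sluggishness; the authors' actual tool also covers memory space and disk bandwidth.

```python
# Hypothetical sketch of a CPU-borrowing comfort probe (not the authors' tool).
# Background workers busy-loop for a growing fraction of each 100 ms period;
# the probe records the borrowing level at which the user signals discomfort.

import multiprocessing
import threading
import time

def burn_cpu(stop_event, duty):
    """Busy-loop for a `duty`-fraction of each 100 ms period."""
    period = 0.1
    while not stop_event.is_set():
        start = time.monotonic()
        while time.monotonic() - start < period * duty.value:
            pass                          # the "borrowed" CPU time
        time.sleep(period * (1 - duty.value))

def run_probe(step=0.1, step_seconds=30):
    stop = multiprocessing.Event()
    duty = multiprocessing.Value("d", 0.0)    # shared borrowing level in [0, 1]
    workers = [multiprocessing.Process(target=burn_cpu, args=(stop, duty))
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()

    discomfort = threading.Event()
    threading.Thread(
        target=lambda: (input("Press Enter when the machine feels slow... "),
                        discomfort.set()),
        daemon=True).start()

    level = 0.0
    while level < 1.0 and not discomfort.is_set():
        level = min(1.0, level + step)
        duty.value = level                    # borrow a larger CPU share
        discomfort.wait(step_seconds)         # hold this level for a while

    stop.set()
    for w in workers:
        w.join()
    if discomfort.is_set():
        print(f"Discomfort signalled at roughly {level:.0%} CPU borrowing")
    else:
        print("Full borrowing reached without a discomfort signal")

if __name__ == "__main__":
    run_probe()
```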

Aneka: Next-Generation Enterprise Grid Platform for e-Science and e-Business

by Xingchen Chu, Krishna Nadiminti, Chao Jin, Srikumar Venugopal, Rajkumar Buyya - Proceedings of the 3rd IEEE International Conference on e-Science and Grid Computing, 2007
"... In this paper, we present the design of Aneka, a.NET based service-oriented platform for desktop grid computing that provides: (i) a configurable service container hosting pluggable services for discovering, scheduling and balancing various types of workloads and (ii) a flexible and extensible frame ..."
Abstract - Cited by 38 (18 self) - Add to MetaCart
In this paper, we present the design of Aneka, a .NET-based service-oriented platform for desktop grid computing that provides: (i) a configurable service container hosting pluggable services for discovering, scheduling and balancing various types of workloads and (ii) a flexible and extensible framework/API supporting various programming models including threading, batch processing, MPI and dataflow. Users and developers can easily use different programming models and the services provided by the container to run their applications over desktop Grids managed by Aneka. We present the implementation of both the essential and advanced services within the platform. We evaluate the system with applications using the grid task and dataflow models on top of the infrastructure and conclude with some future directions of the current system.

Citation Context

...ed networked PCs for performing computational tasks is well-established and there are several projects in this area. Some of the more well-known ones are the @Home projects (SETI@Home[2], Folding@Home[13]), Entropia[3], XtremeWeb[5], Alchemi[6] and SZTAKI Desktop Grid [7]. The approach followed by SETI@Home and other related projects is to dispatch workloads consisting of data to be analysed, from a c...

Idletime scheduling with preemption intervals

by Lars Eggert - 20th ACM Symposium on Operating Systems Principles, 2005
"... ABSTRACT * This paper presents the idletime scheduler; a generic, kernel-level mechanism for using idle resource capacity in the background without slowing down concurrent foreground use. Many operating systems fail to support transparent background use and concurrent foreground performance can decr ..."
Abstract - Cited by 31 (0 self) - Add to MetaCart
This paper presents the idletime scheduler, a generic, kernel-level mechanism for using idle resource capacity in the background without slowing down concurrent foreground use. Many operating systems fail to support transparent background use, and concurrent foreground performance can decrease by 50% or more. The idletime scheduler minimizes this interference by partially relaxing the work conservation principle during preemption intervals, during which it serves no background requests even if the resource is idle. The length of preemption intervals is a controlling parameter of the scheduler: short intervals aggressively utilize idle capacity; long intervals reduce the impact of background use on foreground performance. Unlike existing approaches to establish prioritized resource use, idletime scheduling requires only localized modifications to a limited number of system schedulers. In experiments, a FreeBSD implementation for idletime network scheduling maintains over 90% of foreground TCP throughput, while allowing concurrent, high-rate UDP background flows to consume up to 80% of remaining link capacity. A FreeBSD disk scheduler implementation maintains 80% of foreground read performance, while enabling concurrent background operations to reach 70% throughput.
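
The preemption-interval mechanism can be made concrete with a toy discrete-time model; this is a sketch of the idea, not the paper's FreeBSD kernel implementation, and the one-request-per-tick dispatch is an assumed simplification.

```python
# Toy discrete-time model of idletime scheduling: background requests are
# served only after the resource has been free of foreground work for more
# than `preemption_interval` consecutive ticks.

from collections import deque

class IdletimeScheduler:
    def __init__(self, preemption_interval):
        self.preemption_interval = preemption_interval
        self.foreground = deque()
        self.background = deque()
        self.idle_ticks = 0  # consecutive ticks without foreground activity

    def submit(self, request, is_foreground):
        (self.foreground if is_foreground else self.background).append(request)

    def tick(self):
        """Dispatch at most one request per tick; return what was served."""
        if self.foreground:
            self.idle_ticks = 0  # any foreground use restarts the interval
            return ("fg", self.foreground.popleft())
        self.idle_ticks += 1
        # Relaxed work conservation: stay idle during the preemption
        # interval even though background requests are queued.
        if self.background and self.idle_ticks > self.preemption_interval:
            return ("bg", self.background.popleft())
        return None

sched = IdletimeScheduler(preemption_interval=3)
sched.submit("bg-1", is_foreground=False)
print([sched.tick() for _ in range(5)])  # bg-1 is served only on tick 4
```

An interval of zero recovers a plain work-conserving priority queue; longer intervals buy foreground isolation at the cost of unused idle capacity.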

Citation Context

... the V System [33] and Condor [19]. Another category is data migration systems, which push data to remote machines that execute a common process, such as SETI@home [14], Folding@home, and Genome@home [17]. Other data migration systems exploit idle remote memory as secondary storage [21][26]. Most existing systems that try to exploit idle capacity do not establish background processing as a separate, l...

Scheduling Task Parallel Applications For Rapid Turnaround on Desktop Grids

by Derrick Kondo, 2005
"... ..."
Abstract - Cited by 28 (3 self) - Add to MetaCart
Abstract not found

Folding@home: Lessons from eight years of volunteer distributed computing

by Adam L Beberg, Daniel L Ensign, Guha Jayachandran, Siraj Khaliq, Vijay S Pande - In 8th IEEE International Workshop on High Performance Computational Biology (HiCOMB), 2009
"... Abstract ..."
Abstract - Cited by 28 (0 self) - Add to MetaCart
Abstract not found

Citation Context

... power for bridging the time scale gap, but it introduces many new problems with respect to security, reliability, and coordination. The architecture of Stanford University’s Folding@home project [2][3] additionally involves a combination of load balancing, result feedback, and redundancy that are only required on volunteer systems of this scale. 2. Related work The Distributed Computing System (DCS...

Lottery trees: motivational deployment of networked systems

by John R. Douceur - in SIGCOMM ’07: Proceedings of the 2007 conference on Applications, technologies, architectures, and protocols for computer communications, 2007
"... We address a critical deployment issue for network systems, namely motivating people to install and run a distributed service. This work is aimed primarily at peer-to-peer systems, in which the decision and effort to install a service falls to individuals rather than to a central planner. This probl ..."
Abstract - Cited by 27 (1 self) - Add to MetaCart
We address a critical deployment issue for network systems, namely motivating people to install and run a distributed service. This work is aimed primarily at peer-to-peer systems, in which the decision and effort to install a service falls to individuals rather than to a central planner. This problem is relevant for bootstrapping systems that rely on the network effect, wherein the benefits are not felt until deployment reaches a significant scale, and also for deploying asymmetric systems, wherein the set of contributors is different than the set of beneficiaries. Our solution is the lottery tree (lottree), a mechanism that probabilistically encourages both participation in the system and also solicitation of new participants. We define the lottree mechanism and formally state seven properties that encourage contribution, solicitation, and fair play. We then present the Pachira lottree scheme, which satisfies five of these seven properties, and we prove this to be a maximal satisfiable subset. Using simulation, we determine optimal parameters for the Pachira lottree scheme, and we determine how to configure a lottree system for achieving various deployment scales based on expected installation effort. We also present extensive sensitivity analyses, which bolster the generality of our conclusions.
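
The exact Pachira weighting is defined in the paper; as a rough illustration of the lottree idea only, the sketch below draws a winner with probability proportional to a node's own contribution plus a discounted share of the contribution of the participants it solicited. The discount parameter `beta` and the weighting formula are assumptions, not the paper's scheme.

```python
# Generic weighted-lottery stand-in for a lottree (not the Pachira scheme):
# recruiting contributors raises your winning odds beyond your own contribution,
# which rewards both participation and solicitation.

import random

class Node:
    def __init__(self, name, contribution):
        self.name = name
        self.contribution = contribution
        self.children = []  # participants this node solicited

def subtree_contribution(node):
    return node.contribution + sum(subtree_contribution(c) for c in node.children)

def weight(node, beta=0.5):
    solicited = subtree_contribution(node) - node.contribution
    return node.contribution + beta * solicited  # assumed weighting

def draw_winner(root, beta=0.5):
    nodes, stack = [], [root]
    while stack:
        n = stack.pop()
        nodes.append(n)
        stack.extend(n.children)
    return random.choices(nodes, weights=[weight(n, beta) for n in nodes], k=1)[0]

# Example: recruiting 'b' and 'c' raises 'a's odds beyond its own contribution.
a = Node("a", 10.0)
b = Node("b", 5.0)
c = Node("c", 20.0)
a.children = [b, c]
print(draw_winner(a).name)
```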

Citation Context

...ootstrap, as evidenced by the numerous developed peer-to-peer systems [5], few of which have become popular. Asymmetric distributed systems, such as BOINC [4], GPU [25] and Folding@Home / Genome@Home [21], are even more problematic. Because potential contributors are asked to provide computation, storage, or bandwidth toward a goal that does not directly benefit them, they have little or no incentive ...

Designing a runtime system for volunteer computing

by David P. Anderson, Carl Christensen, Bruce Allen - IEEE Computer, 2006
"... Volunteer computing is a form of distributed computing in which the general public volunteers processing and storage to scientific research projects. BOINC, a middleware system for volunteer computing, is currently used by about 20 projects, to which over 600,000 volunteers and 1,000,000 computers s ..."
Abstract - Cited by 22 (2 self) - Add to MetaCart
Volunteer computing is a form of distributed computing in which the general public volunteers processing and storage to scientific research projects. BOINC, a middleware system for volunteer computing, is currently used by about 20 projects, to which over 600,000 volunteers and 1,000,000 computers supply 350 TeraFLOPS of processing power. A BOINC client program runs on the volunteered hosts and manages the execution of applications. Together with a library linked to applications, it implements a runtime system providing process management, graphics control, checkpointing, file access, and other functions. This runtime system must handle widely varying applications, must provide features and properties desired by volunteers, and must work on many platforms. This paper describes the problems in designing a runtime system having these properties, and how these problems are solved in BOINC.
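
The actual BOINC runtime is a C/C++ library; the sketch below illustrates in Python, with hypothetical helper names, the checkpoint-and-progress contract such a runtime imposes on applications: compute in small steps, checkpoint atomically when the runtime allows, and resume from the last checkpoint after preemption.

```python
# Illustrative volunteer-computing application loop (hypothetical helpers,
# not the BOINC API): a preempted host loses at most one checkpoint interval.

import json
import os

CHECKPOINT = "state.json"

def load_state():
    # Resume from the last checkpoint if one exists.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"i": 0, "acc": 0.0}

def save_state(state):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename: never a torn checkpoint

def time_to_checkpoint(i, every=1000):
    # Stand-in for asking the runtime; a real client also weighs disk I/O
    # cost and volunteer preferences before granting a checkpoint.
    return i > 0 and i % every == 0

def report_progress(fraction):
    # Stand-in for the runtime's progress call (drives the client UI).
    pass

def main(n=100_000):
    state = load_state()
    for i in range(state["i"], n):
        state["acc"] += i * i  # one unit of application work
        state["i"] = i + 1
        if time_to_checkpoint(i):
            save_state(state)
            report_progress(state["i"] / n)
    save_state(state)
    print("result:", state["acc"])

if __name__ == "__main__":
    main()
```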

Citation Context

...ing and storage resources to scientific research projects. Early volunteer computing projects include the Great Internet Mersenne Prime Search [9], SETI@home [1], Distributed.net [6] and Folding@home [10]. Today the approach is being used in many areas, including high-energy physics, molecular biology, medicine, astrophysics, and climate dynamics. This type of computing can provide great power (SETI@ho...

QR Factorization of Tall and Skinny Matrices in a Grid Computing Environment

by Emmanuel Agullo, Camille Coti, Jack Dongarra, Thomas Herault, Julien Langou
"... Previous studies have reported that common dense linear algebra operations do not achieve speed up by using multiple geographical sites of a computational grid. Because such operations are the building blocks of most scientific applications, conventional supercomputers are still strongly predominant ..."
Abstract - Cited by 21 (7 self) - Add to MetaCart
Previous studies have reported that common dense linear algebra operations do not achieve speed up by using multiple geographical sites of a computational grid. Because such operations are the building blocks of most scientific applications, conventional supercomputers are still strongly predominant in high-performance computing and the use of grids for speeding up large-scale scientific problems is limited to applications exhibiting parallelism at a higher level. We have identified two performance bottlenecks in the distributed memory algorithms implemented in ScaLAPACK, a state-of-the-art dense linear algebra library. First, because ScaLAPACK assumes a homogeneous communication network, the implementations of ScaLAPACK algorithms lack locality in their communication pattern. Second, the number of messages sent in the ScaLAPACK algorithms is significantly greater than other algorithms that trade flops for communication. In this paper, we present a new approach for computing a QR factorization – one of the main dense linear algebra kernels – of tall and skinny matrices in a grid computing environment that overcomes these two bottlenecks. Our contribution is to articulate a recently proposed algorithm (Communication Avoiding QR) with a topology-aware middleware (QCG-OMPI) in order to confine intensive communications (ScaLAPACK calls) within the different geographical sites. An experimental study conducted on the Grid’5000 platform shows that the resulting performance increases linearly with the number of geographical sites on large-scale problems (and is in particular consistently higher than ScaLAPACK’s).
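
The communication-avoiding kernel behind this approach reduces, for the tall-and-skinny case, to factoring row blocks independently (one per site, say) and then factoring the stacked small R factors, so the only inter-site traffic is n-by-n R factors rather than ScaLAPACK's many panel messages. A minimal single-machine numpy sketch of that reduction, not the QCG-OMPI implementation:

```python
# One-level TSQR-style reduction: local QRs are embarrassingly parallel;
# a single reduction step combines the small R factors.

import numpy as np

def tsqr_r(A, block_rows):
    """Return the R factor of A via a one-level TSQR-style reduction."""
    blocks = [A[i:i + block_rows] for i in range(0, A.shape[0], block_rows)]
    local_rs = [np.linalg.qr(b, mode="r") for b in blocks]  # no communication
    return np.linalg.qr(np.vstack(local_rs), mode="r")      # one reduction step

A = np.random.randn(10_000, 32)  # tall and skinny: m >> n
R = tsqr_r(A, block_rows=2_500)
R_ref = np.linalg.qr(A, mode="r")
# R is unique only up to row signs, so compare magnitudes against LAPACK's result.
assert np.allclose(np.abs(R), np.abs(R_ref))
```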

Citation Context

...fic problems have been successfully solved thanks to the use of computational grids (or, simply, grids). These problems cover a wide range of scientific disciplines including biology (protein folding [28]), medicine (cure muscular dystrophy [9]), financial modeling, earthquake simulation, and climate/weather modeling. Such scientific breakthroughs have relied on the tremendous processing power provide...

Human-aided computing: Utilizing implicit human processing to classify images

by Pradeep Shenoy, Desney S. Tan - In ACM CHI, 2008
"... In this paper, we present Human-Aided Computing, an approach that uses an electroencephalograph (EEG) device to measure the presence and outcomes of implicit cognitive processing, processing that users perform automatically and may not even be aware of. We describe a classification system and presen ..."
Abstract - Cited by 20 (2 self) - Add to MetaCart
In this paper, we present Human-Aided Computing, an approach that uses an electroencephalograph (EEG) device to measure the presence and outcomes of implicit cognitive processing, processing that users perform automatically and may not even be aware of. We describe a classification system and present results from two experiments as proof-of-concept. Results from the first experiment showed that our system could classify whether a user was looking at an image of a face or not, even when the user was not explicitly trying to make this determination. Results from the second experiment extended this to animals and inanimate object categories as well, suggesting generality beyond face recognition. We further show that we can improve classification accuracies if we show images multiple times, potentially to multiple people, attaining well above 90% classification accuracies with even just ten presentations.
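
A back-of-envelope model (mine, not the paper's classifier) shows why repeated presentations help: if each presentation yields an independent vote that is correct with probability p, a majority vote over n presentations is correct with the binomial tail probability.

```python
# Majority-vote accuracy over n independent presentations, each correct
# with probability p; even ties count as failures (strict majority required).

from math import comb

def majority_accuracy(p, n):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 3, 5, 10):
    print(n, round(majority_accuracy(0.7, n), 3))  # 0.7, 0.784, 0.837, 0.85
```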

Citation Context

... to find cures for diseases [17], World Community Grid (www.worldcommunitygrid.org) aims to create the world’s largest public computing grid, and distributed.net focuses on using these cycles to break cryptographic ciphers. We belie...

Computing Low Latency Batches with Unreliable Workers in Volunteer Computing Environments

by Eric Martin Heien, David P. Anderson, Kenichi Hagihara - Journal of Grid Computing, 2009
"... Internet based volunteer computing projects such as SETI@home are currently restricted to performing coarse grained, embarrassingly parallel master-worker style tasks. This is partly due to the “pull” nature of task distribution in volunteer computing environments, where workers request tasks from ..."
Abstract - Cited by 20 (1 self) - Add to MetaCart
Internet based volunteer computing projects such as SETI@home are currently restricted to performing coarse grained, embarrassingly parallel master-worker style tasks. This is partly due to the “pull” nature of task distribution in volunteer computing environments, where workers request tasks from the master rather than the master assigning tasks to arbitrary workers. In this paper we propose algorithms for computing batches of medium grained tasks with deadlines in pull-style volunteer computing environments. We develop models of unreliable workers based on analysis of trace data from an actual volunteer computing project. These models are used to develop algorithms for task distribution in volunteer computing systems with a high probability of meeting batch deadlines. We develop algorithms for perfectly reliable workers, computation-reliable workers and unreliable
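
The redundancy trade-off such schedulers navigate can be illustrated with a deliberately simplified i.i.d. model; the paper's algorithms instead derive worker models from real project traces. If each task instance independently returns by the deadline with probability r, the replica count per task follows from a binomial tail bound.

```python
# How many redundant instances k per task so that a task completes on time
# with probability >= p_task, and what that implies for a whole batch.

from math import ceil, log

def instances_needed(r, p_task):
    # P(at least one of k instances finishes) = 1 - (1 - r)^k >= p_task
    return ceil(log(1 - p_task) / log(1 - r))

def batch_success(r, p_task, n_tasks):
    k = instances_needed(r, p_task)
    # Every one of the n_tasks must finish for the batch to meet its deadline.
    return k, (1 - (1 - r) ** k) ** n_tasks

k, prob = batch_success(r=0.8, p_task=0.999, n_tasks=100)
print(f"{k} instances/task -> batch succeeds with probability {prob:.3f}")
# 5 instances/task -> batch succeeds with probability 0.969
```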