Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks





The vertices provided by the application developer are quite simple and are usually written as sequential programs with no thread creation or locking. Concurrency arises from Dryad scheduling vertices to run simultaneously on multiple computers, or on multiple CPU cores within a computer.

Dryad is designed to scale from powerful multi-core single computers, through small clusters of computers, to data centers with thousands of computers.

We are motivated both by the emergence of large-scale internet services that depend on clusters of hundreds or thousands of general-purpose servers, and also by the prediction that future advances in local computing power will come from increasing the number of cores on a chip rather than improving the speed or instruction-level parallelism of a single core [3]. Both of these scenarios involve resources that are in a single administrative domain, connected using a known, high-performance communication topology, under centralized management and control.

In such cases many of the hard problems that arise in wide-area distributed systems may be sidestepped: these include high-latency and unreliable networks, control of resources by separate federated or competing entities, and issues of identity for authentication and access control. For many resource-intensive applications, the simplest way to achieve scalable performance is to exploit data parallelism.

There has historically been a great deal of work in the parallel computing community both on systems that automatically discover and exploit parallelism in sequential programs, and on those that require the developer to explicitly expose the data dependencies of a computation. There are still limitations to the power of fully-automatic parallelization, and so we build mainly on ideas from the latter research tradition.

GPU shader languages, MapReduce, and parallel databases have all demonstrated great success, in that large numbers of developers have been able to write concurrent software that is reliably executed in a distributed fashion. We believe that a major reason for the success of these three approaches is that the developer is explicitly forced to consider the data parallelism of the computation.

Once an application is cast into this framework, the system is automatically able to provide the necessary scheduling and distribution. The system runtime abstracts these issues from the developer, and also deals with many of the hardest distributed computing problems, most notably resource allocation, scheduling, and the transient or permanent failure of a subset of components in the system.

Finally, developers now work at a suitable level of abstraction for writing scalable applications since the resources available at execution time are not generally known at the time the code is written. MapReduce was designed to be accessible to the widest possible class of developers, and therefore aims for simplicity at the expense of generality and performance. Parallel databases were designed for relational algebra manipulations (e.g. SQL) where the communication graph is implicit. Dryad is notable for allowing graph vertices (and computations in general) to use an arbitrary number of inputs and outputs.

MapReduce restricts all computations to take a single input set and generate a single output set. In order to get the best performance from a native Dryad application, the developer must understand the structure of the computation and the organization and properties of the system resources. Dryad was however designed to be a suitable infrastructure on which to layer simpler, higher-level programming models. These rely on Dryad to manage the complexities of distribution, scheduling, and fault-tolerance, but hide many of the details of the underlying system from the application developer.

They use heuristics to automatically select and tune appropriate Dryad features, and thereby get good performance for most simple applications. The next three sections describe the abstract form of a Dryad application and outline the steps involved in writing one.

The Dryad scheduler is described in Section 5; it handles all of the work of deciding which physical resources to schedule work on, routing data between computations, and automatically reacting to computer and network failures.

We conclude in Sections 8 and 9 with a discussion of the related literature and of future research directions.

A job is a directed acyclic graph where each vertex is a program and edges represent data channels. It is a logical computation graph that is automatically mapped onto physical resources by the runtime. In particular, there may be many more vertices in the graph than execution cores in the computing cluster.
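
To make the graph model concrete, here is a minimal, self-contained C++ sketch of a job as a DAG of vertices and channels. The names Vertex, Channel, and JobGraph are hypothetical illustrations, not the actual Dryad API; the example simply shows a logical graph with far more vertices than a small cluster has cores.

    // Illustrative sketch only: a minimal in-memory model of a Dryad-style job
    // graph. Vertex, Channel and JobGraph are hypothetical names, not part of
    // the real Dryad interfaces.
    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Vertex {
        std::string program;   // the sequential program this vertex runs
    };

    struct Channel {
        std::size_t src;       // producing vertex
        std::size_t dst;       // consuming vertex
    };

    struct JobGraph {
        std::vector<Vertex> vertices;
        std::vector<Channel> channels;

        std::size_t addVertex(const std::string& program) {
            vertices.push_back({program});
            return vertices.size() - 1;
        }
        void connect(std::size_t src, std::size_t dst) {
            channels.push_back({src, dst});   // directed edge: data flows src -> dst
        }
    };

    int main() {
        // A logical graph with far more vertices than a small cluster has cores:
        // 1000 independent "map" vertices all feeding one "reduce" vertex.
        JobGraph g;
        std::size_t reduce = g.addVertex("reduce");
        for (int i = 0; i < 1000; ++i) {
            g.connect(g.addVertex("map"), reduce);
        }
        std::cout << g.vertices.size() << " vertices, "
                  << g.channels.size() << " channels\n";
    }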

As far as the program in each vertex is concerned, channels produce and consume heap objects that inherit from a base type. The Dryad system does not include any native data model for serialization and the concrete type of an item is left entirely up to applications, which can supply their own serialization and deserialization routines.
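
As an illustration of this item model, the following sketch shows a base item type with an application-supplied serialization routine and one simple concrete type for newline-terminated text. DataItem and LineItem are hypothetical names chosen for the example; they are not the library types the paper refers to.

    // Illustrative sketch only: channel items as heap objects inheriting from a
    // base type, with serialization left to the application. DataItem and
    // LineItem are hypothetical names, not the real Dryad types.
    #include <iostream>
    #include <memory>
    #include <string>
    #include <utility>

    struct DataItem {
        virtual ~DataItem() = default;
        // The application chooses the concrete on-the-wire format.
        virtual void serialize(std::ostream& out) const = 0;
    };

    // A simple concrete item: one newline-terminated line of text.
    struct LineItem : DataItem {
        std::string text;
        explicit LineItem(std::string t) : text(std::move(t)) {}
        void serialize(std::ostream& out) const override { out << text << '\n'; }
    };

    int main() {
        std::unique_ptr<DataItem> item = std::make_unique<LineItem>("hello, dryad");
        item->serialize(std::cout);   // stdout stands in for a channel here
    }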

In practice most applications use one of a small set of library item types that we supply, such as newline-terminated text strings and tuples of base types.

A schematic of the Dryad system organization is shown in Figure 1. All data is sent directly between vertices and thus the job manager is only responsible for control decisions and is not a bottleneck for any data transfers.

The job manager (JM) consults the name server (NS) to discover the list of available computers. It maintains the job graph and schedules running vertices (V) as computers become available, using the daemon (D) as a proxy. The shaded bar indicates the vertices in the job that are currently running. The cluster has a name server (NS) that can be used to enumerate all the available computers. The name server also exposes the position of each computer within the network topology so that scheduling decisions can take account of locality.

There is a simple daemon (D) running on each computer in the cluster that is responsible for creating processes on behalf of the job manager. The daemon acts as a proxy so that the job manager can communicate with the remote vertices and monitor the state of the computation and how much data has been read and written on its channels.
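
The control flow just described can be summarized in a short sketch: the job manager asks the name server for available computers and hands runnable vertices to the per-computer daemons. The class names and the simple assignment rule below are assumptions made for the example, not the actual Dryad scheduling policy.

    // Illustrative sketch only: the JM consults the NS for computers and
    // dispatches runnable vertices through daemons. NameServer, Daemon and
    // JobManager are hypothetical stand-ins; the assignment policy is an
    // assumption for the example.
    #include <iostream>
    #include <queue>
    #include <string>
    #include <vector>

    struct NameServer {
        // Enumerate the available computers (the real name server also exposes
        // network topology so scheduling can take account of locality).
        std::vector<std::string> availableComputers() const {
            return {"node01", "node02", "node03"};
        }
    };

    struct Daemon {
        // The daemon creates a process for the vertex on its computer and acts
        // as a proxy for monitoring; here it just reports what it would do.
        void runVertex(const std::string& computer, int vertexId) const {
            std::cout << "daemon on " << computer
                      << ": starting vertex " << vertexId << '\n';
        }
    };

    struct JobManager {
        std::queue<int> runnable;   // vertices whose inputs are ready

        void schedule(const NameServer& ns, const Daemon& d) {
            for (const auto& computer : ns.availableComputers()) {
                if (runnable.empty()) break;
                d.runVertex(computer, runnable.front());
                runnable.pop();
            }
        }
    };

    int main() {
        NameServer ns;
        Daemon d;
        JobManager jm;
        for (int v = 0; v < 5; ++v) jm.runnable.push(v);
        jm.schedule(ns, d);   // only as many vertices start as there are computers
    }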

It is straightforward to run a name server and a set of daemons on a user workstation to simulate a cluster and thus run an entire job locally while debugging. A simple task scheduler is used to queue batch jobs. We chose the most time-consuming query, Q18, from a published study based on this database [23]. The query can be expressed in SQL as: select distinct p.


Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks

Here the main focus is on throughput and on automatic management of scheduling, distribution, and fault tolerance, rather than on latency. The computation is expressed as a graph: the vertices are computations, the edges are communication channels, and each vertex can have several input and output edges (see the sketch below). When a program can be expressed as an acyclic graph, that is, a distributed dataflow graph, Dryad will run it for you. The Dryad runtime comprises several components that integrate together to run a program.
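
To show what a single vertex looks like from the developer's side, here is a small sketch of a vertex with several inputs and several outputs, written as an ordinary sequential program with no threads or locks. The Channel representation and the hash-partitioning rule are assumptions made for this example, not Dryad's API.

    // Illustrative sketch only: a vertex as a plain sequential program that
    // reads several input channels and writes several output channels. The
    // Channel alias and partitioning rule are assumptions for the example.
    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    using Channel = std::vector<std::string>;   // stand-in for a data channel

    // Merge every input line and partition the lines across the outputs by hash.
    void mergeAndPartition(const std::vector<Channel>& inputs,
                           std::vector<Channel>& outputs) {
        std::hash<std::string> h;
        for (const auto& in : inputs) {
            for (const auto& line : in) {
                outputs[h(line) % outputs.size()].push_back(line);
            }
        }
    }

    int main() {
        std::vector<Channel> inputs = {{"a", "b"}, {"c", "d"}};
        std::vector<Channel> outputs(3);
        mergeAndPartition(inputs, outputs);   // no thread creation, no locking
        for (std::size_t i = 0; i < outputs.size(); ++i) {
            std::cout << "output " << i << ": " << outputs[i].size() << " items\n";
        }
    }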


Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks

Proceedings of the EuroSys Conference, March 2007.

Dryad is a general-purpose distributed execution engine for coarse-grain data-parallel applications. A Dryad application combines computational "vertices" with communication "channels" to form a dataflow graph. Dryad runs the application by executing the vertices of this graph on a set of available computers, communicating as appropriate through files, TCP pipes, and shared-memory FIFOs. The vertices provided by the application developer are quite simple and are usually written as sequential programs with no thread creation or locking. Concurrency arises from Dryad scheduling vertices to run simultaneously on multiple computers, or on multiple CPU cores within a computer. The application can discover the size and placement of data at run time, and modify the graph as the computation progresses to make efficient use of the available resources.
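
The abstract lists three channel transports: files, TCP pipes, and shared-memory FIFOs. The sketch below illustrates one plausible way a runtime might choose among them based on where a channel's endpoints run; the selection rule is an assumption made for illustration, not the policy Dryad actually uses.

    // Illustrative sketch only: picking a channel transport. The rule below
    // (persistent -> file, same machine -> shared-memory FIFO, otherwise TCP)
    // is an assumption for the example, not Dryad's actual policy.
    #include <iostream>
    #include <string>

    enum class Transport { File, TcpPipe, SharedMemoryFifo };

    const char* name(Transport t) {
        switch (t) {
            case Transport::File:             return "file";
            case Transport::TcpPipe:          return "TCP pipe";
            case Transport::SharedMemoryFifo: return "shared-memory FIFO";
        }
        return "unknown";
    }

    Transport chooseTransport(const std::string& srcComputer,
                              const std::string& dstComputer,
                              bool persistent) {
        if (persistent) return Transport::File;                  // keep data after the vertices exit
        if (srcComputer == dstComputer) return Transport::SharedMemoryFifo;
        return Transport::TcpPipe;                               // streamed between machines
    }

    int main() {
        std::cout << name(chooseTransport("node01", "node01", false)) << '\n';  // shared-memory FIFO
        std::cout << name(chooseTransport("node01", "node02", false)) << '\n';  // TCP pipe
        std::cout << name(chooseTransport("node01", "node02", true))  << '\n';  // file
    }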
