Thinking in Parallel: Some Basic Data-Parallel Algorithms and Techniques

Uzi Vishkin
October 12, 2010

Copyright 1992-2009, Uzi Vishkin. These class notes reflect the theoretical part in the Parallel Algorithms course at UMD. The parallel programming part, and its computer architecture context within the PRAM-On-Chip Explicit Multi-Threading (XMT) platform, is provided through the XMT home page www.umiacs.umd.edu/users/vishkin/XMT and the class home page. Comments are welcome: please write to me using my last name at umd.edu.

1. Preface - A Case for Studying Parallel Algorithmics

1.1. Summary

We start with two kinds of justification, and proceed to a suggestion:

• Basic Need. Technological difficulties coupled with fundamental physical limitations will continue to lead computer designers into introducing an increasing amount of parallelism to essentially all the machines that they are building. Given a computational task, one important performance goal is faster completion time. In addition to this "single task completion time" objective, at least one other performance objective is also very important: increasing the number of computational tasks that can be completed within a given time window. The latter "task throughput" objective is not addressed in the current notes. There are several ways in which machine parallelism can help in improving single task completion time. It would be ideal if an existing program could be translated, using compiler methods, into one that effectively utilizes machine parallelism. Following decades of research, and some significant yet overall limited accomplishments, it is quite clear that, in general, such compiler methods are insufficient. Given a standard serial program, written in a serial performance language such as C, a fundamental problem for which compiler methods have fallen short is the extraction of parallelism: namely, deriving from a program many operations that could be executed concurrently.
An effective way of getting around this problem is to have the programmer conceptualize the parallelism in the algorithm at hand and express the concurrency the algorithm permits in a computer program that allows such expression.

• Methodology - the system of methods and principles is new. Parallelism is a concern that is missing from "traditional" algorithmic design. Unfortunately, it turns out that most efficient serial data structures and quite a few serial algorithms provide rather inefficient parallel algorithms. The design of parallel algorithms and data structures, or even the redesign of existing algorithms and data structures for parallelism, requires new paradigms and techniques. These notes attempt to provide a short guided tour of some of the new concepts, at a level and scope which make it possible for inclusion as early as in an undergraduate curriculum in computer science and engineering.

• Suggestion - where to teach this? We suggest incorporating the design for parallelism of algorithms and data structures in the basic computer science and engineering curriculum. Turing award winner N. Wirth entitled one of his books: Algorithms + Data Structures = Programs. Instead of the current practice, where computer science and engineering students are taught to be in charge of incorporating data structures in order to serialize an algorithm, they will be in charge of expressing its parallelism. Even this relatively modest goal of expressing parallelism which is inherent in an existing ("serial") algorithm requires non-trivial understanding. The current notes seek to provide such understanding. Since algorithmic design for parallelism involves "first principles" that cannot be derived from other areas, we further suggest including this topic in the standard curriculum for a bachelor degree in computer science and engineering, perhaps as a component in one of the courses on algorithms and data structures.
To sharpen the above statements on the basic need, we consider two notions: machine parallelism and algorithm parallelism.

Machine parallelism - Each possible state of a computer system, sometimes called its instantaneous description, can be presented by listing the contents of all its data cells, where data cells include memory cells and registers. For instance, pipelining with, say, s single-cycle stages may be described by associating a data cell with each stage; all s cells may change in a single cycle of the machine. More generally, a transition function may describe all possible changes of data cells that can occur in a single cycle; the set of data cells that change in a cycle defines the machine parallelism of the cycle; a machine is literally serial if the size of this set never exceeds one. Machine parallelism comes in such forms as: (1) processor parallelism (a machine with several processors); (2) pipelining; or (3) in connection with the Very-Long Instruction Word (VLIW) technology, to mention just a few. We claim that literally serial machines hardly exist and that considerable increase in machine parallelism is to be expected.

Parallel algorithms - We will focus our attention on the design and analysis of efficient parallel algorithms within the Work-Depth (WD) model of parallel computation. The main methodological goal of these notes is to cope with the ill-defined goal of educating the reader to "think in parallel". For this purpose, we outline an informal model of computation, called Informal Work-Depth (IWD). The presentation reaches this important model of computation at a relatively late stage, when the reader is ready for it. There is no inconsistency between the centrality of the IWD and the focus on the WD, as explained next. WD allows presenting algorithmic methods and paradigms, including their complexity analysis, in a rigorous manner, while IWD will be used for outlining ideas and high-level descriptions.
The following two interrelated contexts may explain why the IWD model may be more robust than any particular WD model.

(i) Direct hardware implementation of some routines. It may be possible to implement some routines, such as performing the sum or prefix-sum of n variables, within the same performance bounds as simply "reading these variables". A reasonable rule-of-thumb for selecting a programmer's model for parallel computation might be to start with some model that includes primitives which are considered essential, and then augment it with useful primitives, as long as the cost of implementing them effectively does not increase the cost of implementing the original model.

(ii) The ongoing evolution of programming languages. Development of facilities for expressing parallelism is an important driving force there; popular programming languages (such as C and Fortran) are being augmented with constructs for this purpose. Constructs offered by a language may affect the programmer's view on his/her model of computation, and are likely to fit better the more loosely defined IWD. See reference to Fetch-and-Add and Prefix-Sum constructs later.

1.2. More background and detail

A legacy of traditional computer science has been to seek appropriate levels of abstraction. But why have abstractions worked? To what extent does the introduction of abstractions between the user and the machine reduce the available computational capability? Following the ingenious insights of Alan Turing, in 1936, where he showed the existence of a universal computing machine that can simulate any computing machine, we emphasize high-level computer languages. Such languages are much more convenient to human beings than are machine languages, whose instructions consist of sequences of zeros and ones that the machine can execute. Programs written in high-level languages can be translated into machine languages to yield the desired results without sacrificing expressive power.
Usually, the overheads involved are minimal and could be offset only by very sophisticated machine language programs, and even then only after an overwhelming investment in human time. In a nutshell, this manuscript is all about seeking and developing proper levels of abstraction for designing parallel algorithms and reasoning about their performance and correctness.

We suggest that, based on the state of the art, the Work-Depth model has to be a standard programmer's model for any successful general-purpose parallel machine. In other words, our assertion implies that a general-purpose parallel machine cannot be successful unless it can be effectively programmed using the Work-Depth programmer's model. This does not mean that there will not be other styles of programming, or models of parallel computation, which some, or all, of these computer systems will support. The author has predicted in several position papers since the early 1980's that the strongest non-parallel machine will continue in the future to outperform, as a general-purpose machine, any parallel machine that does not support the Work-Depth model. Indeed, currently there is no other parallel programming model which is a serious contender, primarily since no other model enables solving nearly as many problems as the Work-Depth model.

However, a skeptical reader may wonder: why should Work-Depth be a preferred programmer's model? We base our answer to this question on experience. For nearly thirty years, numerous researchers have asked this very question, and quite a few alternative models of parallel computation have been suggested. Thousands of research papers were published with algorithms for these models. This exciting research experience can be summarized as follows:

• Unique knowledge-base. The knowledge-base on Work-Depth (or PRAM) algorithms exceeds by an order of magnitude any knowledge-base of parallel algorithms within any other model.
Paradigms and techniques that have been developed have led to efficient and fast parallel algorithms for numerous problems. This applies to a diversity of areas, including data structures, computational geometry, graph problems, pattern matching, arithmetic computations and comparison problems. This provides overwhelming circumstantial evidence for the unique importance of Work-Depth algorithms.

• Simplicity. A majority of the users of a future general-purpose parallel computer would like, and/or need, the convenience of a simple programmer's model, since they will not have the time to master advanced, complex computer science skills. Designing algorithms and developing computer programs is an intellectually demanding and time-consuming job. Overall, the time for doing so represents the most expensive component in using computers for applications. This truism applies to parallel algorithms, parallel programming and parallel computers as well. The relative simplicity of the Work-Depth model is one of the main reasons for its broad appeal. The Work-Depth (or PRAM) model of computation strips away levels of algorithmic complexity concerning synchronization, reliability, data locality, machine connectivity, and communication contention, and thereby allows the algorithm designer to focus on the fundamental computational difficulties of the problem at hand. Indeed, the result has been a substantial number of efficient algorithms designed in this model, as well as design paradigms and utilities for designing such algorithms.

• Reusability. All generations of an evolutionary development of parallel machines must support a single robust programmer's model. If such a model cannot be promised, the whole development risks immediate failure, because of the following.
Imagine a computer industry decision maker who considers whether to invest several human-years in writing code for some computer application using a certain parallel programming language (or to stick to his/her existing serial code). By the time the code development will have been finished, the language is likely to have become, or be about to become, obsolete. The only reasonable business decision under these circumstances is simply not to do it. Machines that do not support a robust parallel programming language are likely to remain an academic exercise, since, from the industry perspective, the test for successful parallel computers is their continued usability. At the present time, a "Work-Depth-related" programming language is the only serious candidate for playing the role of such a robust programming language.

• To get started. Some sophisticated programmers of parallel machines are willing to tune an algorithm that they design to the specifics of a certain parallel machine. The following methodology has become common practice among such programmers: start with a Work-Depth algorithm for the problem under consideration and advance from there.

• Performance prediction. This point of performance prediction needs clarification, since the use of the Work-Depth model for performance prediction of a buildable architecture is being developed concurrently with the current version of this publication. To make sure that the current paragraph remains current, we refer the interested reader to the home page for the PRAM-On-Chip project at the University of Maryland. A pointer is provided above.

• Formal emulation. Early work has shown Work-Depth algorithms to be formally emulatable on high-interconnect machines, and formal machine designs that support a large number of virtual processes can, in fact, give a speedup that approaches the number of processors for some sufficiently large problems. Some new machine designs are aimed at realizing idealizations that support pipelined, virtual unit-time access of the Work-Depth model.
Note that in the context of serial computation, which has of course been a tremendous success story (the whole computer era), all the above points can be attributed to the serial random-access-machine (RAM) model of serial computation, which is arguably a "standard programmer's model" for a general-purpose serial machine. We finish with commenting on what appears to be a common misconception:

• Misconception: The Work-Depth model, or the closely related PRAM model, are machine models. These models are only meant to be convenient programmer's models; in other words, design your algorithms for the Work-Depth, or the PRAM, model; use the algorithm to develop a computer program in an appropriate language; the machine software will later take your code and translate it to result in an effective use of a machine.

Other approaches. The approach advocated here for taking advantage of machine parallelism is certainly not the only one that has been proposed. Below, two more approaches are noted: (i) Let compilers do it. A widely studied approach for taking advantage of such parallelism is through automatic parallelization, where a compiler attempts to find parallelism, typically in programs written in a conventional language, such as C. As appealing as it may seem, this approach has not worked well in practice even for simpler languages such as Fortran. (ii) Parallel programming not through parallel algorithms. This hands-on mode of operation has been used primarily for the programming of massively parallel processors. A parallel program is often derived from a serial one through a multi-stage effort by the programmer. This multi-stage effort tends to be rather involved, since it targets a "coarse-grained" parallel system that requires decomposition of the execution into relatively large "chunks". See, for example, Culler and Singh's book on parallel computer architectures [CS99]. Many attribute the programming difficulty of such machines to this methodology.
In contrast, the approach presented in this text is much more similar to the serial approach as taught to computer science and engineering students. As many readers of this text would recognize, courses on algorithms and data structures are standard in practically any undergraduate computer science and engineering curriculum and are considered a critical component in the education of programming. Envisioning a curriculum that addresses parallel computing, this manuscript could provide its basic algorithms component. However, it should be noted that the approach presented in the current text does not necessarily provide a good alternative for parallel systems which are too coarse-grained.

2. Introduction

We start with describing a model of computation which is called the parallel random-access machine (PRAM). Besides its historical role, as the model for which many parallel algorithms were originally written, it is easy to understand its assumptions. We then proceed to describe the Work-Depth (WD) model, which is essentially equivalent to the PRAM model. The WD model, which is more convenient for describing parallel algorithms, is the principal model for presenting parallel algorithms in these notes.

2.1. The PRAM Model

We review the basics of the PRAM model. A PRAM employs p synchronous processors, all having unit-time access to a shared memory. Each processor also has a local memory. See Figure 1. At each time unit a processor can write into the shared memory (i.e., copy one of its local memory registers into a shared memory cell), read from the shared memory (i.e., copy a shared memory cell into one of its local memory registers), or do some computation with respect to its local memory. We will avoid entering this level of detail in describing PRAM algorithms.
That is, an instruction of the form

  processor i: c := a + b

where a, b and c are shared memory locations, will be a short form for instructing processor i to: first, copy location a into its local memory; second, copy location b into its local memory; third, add them; and, fourth, write the result into location c. This paragraph is a first example of selecting a level of abstraction, which, as noted before, is an important theme in this manuscript.

[Figure 1: Processors P_1, P_2, ..., P_n connected to a shared memory.]

There are a variety of rules for resolving access conflicts to the same shared memory location. The most common are exclusive-read exclusive-write (EREW), concurrent-read exclusive-write (CREW), and concurrent-read concurrent-write (CRCW), giving rise to several PRAM models. An EREW PRAM does not allow simultaneous access by more than one processor to the same memory location for read or write purposes, while a CREW PRAM allows concurrent access for reads but not for writes, and a CRCW PRAM allows concurrent access for both reads and writes. We shall assume that in a concurrent-write model, an arbitrary processor among the processors attempting to write into a common memory location succeeds. This is called the Arbitrary CRCW rule. There are two alternative CRCW rules: (i) by the Priority CRCW rule, the smallest-numbered among the processors attempting to write into a common memory location actually succeeds; (ii) the Common CRCW rule allows concurrent writes only when all the processors attempting to write into a common memory location are trying to write the same value. For concreteness, we proceed to an example of a PRAM algorithm.
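As an aside before the example: the three concurrent-write rules can be made concrete with a short sketch. The following Python helper is purely illustrative; the function name and the representation of write attempts are mine, not the notes'.

```python
import random

def resolve_crcw(attempts, rule="arbitrary"):
    """Resolve concurrent writes to a single shared memory cell.

    `attempts` is a list of (processor_id, value) pairs, all targeting
    the same location in the same PRAM time unit.
    """
    if rule == "arbitrary":
        # Arbitrary CRCW: some attempting processor succeeds; which one
        # is unspecified, so an algorithm must be correct for any choice.
        return random.choice(attempts)[1]
    if rule == "priority":
        # Priority CRCW: the smallest-numbered processor succeeds.
        return min(attempts)[1]
    if rule == "common":
        # Common CRCW: legal only if all attempted values are identical.
        values = {v for _, v in attempts}
        if len(values) != 1:
            raise ValueError("Common CRCW requires identical values")
        return values.pop()
    raise ValueError("unknown rule")

writes = [(3, "x"), (1, "y"), (7, "x")]
print(resolve_crcw(writes, "priority"))  # prints y (processor 1 wins)
```

The random choice in the Arbitrary case models exactly the obligation the rule places on the algorithm designer: correctness may not depend on which attempting processor wins.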
However, before doing this we present the pardo "programming construct", which is heavily used in these notes to express operations that are performed in parallel:

- for P_i, 1 ≤ i ≤ n pardo
-   A(i) := B(i)

This means that the following n operations are performed concurrently: processor P_1 assigns B(1) into A(1), processor P_2 assigns B(2) into A(2), and so on.

2.1.1. Example of a PRAM algorithm

The summation problem
Input: An array A = A(1), ..., A(n) of n numbers.
The problem is to compute A(1) + ... + A(n).

The summation algorithm below works in rounds. In each round, add, in parallel, pairs of elements as follows: add each odd-numbered element and its successive even-numbered element. For example, assume that n = 8; then the outcome of the first round is

A(1)+A(2), A(3)+A(4), A(5)+A(6), A(7)+A(8)

the outcome of the second round is

A(1)+A(2)+A(3)+A(4), A(5)+A(6)+A(7)+A(8)

and the outcome of the third round is

A(1)+A(2)+A(3)+A(4)+A(5)+A(6)+A(7)+A(8)

which is the sum that we seek. A detailed PRAM description of this "pairwise summation" algorithm follows.

For simplicity, assume that: (i) we are given a two-dimensional array B (whose entries are B(h,i), 0 ≤ h ≤ log n and 1 ≤ i ≤ n/2^h) for storing all intermediate steps of the computation, and (ii) n = 2^l for some integer l.

ALGORITHM 1 (Summation)
1. for P_i, 1 ≤ i ≤ n pardo
2.   B(0,i) := A(i)
3.   for h := 1 to log n do
4.     if i ≤ n/2^h
5.       then B(h,i) := B(h−1,2i−1) + B(h−1,2i)
6.       else stay idle
7.   for i = 1: output B(log n,1); for i > 1: stay idle

See Figure 2. Algorithm 1 uses p = n processors. Line 2 takes one round, line 3 defines a loop taking log n rounds, and line 7 takes one round. Since each round takes constant time, Algorithm 1 runs in O(log n) time.
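As a sanity check, the rounds of Algorithm 1 can be simulated serially. The Python sketch below is my illustration, not part of the notes; each list comprehension stands in for one parallel round, in which every iteration would be one processor's work.

```python
import math

def pram_summation(A):
    """Simulate Algorithm 1 (pairwise summation) for n = 2**l inputs.

    B[h][i] holds the partial sums after round h; B[log n][1] is the
    total. 1-based indexing is kept by padding index 0 with None.
    """
    n = len(A)
    logn = int(math.log2(n))
    assert 2 ** logn == n, "the algorithm assumes n is a power of two"
    # Round 0 (line 2): copy the input into level 0 of B.
    B = [[None] + list(A)]
    # Rounds 1..log n (lines 3-6): halve the number of partial sums.
    for h in range(1, logn + 1):
        level = [None] + [B[h-1][2*i - 1] + B[h-1][2*i]
                          for i in range(1, n // 2**h + 1)]
        B.append(level)
    # Line 7: processor 1 outputs B(log n, 1).
    return B[logn][1]

print(pram_summation([1, 2, 3, 4, 5, 6, 7, 8]))  # → 36
```

For n = 8 the three levels built by the loop match the three rounds listed above.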
So, an algorithm in the PRAM model is presented in terms of a sequence of parallel time units (or "rounds", or "pulses"); we allow p instructions to be performed at each time unit, one per processor; this means that a time unit consists of a sequence of exactly p instructions to be performed concurrently. See Figure 3. (Unless otherwise stated, the base of logarithms is always assumed to be 2.)

[Figure 2: Summation on an n = 8 processor PRAM: a balanced binary tree with leaves B(0,i) = A(i), internal nodes B(h,i), and root B(3,1).]

[Figure 3: Standard PRAM mode: in each of the t steps of an algorithm, exactly p operations, arranged in a sequence, are performed.]

We refer to such a presentation as the standard PRAM mode. The standard PRAM mode has a few drawbacks: (i) It does not reveal how the algorithm will run on PRAMs with a different number of processors; specifically, it does not tell to what extent more processors will speed the computation, or fewer processors will slow it. (ii) Fully specifying the allocation of instructions to processors requires a level of detail which might be unnecessary (since a compiler can extract it automatically; see the WD-presentation sufficiency theorem below).

2.2. Work-Depth presentation of algorithms

An alternative model, which is actually an alternative presentation mode, called Work-Depth, is outlined next. Work-Depth algorithms are also presented in terms of a sequence of parallel time units (or "rounds", or "pulses"); however, each time unit consists of a sequence of instructions to be performed concurrently, and the sequence may include any number of instructions. See Figure 4.

[Figure 4: WD mode: in each of the t steps of an algorithm, as many operations as needed by the algorithm, arranged in a sequence, are performed.]

Comment on rigor.
Strictly speaking, WD actually defines a slightly different model of computation. Consider an instruction of the form

- for i, 1 ≤ i ≤ α pardo
-   A(i) := B(C(i))

where the time unit under consideration consists of a sequence of α concurrent instructions, for some positive integer α. Models such as Common CRCW WD, Arbitrary CRCW WD, or Priority CRCW WD are defined as their respective PRAM counterparts with α processors. We explain below why these WD models are essentially equivalent to their PRAM counterparts, and therefore treat WD only as a separate presentation mode and suppress the fact that these are actually (slightly) different models. The only additional assumption, which we make for proving these equivalences, is as follows: in case the same variable is accessed for both reads and writes in the same time unit, all the reads precede all the writes.

The summation example (cont'd). We give a WD presentation for a summation algorithm. It is interesting to note that Algorithm 2, below, highlights the following "greedy-parallelism" attribute in the summation algorithm: at each point in time the summation algorithm seeks to break the problem into as many pairwise additions as possible, or, in other words, into the largest possible number of independent tasks that can be performed concurrently. We make the same assumptions as for Algorithm 1, above.

ALGORITHM 2 (WD-Summation)
1. for i, 1 ≤ i ≤ n pardo
2.   B(0,i) := A(i)
3. for h := 1 to log n
4.   for i, 1 ≤ i ≤ n/2^h pardo
5.     B(h,i) := B(h−1,2i−1) + B(h−1,2i)
6. for i = 1 pardo output B(log n,1)

The first round of the algorithm (lines 1 and 2) has n operations. The second round (lines 4 and 5 for h = 1) has n/2 operations. The third round (lines 4 and 5 for h = 2) has n/4 operations. In general, the k-th round of the algorithm, 1 ≤ k ≤ log n + 1, has n/2^(k−1) operations, and round log n + 2 (line 6) has one more operation (note that the use of a pardo instruction in line 6 is somewhat artificial).
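The per-round operation counts just listed can be tallied mechanically. The short sketch below is my own code (assuming, as the notes do, that n is a power of two); it reproduces the sequence n, n/2, n/4, ..., 1, 1.

```python
import math

def wd_summation_profile(n):
    """Per-round operation counts of Algorithm 2 for n = 2**l inputs.

    Round 1 copies n values; round k (2 <= k <= log n + 1) performs
    n / 2**(k-1) pairwise additions; round log n + 2 outputs the result.
    """
    logn = int(math.log2(n))
    assert 2 ** logn == n
    rounds = [n] + [n // 2**k for k in range(1, logn + 1)] + [1]
    work = sum(rounds)      # total number of operations, W(n)
    depth = len(rounds)     # parallel time, T(n) = log n + 2
    return rounds, work, depth

rounds, work, depth = wd_summation_profile(8)
print(rounds)        # → [8, 4, 2, 1, 1]
print(work, depth)   # → 16 5
```

For n = 8 this gives total work 16 = 2n and depth 5 = log n + 2, the totals used next.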
The total number of operations is 2n and the time is log n + 2. We will use this information in the corollary below. The next theorem demonstrates that the WD presentation mode does not suffer from the same drawbacks as the standard PRAM mode, and that every algorithm in the WD mode can be automatically translated into a PRAM algorithm.

The WD-presentation sufficiency Theorem. Consider an algorithm in the WD mode that takes a total of x = x(n) elementary operations and d = d(n) time. The algorithm can be implemented by any p = p(n)-processor PRAM within O(x/p + d) time, using the same concurrent-write convention as in the WD presentation.

Note that the theorem actually represents five separate theorems for five respective concurrent-read and concurrent-write models: EREW, CREW, Common CRCW, Arbitrary CRCW and Priority CRCW.

Proof of the Theorem. After explaining the notion of a round-robin simulation, we advance to proving the theorem. Given a sequence of y instructions inst_1, ..., inst_y, a round-robin simulation of these instructions by p processors P_1, ..., P_p means the following ⌈y/p⌉ rounds; see also Figure 5. In round 1, the first group of p instructions, inst_1, ..., inst_p, is performed in parallel: for each j, 1 ≤ j ≤ p, processor P_j performs instruction inst_j. In round 2, the second group of p instructions, inst_{p+1}, ..., inst_{2p}, is performed in parallel: for each j, 1 ≤ j ≤ p, processor P_j performs instruction inst_{j+p}. And so on, where in round ⌈y/p⌉ (the last round) some of the processors may stay idle if there are not enough instructions for all of them. The reader can also find a concrete demonstration of this important notion of round-robin simulation in Algorithm 2' below.

We are ready to proceed with the proof of the theorem. Let x_i denote the number
of instructions to be performed by the algorithm at round i. Note that, by the definition of x, x_1 + x_2 + ... + x_d = x.

[Figure 5: Round-robin emulation of y instructions by p processors in ⌈y/p⌉ rounds: in each of the first ⌈y/p⌉ − 1 rounds, p instructions are emulated, for a total of z = p(⌈y/p⌉ − 1) instructions; in round ⌈y/p⌉, the remaining y − z instructions are emulated, each by a processor, while the remaining w − y processors stay idle, where w = p⌈y/p⌉.]

The p processors can "simulate" round i of the algorithm in two stages. In the first stage, only the read instructions of round i are emulated; each read instruction causes storage into a temporary variable. This is done in ⌈x_i/p⌉ ≤ x_i/p + 1 time units, using a round-robin simulation of the x_i reads of round i by p processors. In the second stage, all the other instructions of round i (including write instructions) are performed in an additional ⌈x_i/p⌉ ≤ x_i/p + 1 time units, again using a round-robin simulation by p processors. Summing over all d rounds gives at most 2x/p + 2d = O(x/p + d) time, and the theorem follows. We leave it as an exercise to verify the above proof for the various concurrent-read and concurrent-write models. It can also be readily seen that a converse of the theorem holds true: simply consider a p-processor PRAM algorithm as being in the WD mode, with a sequence of p instructions at each time unit.

A Corollary (The summation example, cont'd). As a corollary of the theorem, we conclude that the summation algorithm, Algorithm 2, would run in O(n/p + log n) time on a p-processor PRAM. For p ≤ n/log n, this implies a running time of O(n/p), while for p ≥ n/log n, the implied running time is O(log n). Since the algorithm does not involve concurrent reads or writes, the p-processor algorithm can run on an EREW PRAM.

For concreteness, we demonstrate below the spirit of the above proof (of the WD-presentation sufficiency theorem) with respect to Algorithm 2, which was given in the WD-presentation mode. We show how to run it on a p-processor PRAM.
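A direct simulation conveys the same round-robin idea before we spell it out by hand. The sketch below is my own code, separate from the notes; for simplicity it performs each addition in one emulated step rather than splitting reads and writes into two stages, and it counts the PRAM rounds consumed.

```python
import math

def emulate_wd_summation(A, p):
    """Emulate WD summation (Algorithm 2) on p processors, round-robin.

    Returns the sum and the number of PRAM rounds used; a WD round
    with m operations costs ceil(m / p) PRAM rounds.
    """
    n = len(A)
    logn = int(math.log2(n))
    assert 2 ** logn == n
    rounds_used = math.ceil(n / p)        # copy-in round (lines 1-2)
    B = [None] + list(A)                  # current level, 1-based
    for h in range(1, logn + 1):
        m = n // 2**h                     # operations in this WD round
        # p processors sweep the m additions in ceil(m/p) PRAM rounds.
        B = [None] + [B[2*i - 1] + B[2*i] for i in range(1, m + 1)]
        rounds_used += math.ceil(m / p)
    rounds_used += 1                      # output round (line 6)
    return B[1], rounds_used

total, rounds = emulate_wd_summation(list(range(1, 17)), p=4)
print(total, rounds)  # → 136 10
```

For n = 16 and p = 4 it reports 10 PRAM rounds, consistent with the O(n/p + log n) bound; with p = n it reduces to the log n + 2 rounds of Algorithm 2 itself.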
Note that, unlike the proof, we emulate the read instructions in the same time unit as the other instructions. The reader may better appreciate the WD mode and the WD-presentation sufficiency theorem in view of Algorithm 2': the theorem saves the need for the tedious effort of manually producing the PRAM mode.

ALGORITHM 2' (Summation on a p-processor PRAM)
1. for P_i, 1 ≤ i ≤ p pardo
2.   for j := 1 to ⌈n/p⌉ − 1 do
       B(0, i + (j−1)p) := A(i + (j−1)p)
3.   for i, 1 ≤ i ≤ n − (⌈n/p⌉ − 1)p
       B(0, i + (⌈n/p⌉ − 1)p) := A(i + (⌈n/p⌉ − 1)p)
     for i, n − (⌈n/p⌉ − 1)p < i ≤ p
       stay idle
4.   for h := 1 to log n
5.     for j := 1 to ⌈n/(2^h p)⌉ − 1 do (an instruction "j := 1 to 0 do" means: do nothing)
         B(h, i + (j−1)p) := B(h−1, 2(i + (j−1)p) − 1) + B(h−1, 2(i + (j−1)p))
6.     for i, 1 ≤ i ≤ n − (⌈n/(2^h p)⌉ − 1)p
         B(h, i + (⌈n/(2^h p)⌉ − 1)p) := B(h−1, 2(i + (⌈n/(2^h p)⌉ − 1)p) − 1) +
           B(h−1, 2(i + (⌈n/(2^h p)⌉ − 1)p))
       for i, n − (⌈n/(2^h p)⌉ − 1)p < i ≤ p
         stay idle
7.   for i = 1 output B(log n, 1); for i > 1 stay idle

Algorithm 2' simply emulates, in a round-robin fashion, each sequence of concurrent instructions of Algorithm 2. For instance, lines 1 and 2 of Algorithm 2 have a sequence of n instructions. These instructions are emulated in Algorithm 2' in ⌈n/p⌉ − 1 rounds by line 2, and in an additional round by line 3. The running time of Algorithm 2' is as follows. Line 2 takes ⌈n/p⌉ − 1 rounds, and line 3 takes one round, for a total of ⌈n/p⌉ rounds. The loop for h = k of lines 5-6 takes a total of ⌈n/(2^k p)⌉ rounds. Line 7 takes one more round. Summing up the rounds gives

⌈n/p⌉ + Σ_{h=1..log n} ⌈n/(2^h p)⌉ + 1 ≤ Σ_{h=0..log n} (n/(2^h p) + 1) + 1 = O(n/p + log n)

rounds for the PRAM algorithm. This is the same as implied by the WD-presentation sufficiency theorem, without going through the hassle of Algorithm 2'.

Measuring the performance of parallel algorithms. Consider a problem whose input length is n.
Given a parallel algorithm in the WD mode for the problem, we can measure its performance in terms of worst-case running time, denoted T(n), and total number of operations, denoted W(n) (where W stands for work). The following are alternative ways for measuring the performance of a parallel algorithm. We have actually shown that they are all asymptotically equivalent.

1. W(n) operations and T(n) time.
2. P(n) = W(n)/T(n) processors and T(n) time (on a PRAM).
3. W(n)/p time using any number of p ≤ W(n)/T(n) processors (on a PRAM).
4. W(n)/p + T(n) time using any number of p processors (on a PRAM).

Exercise 1: The above four ways for measuring performance of parallel algorithms form six pairs. Prove that the pairs are all asymptotically equivalent.

2.3. Goals for Designers of Parallel Algorithms

Our main goal in designing parallel algorithms is efficiency. We intentionally refrain from giving a strict formal definition for when an algorithm is to be considered efficient, but give several informal guidelines. Consider two parallel algorithms for the same problem. One performs a total of W_1(n) operations in T_1(n) time and the other performs a total of W_2(n) operations in T_2(n) time. Generally, we would like to consider the first algorithm more efficient than the second algorithm if W_1(n) = o(W_2(n)), regardless of their running times; and if W_1(n) and W_2(n) grow asymptotically the same, then the first algorithm is considered more efficient if T_1(n) = o(T_2(n)). A reason for not giving a categorical definition is the following example. Consider an extreme case where W_1(n) = O(n) and T_1(n) = O(n), and W_2(n) = O(n log n) and T_2(n) = O(log n). It is hard to decide which algorithm is more efficient, since the first algorithm needs less work, but the second is much faster. In this case, both algorithms are probably interesting.
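The trade-off between these two hypothetical algorithms can be seen numerically under measure 4 above (roughly W/p + T time on p processors). The sketch below is my illustration; the constants are arbitrary.

```python
import math

def time_on_p(W, T, p):
    """Measure 4 above: roughly W/p + T time on p processors."""
    return W / p + T

n = 2 ** 16
# First algorithm:  W1 = n,       T1 = n      (work-efficient, slow)
# Second algorithm: W2 = n log n, T2 = log n  (more work, much faster)
for p in (1, 4, 1024):
    t1 = time_on_p(n, n, p)
    t2 = time_on_p(n * math.log2(n), math.log2(n), p)
    winner = "first" if t1 < t2 else "second"
    print(p, t1, t2, winner)
```

With few processors the work-efficient algorithm wins; with many processors the faster, work-heavier one wins, which is exactly the situation described next.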
It is not hard to imagine two users, each interested in different input sizes and in different target machines (i.e., with a different number of processors), where for one user the first algorithm performs better, while for the second user the second algorithm performs better.

Consider a problem, and let T(n) be the best known worst-case time upper bound on a serial algorithm for an input of length n. Assume further that we can prove that this upper bound cannot be asymptotically improved. (Using complexity theory terms, T(n) is the serial time complexity of the problem.) Consider a parallel algorithm for the same problem that performs W(n) operations in T_par(n) time. The parallel algorithm is said to be work-optimal if W(n) grows asymptotically the same as T(n). A work-optimal parallel algorithm is work-time-optimal if its running time T_par(n) cannot be improved by another work-optimal algorithm.

The problem with the definitions above is that often the serial complexity of problems is not known. We see a need to coin a term that will capture the sense of accomplishment for such cases, as well. Assume that we do not know whether T(n) can be improved asymptotically, and consider a parallel algorithm for the same problem that performs W(n) work in T_par(n) time. The parallel algorithm is said to achieve linear speed-up if W(n) grows asymptotically the same as T(n). The problem with this definition is that it may not withstand the test of time: someday, an improved serial algorithm can change the best serial upper bound, and an algorithm that once achieved linear speed-up no longer does so.

Perspective. Earlier we used some analogies to serial computation in order to justify the choice of the Work-Depth (or PRAM) model of computation. Still, it is crucial to recognize the following difference between parallel and serial computation in trying to draw further analogies. Serial computing opened the era of computing.
To some extent the job was easy because there was no previous computing paradigm that had to be challenged. The situation with parallel computing is different. Serial computing is an enormous success. The problems with staying within this paradigm are as follows: (i) parallel computing can offer better performance, and (ii) whether it is already correct to conclude that the serial paradigm has reached a dead-end when it comes to building machines that are much stronger than currently available, or whether it is too early for such a conclusion, physical and technological limitations suggest that it is just a matter of time till this happens.

2.4. Some Final Comments

2.4.1. Default assumption regarding shared memory access resolution

These notes will focus on the Arbitrary CRCW PRAM and its respective WD model. A partial explanation for this is provided by the following comments regarding the relative power of PRAM models: (1) The Priority CRCW PRAM is most powerful; then come the Arbitrary CRCW PRAM, the Common CRCW PRAM, the CREW PRAM, and finally the EREW PRAM. (2) Some formal simulation results of the most powerful Priority CRCW PRAM on the least powerful EREW PRAM show that the models do not differ substantially. (3) Since all these PRAM models are only virtual models of real machines, a more practical perspective is to compare their emulations on a possible target machine. It turns out that the models differ even less from this perspective. In view of the above points, it remains to justify the choice of the Arbitrary CRCW PRAM over the stronger Priority CRCW PRAM. The reason is that its implementation considerations favor relaxing the order in which concurrent operations can be performed. The home page for the UMD PRAM-On-Chip project provides more information.

2.4.2. NC: A Related, Yet Different, Efficiency Goal

In stating the goals for parallel computing, one has to remember that its primary objective is to challenge the supremacy of serial computing.
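The difference between the concurrent-write policies mentioned above can be made concrete with a toy simulation. The sketch below (our illustration, not from the notes; the function name and interface are hypothetical) resolves a set of simultaneous writes to a single shared cell under the three CRCW policies.

```python
import random

def crcw_write(requests, policy="arbitrary", seed=None):
    """Toy model of concurrent-write resolution on one shared memory cell.

    `requests` is a list of (processor_id, value) pairs issued in the same
    time unit. Policy names are the standard ones from the PRAM literature:
      - 'priority'  : the lowest-numbered processor wins;
      - 'arbitrary' : some single request wins (modeled here with a RNG);
      - 'common'    : all writers must attempt to write the same value.
    """
    if not requests:
        return None
    if policy == "priority":
        return min(requests)[1]          # lowest processor id wins
    if policy == "arbitrary":
        return random.Random(seed).choice(requests)[1]
    if policy == "common":
        values = {v for _, v in requests}
        assert len(values) == 1, "Common CRCW requires identical values"
        return values.pop()
    raise ValueError(policy)
```

Note that an Arbitrary CRCW algorithm must be correct no matter which request wins, which is precisely the relaxation that favors implementation, as argued above.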
For this reason, our definitions of either linear speed-up or optimal speed-up are very sensitive to the relationship between a parallel algorithm and its serial alternative. The theoretical parallel computing literature has been motivated by other goals as well. A complexity theory approach, which opined that "a problem is amenable to parallel computing if it can be solved in poly-logarithmic time using a polynomial number of processors", has received a lot of attention in the literature. Such problems are widely known as the class NC, or Nick's Class (in honor of Nick Pippenger). This complexity theory approach is of fundamental importance: for instance, if one shows that a problem is "log-space complete for P", then the problem is unlikely to be in the class NC - a very useful insight. Therefore, a next step, beyond these notes, for a reader who wants to know more about parallel algorithms, will have to include learning the basic concepts of this theory.

2.4.3. On selection of material for these notes

The goal of these notes is to familiarize the reader with the elementary routines, techniques and paradigms in the current knowledge base on parallel algorithms. In planning these notes, we were guided by the following two principles. The first principle is actually a compromise between two conflicting approaches: (1) "Breadth-first search": Present the theory as a "bag of tricks", describing one trick at a time. Assuming that the curriculum can allocate only a limited time to the topics covered in these notes, this will maximize the number of tricks learned, at the expense of not exposing the reader to more involved parallel algorithms. However, since designing parallel algorithms may be a rather involved task, an important aspect may be missed by this approach. This brings us to the second approach. (2) "Depth-first search": Present in full a few relatively involved parallel algorithms. The second principle applies a "prefix rule of thumb".
At each point, we tried to evaluate what would be the next most important thing that the reader should know, assuming that he/she has time to learn only one more section. These two principles may explain to the more knowledgeable reader our selection of material.

2.4.4. Level of material

I taught this material on several occasions to beginning graduate students; recently, undergraduates were allowed to take the course as an elective and constituted a considerable fraction of the class. The first ten sections (i.e., everything excluding the section on graph connectivity) were taught in 21 lecture hours, where a lecture hour is defined as 50 minutes net time of teaching. My impression is that this material is suitable for undergraduate computer science students in their junior or senior year, and can cover about a half of an undergraduate course. This can be combined into a full course with another topic for the remaining part of the course. Another option is to teach next the connectivity section, which took the author 6-7 more hours. Together with some additional material (e.g., introduction to parallel computing, or even introduction to NP-Completeness) this can make a junior or senior level undergraduate course or a beginning graduate course.

3. Technique: Balanced Binary Trees; Problem: Prefix-Sums

Parallel prefix-sums might be the most heavily used routine in parallel algorithms. We already saw how balanced binary trees are used in a parallel summation algorithm. This section demonstrates a slightly more involved use of balanced binary trees.

The prefix-sums problem
Input: An array A of n elements drawn from some domain. Assume that a binary operation, denoted ∗, is defined on the set. Assume also that ∗ is associative; namely, for any elements a, b, c in the set, a∗(b∗c) = (a∗b)∗c. (The operation ∗ is pronounced "star" and often referred to as "sum" because addition, relative to real numbers, is a common example for ∗.)
The n prefix-sums of array A are defined as ∗_{j=1}^{i} A(j), for i, 1 ≤ i ≤ n, or:
A(1)
A(1)∗A(2)
...
A(1)∗A(2)∗...∗A(i)
...
A(1)∗A(2)∗...∗A(n)

The prefix-sums algorithm below assumes that n = 2^k for some integer k, and that the arrays B(h,i) and C(h,i), for 0 ≤ h ≤ log n and 1 ≤ i ≤ n/2^h, are given for storing intermediate steps of the computation. The algorithm works in two stages, each taking logarithmic time. The first stage (lines 1-3 below) is similar to the summation algorithm of the previous section, where the computation advances from the leaves of a balanced binary tree to the root and computes, for each internal node (h,i) of the tree, the sum of its descendant leaves into B(h,i). The second stage (lines 4-7 below) advances in the opposite direction, from the root to the leaves. For each internal node (h,i) the prefix-sum of its rightmost leaf is entered into C(h,i). Namely, C(h,i) is the prefix-sum A(1)∗A(2)∗...∗A(α), where (0,α) is the rightmost descendant leaf of (h,i). In particular, C(0,1), C(0,2), ..., C(0,n) have the desired prefix-sums. Figure 6 describes a balanced binary tree, and Figure 7 describes the B and C arrays for the drilling example below.

Figure 6: Balanced binary tree (nodes labeled (h,i), with root (3,1) and leaves (0,1),...,(0,8))

Figure 7: The prefix-sums algorithm (the B(h,i) and C(h,i) values for the drilling example below)

ALGORITHM 1 (Prefix-sums)
1. for i, 1 ≤ i ≤ n pardo
 - B(0,i) := A(i)
2. for h := 1 to log n
3. for i, 1 ≤ i ≤ n/2^h pardo
 - B(h,i) := B(h−1,2i−1) ∗ B(h−1,2i)
4. for h := log n to 0
5. for i even, 1 ≤ i ≤ n/2^h pardo
 - C(h,i) := C(h+1,i/2)
6. for i = 1 pardo
 - C(h,1) := B(h,1)
7. for i odd, 3 ≤ i ≤ n/2^h pardo
 - C(h,i) := C(h+1,(i−1)/2) ∗ B(h,i)
8. for i, 1 ≤ i ≤ n pardo
 - Output C(0,i)

A drilling example.
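A serial transcription of Algorithm 1 may help in following the two stages. The Python sketch below (our illustration; the `pardo` loops are simulated by ordinary loops, and 1-based indexing is emulated with an unused entry at position 0) fills the B and C arrays exactly as lines 1-8 prescribe.

```python
def prefix_sums(A, op=lambda x, y: x + y):
    """Balanced-binary-tree prefix-sums in the spirit of Algorithm 1.

    B[h][i] holds the 'sum' of the leaves under node (h, i);
    C[h][i] holds the prefix-sum ending at that node's rightmost leaf.
    Assumes len(A) = 2^k and an associative op (the * of the text).
    """
    n = len(A)
    k = n.bit_length() - 1
    assert n == 1 << k, "algorithm assumes n is a power of two"
    # Level h has n / 2^h entries; index 0 is unused (1-based indexing).
    B = [[None] * (n // (1 << h) + 1) for h in range(k + 1)]
    C = [[None] * (n // (1 << h) + 1) for h in range(k + 1)]
    for i in range(1, n + 1):                      # line 1
        B[0][i] = A[i - 1]
    for h in range(1, k + 1):                      # lines 2-3: leaves to root
        for i in range(1, n // (1 << h) + 1):
            B[h][i] = op(B[h - 1][2 * i - 1], B[h - 1][2 * i])
    for h in range(k, -1, -1):                     # lines 4-7: root to leaves
        for i in range(1, n // (1 << h) + 1):
            if i == 1:                             # line 6
                C[h][1] = B[h][1]
            elif i % 2 == 0:                       # line 5
                C[h][i] = C[h + 1][i // 2]
            else:                                  # line 7
                C[h][i] = op(C[h + 1][(i - 1) // 2], B[h][i])
    return C[0][1:]                                # line 8
```

Running it on the drilling example A = 1,1,2,3,5,2,1,2 with + as ∗ returns 1,2,4,7,12,14,15,17, matching Figure 7; since only associativity of op is used, it also works with, e.g., max as ∗.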
Let us run Algorithm 1 on the following instance; see also Figure 7. n = 8, A = 1,1,2,3,5,2,1,2 and ∗ is the + operation. Line 1 implies B(0;1...8) = 1,1,2,3,5,2,1,2. Lines 2-3 imply: B(1;1,2,3,4) = 2,5,7,3 for h = 1, B(2;1,2) = 7,10 for h = 2, and B(3,1) = 17 for h = 3. Lines 4-7 imply C(3,1) = 17 for h = 3, C(2;1,2) = 7,17 for h = 2, C(1;1,2,3,4) = 2,7,14,17 for h = 1, and finally C(0;1...8) = 1,2,4,7,12,14,15,17 for h = 0.

Complexity. The operations of the prefix-sums algorithm can be "charged" to nodes of the balanced binary tree, so that no node will be charged by more than a constant number of operations. These charges can be done as follows. For each assignment into either B(h,i) or C(h,i), the operations that led to it are charged to node (h,i). Since the number of nodes of the tree is 2n−1, W(n), the total number of operations of the algorithm, is O(n). The time is O(log n) since it is dominated by the loops of lines 2 and 4, each requiring log n rounds.

Theorem 3.1: The prefix-sums algorithm runs in O(n) work and O(log n) time.

3.1. Application - the Compaction Problem

We already mentioned that the prefix-sums routine is heavily used in parallel algorithms. One trivial application, which is needed later in these notes, follows:

Input. An array A = A(1),...,A(n) of (any kind of) elements and another array B = B(1),...,B(n) of bits (each valued zero or one).

The compaction problem is to find a one-to-one mapping from the subset of elements A(i), for which B(i) = 1, 1 ≤ i ≤ n, to the sequence (1,2,...,s), where s is the (a priori unknown) number of ones in B. Below we assume that the mapping should be order preserving. That is, if A(i) is mapped to k and A(j) is mapped to k+1, then i < j. However, quite a few applications of compaction do not need to be order preserving. For computing this mapping, simply compute all prefix-sums with respect to array B. Consider an entry B(i) = 1. If the prefix-sum of i is j, then map A(i) into j. See also Figure 8.
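The compaction routine can be sketched in a few lines. In the serial simulation below (our illustration; the name `compact` is ours), the prefix-sums of the bit array B are computed serially for brevity; in the PRAM setting that step would use the balanced-binary-tree routine above, and it dominates the parallel cost.

```python
def compact(A, B):
    """Order-preserving compaction via prefix-sums (Section 3.1 sketch).

    Maps each A[i] with B[i] == 1 to output position j, where j is the
    prefix-sum of B up to and including position i.
    """
    # Prefix-sums of the bit array B (serial stand-in for the parallel routine).
    ps, total = [], 0
    for b in B:
        total += b
        ps.append(total)
    out = [None] * total               # s = number of ones in B
    for i, (a, b) in enumerate(zip(A, B)):
        if b == 1:
            out[ps[i] - 1] = a         # one-to-one, order-preserving mapping
    return out
```

For example, with A = a,b,c,d and B = 0,1,0,1 the prefix-sums of B are 0,1,1,2, so b is mapped to position 1 and d to position 2.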
Theorem 3.2: The compaction algorithm runs in O(n) work and O(log n) time.

Exercise 2: Let A be a memory address in the shared memory of a PRAM. Suppose that all p processors of the PRAM need to "know" the value stored in A. Give a fast EREW algorithm for broadcasting the value of A to all p processors. How much time will this take?

Exercise 3: The minimum problem is defined as follows. Input: An array A of n elements drawn from some totally ordered set. The minimum problem is to find the smallest
