Lecture notes in Algorithm Design and Analysis

ALGORITHMS: DESIGN TECHNIQUES AND ANALYSIS
M. H. Alsuwaiyel
Information & Computer Science Department, KFUPM
July, 1999

Chapter 1
Basic Concepts in Algorithmic Analysis

1.1 Introduction

The most general intuitive idea of an algorithm* is a procedure that consists of a finite set of instructions which, given an input from some set of possible inputs, enables us to obtain an output if such an output exists, or else obtain nothing at all if there is no output for that particular input, through a systematic execution of the instructions. The set of possible inputs consists of all inputs to which the algorithm gives an output. If there is an output for a particular input, then we say that the algorithm can be applied to this input and process it to give the corresponding output. We require that an algorithm halt on every input, which implies that each instruction requires a finite amount of time, and each input has a finite length. We also require that the output of a legal input be unique, that is, the algorithm is deterministic in the sense that the same set of instructions is executed when the algorithm is initiated on a particular input more than once. In Chapter 14, we will relax this condition when we study randomized algorithms.

The design and analysis of algorithms are of fundamental importance in the field of computer science. As Donald E. Knuth stated, "Computer science is the study of algorithms". This should not be surprising, as every area in computer science depends heavily on the design of efficient algorithms. As simple examples, compilers and operating systems are nothing but direct implementations of special purpose algorithms.

* According to the American Heritage Dictionary, the word "algorithm" is derived from the name of Muhammad ibn Mūsā al-Khwārizmī (780?-850?), a Muslim mathematician whose works introduced Arabic numerals and algebraic concepts to Western mathematics. The word "algebra" stems from the title of his book Kitab al jabr wa'l-muqābala.

The objective of this chapter is twofold. First, it introduces some simple algorithms, particularly those related to searching and sorting. Second, it covers the basic concepts used in the design and analysis of algorithms. We will cover in depth the notion of the "running time" of an algorithm, as it is of fundamental importance to the design of efficient algorithms. After all, time is the most precious measure of an algorithm's efficiency. We will also discuss the other important resource measure, namely the space required by an algorithm. Although simple, the algorithms presented will serve as the basis for many of the examples illustrating several algorithmic concepts. It is instructive to start with simple and useful algorithms that are used as building blocks in more complex algorithms.

1.2 Historical Background

The question of whether a problem can be solved using an effective procedure, which is equivalent to the contemporary notion of the algorithm, received a lot of attention in the first part of the twentieth century, and especially in the 1930's. The focus in that period was on classifying problems as being solvable using an effective procedure or not. For this purpose, the need arose for a model of computation by the help of which a problem can be classified as solvable if it is possible to construct an algorithm to solve that problem using that model.
Some of these models are the recursive functions of Gödel, the λ-calculus of Church, the Post machines of Post, and the Turing machines of Turing. The RAM model of computation was introduced as a theoretical counterpart of existing computing machines. By the Church Thesis, all these models have the same power, in the sense that if a problem is solvable on one of them, then it is solvable on all others.

Surprisingly, it turns out that "almost all" problems are unsolvable. This can be justified informally as follows. Since each algorithm can be thought of as a function whose domain is the set of nonnegative integers and whose range is the set of real numbers, the set of functions to be computed is uncountable. Since any algorithm, or more specifically a program, can be encoded as a binary string, which corresponds to a unique positive integer, the number of functions that can be computed is countable. So, informally, the number of solvable problems is equinumerous with the set of integers (which is countable), while the number of unsolvable problems is equinumerous with the set of real numbers (which is uncountable). As a simple example, no algorithm can be constructed to decide whether seven consecutive 1's occur in the decimal expansion of π. This follows from the definition of an algorithm, which stipulates that the amount of time an algorithm is allowed to run must be finite. Another example is the problem of deciding whether a given equation involving a polynomial with n variables x₁, x₂, ..., xₙ has integer solutions. This problem is unsolvable, no matter how powerful the computing machine used is. The field that is concerned with the decidability and solvability of problems is referred to as computability theory or the theory of computation, although some computer scientists advocate the inclusion of the current field of algorithms as part of this discipline.

After the advent of digital computers, the need arose for investigating those solvable problems. In the beginning, one was content with a simple program that solved a particular problem without worrying about the amount of resources, in particular time, that this program required. Then the need for efficient programs that use as few resources as possible evolved, as a result of the limited resources available and the need to develop complex algorithms. This led to the evolution of a new area in computing, namely computational complexity. In this area, a problem that is classified as solvable is studied in terms of its efficiency, that is, the time and space needed to solve that problem. Later on, other resources were introduced, e.g. communication cost and the number of processors if the problem is analyzed using a parallel model of computation.

Unfortunately, some of the conclusions of this study turned out to be negative: there are many natural problems that are practically unsolvable due to the need for huge amounts of resources, and in particular time. On the other hand, efficient algorithms have been devised to solve many problems. Not only that; it was also proven that those algorithms are optimal in the sense that if any new algorithm to solve the same problem is discovered, then the gain in terms of efficiency is virtually minimal.
For example, the problem of sorting a set of elements has been studied extensively; as a result, several efficient algorithms to solve this problem have been devised, and it has been proven that these algorithms are optimal in the sense that no substantially better algorithm can ever be devised in the future.

1.3 Binary Search

Henceforth, in the context of searching and sorting problems, we will assume that the elements are drawn from a linearly ordered set, for example the set of integers. This will also be the case for similar problems, like finding the median, the kth smallest element, and so forth. Let A[1..n] be a sequence of n elements. Consider the problem of determining whether a given element x is in A. This problem can be rephrased as follows. Find an index j, 1 ≤ j ≤ n, such that x = A[j] if x is in A, and j = 0 otherwise. A straightforward approach is to scan the entries in A and compare each entry with x. If after j comparisons, 1 ≤ j ≤ n, the search is successful, i.e., x = A[j], then j is returned; otherwise a value of 0 is returned, indicating an unsuccessful search. This method is referred to as sequential search. It is also called linear search, as the maximum number of element comparisons grows linearly with the size of the sequence. The algorithm is shown as Algorithm linearsearch.

Algorithm 1.1 linearsearch
Input: An array A[1..n] of n elements and an element x.
Output: j if x = A[j], 1 ≤ j ≤ n, and 0 otherwise.

1. j ← 1
2. while (j < n) and (x ≠ A[j])
3.     j ← j + 1
4. end while
5. if x = A[j] then return j else return 0
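For readers who prefer running code, here is a minimal Python sketch of sequential search. The function name `linear_search`, the 0-based indexing, and the return value of -1 for an unsuccessful search (instead of the pseudocode's 0, which would be ambiguous with 0-based indexes) are choices of this sketch, not part of the book's pseudocode.

```python
def linear_search(A, x):
    """Scan A from left to right, comparing each entry with x.

    Returns the (0-based) index of the first occurrence of x,
    or -1 if x is not present. The book's 1-based pseudocode
    returns 0 on an unsuccessful search instead.
    """
    for j in range(len(A)):
        if A[j] == x:
            return j
    return -1

if __name__ == "__main__":
    A = [1, 4, 5, 7, 8, 9, 10, 12, 15, 22, 23, 27, 32, 35]
    print(linear_search(A, 22))   # 9 (position 10 in 1-based indexing)
    print(linear_search(A, 100))  # -1: unsuccessful search
```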
Intuitively, scanning all entries of A is inevitable if no more information about the ordering of the elements in A is given. If we are also given that the elements in A are sorted, say in nondecreasing order, then there is a much more efficient algorithm. The following example illustrates this efficient search method.

Example 1.1  Consider searching the array

    A[1..14] = 1 4 5 7 8 9 10 12 15 22 23 27 32 35.

In this instance, we want to search for the element x = 22. First, we compare x with the middle element A[⌊(1 + 14)/2⌋] = A[7] = 10. Since 22 > A[7], and since it is known that A[i] ≤ A[i+1] for 1 ≤ i < 14, x cannot be in A[1..7], and therefore this portion of the array can be discarded. So, we are left with the subarray

    A[8..14] = 12 15 22 23 27 32 35.

Next, we compare x with the middle of the remaining elements, A[⌊(8 + 14)/2⌋] = A[11] = 23. Since 22 < A[11], and since A[i] ≤ A[i+1] for 11 ≤ i < 14, x cannot be in A[11..14], and therefore this portion of the array can also be discarded. Thus, the remaining portion of the array to be searched is now reduced to

    A[8..10] = 12 15 22.

Repeating this procedure, we discard A[8..9], which leaves only one entry in the array to be searched, that is A[10] = 22. Finally, we find that x = A[10], and the search is successfully completed.

In general, let A[low..high] be a nonempty array of elements sorted in nondecreasing order. Let A[mid] be the middle element, and suppose that x > A[mid]. We observe that if x is in A, then it must be one of the elements A[mid+1], A[mid+2], ..., A[high]. It follows that we only need to search for x in A[mid+1..high]. In other words, the entries in A[low..mid] are discarded in subsequent comparisons since, by assumption, A is sorted in nondecreasing order, which implies that x cannot be in this half of the array. Similarly, if x < A[mid], then we only need to search for x in A[low..mid−1]. This results in an efficient strategy which, because of its repetitive halving, is referred to as binary search. Algorithm binarysearch gives a more formal description of this method.

Algorithm 1.2 binarysearch
Input: An array A[1..n] of n elements sorted in nondecreasing order and an element x.
Output: j if x = A[j], 1 ≤ j ≤ n, and 0 otherwise.

1. low ← 1; high ← n; j ← 0
2. while (low ≤ high) and (j = 0)
3.     mid ← ⌊(low + high)/2⌋
4.     if x = A[mid] then j ← mid
5.     else if x < A[mid] then high ← mid − 1
6.     else low ← mid + 1
7. end while
8. return j
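Again as a sketch rather than the book's own code, here is a Python version of binary search that also counts three-way element comparisons, so that the bound of ⌊log n⌋ + 1 derived in the analysis below can be checked experimentally. The name `binary_search` and the returned `(index, comparisons)` pair are choices of this sketch.

```python
from math import floor, log2

def binary_search(A, x):
    """Binary search on a sorted array A (0-based indexing).

    Returns (index, comparisons): the index of x in A, or -1 if x is
    absent, together with the number of three-way element comparisons
    performed (one per loop iteration, the book's counting convention).
    """
    low, high = 0, len(A) - 1
    comparisons = 0
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1            # one three-way comparison
        if A[mid] == x:
            return mid, comparisons
        elif x < A[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return -1, comparisons

if __name__ == "__main__":
    A = [1, 4, 5, 7, 8, 9, 10, 12, 15, 22, 23, 27, 32, 35]
    # Searching for an element >= the maximum forces the worst case.
    _, worst = binary_search(A, 100)
    print(worst, floor(log2(len(A))) + 1)  # both print 4
```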
1.3.1 Analysis of the binary search algorithm

Henceforth, we will assume that each three-way comparison (if-then-else) counts as one comparison. Obviously, the minimum number of comparisons is 1, and it is achievable when the element being searched for, x, is in the middle position of the array. To find the maximum number of comparisons, let us first consider applying binary search on the array

    2 3 5 8.

If we search for 2 or 5, we need two comparisons, whereas searching for 8 costs three comparisons. Now, in the case of unsuccessful search, it is easy to see that searching for elements such as 1, 4, 7 or 9 takes 2, 2, 3 and 3 comparisons, respectively. It is not hard to see that, in general, the algorithm always performs the maximum number of comparisons whenever x is greater than or equal to the maximum element in the array. In this example, searching for any element greater than or equal to 8 costs three comparisons. Thus, to find the maximum number of comparisons, we may assume without loss of generality that x is greater than or equal to A[n].

Example 1.2  Suppose that we want to search for x = 35 or x = 100 in

    A[1..14] = 1 4 5 7 8 9 10 12 15 22 23 27 32 35.

In each iteration of the algorithm, the bottom half of the array is discarded until there is only one element left:

    12 15 22 23 27 32 35  →  27 32 35  →  35.

Therefore, to compute the maximum number of element comparisons performed by Algorithm binarysearch, we may assume that x is greater than or equal to all elements in the array to be searched. To compute the number of remaining elements in A[1..n] in the second iteration, there are two cases to consider according to whether n is even or odd. If n is even, then the number of entries in A[mid+1..n] is n/2; otherwise it is (n − 1)/2. Thus, in both cases, the number of elements in A[mid+1..n] is exactly ⌊n/2⌋. Similarly, the number of remaining elements to be searched in the third iteration is ⌊⌊n/2⌋/2⌋ = ⌊n/4⌋ (see Eq. 2.3 on page 71).

In general, in the jth pass through the while loop, the number of remaining elements is ⌊n/2^{j−1}⌋. The iteration is continued until either x is found or the size of the subsequence being searched reaches 1, whichever occurs first. As a result, the maximum number of iterations needed to search for x is that value of j satisfying the condition

    ⌊n/2^{j−1}⌋ = 1.

By the definition of the floor function, this happens exactly when

    1 ≤ n/2^{j−1} < 2,   or   2^{j−1} ≤ n < 2^j,   or†   j − 1 ≤ log n < j.

Since j is an integer, we conclude that

    j = ⌊log n⌋ + 1.

† Unless otherwise stated, all logarithms in this book are to the base 2. The natural logarithm of x will be denoted by ln x.

Alternatively, the performance of the binary search algorithm can be described in terms of a decision tree, which is a binary tree that exhibits the behavior of the algorithm. Figure 1.1 shows the decision tree corresponding to the array given in Example 1.1. The darkened nodes are those compared against x in Example 1.1.

[Figure 1.1: A decision tree that shows the behavior of binary search; the root is 10, the middle element of the array in Example 1.1.]

Note that the decision tree is a function of the number of elements in the array only. Figure 1.2 shows two decision trees corresponding to two arrays of sizes 10 and 14, respectively. As implied by the two figures, the maximum number of comparisons in both trees is 4. In general, the maximum number of comparisons is one plus the height of the corresponding decision tree (see Sec. 3.5 for the definition of height). Since the height of such a tree is ⌊log n⌋, we conclude that the maximum number of comparisons is ⌊log n⌋ + 1. We have in effect given two proofs of the following theorem:

Theorem 1.1  The number of comparisons performed by Algorithm binarysearch on a sorted array of size n is at most ⌊log n⌋ + 1.

[Figure 1.2: Two decision trees corresponding to two arrays of sizes 10 and 14.]

1.4 Merging Two Sorted Lists

Suppose we have an array A[1..m] and three indices p, q and r, with 1 ≤ p ≤ q < r ≤ m, such that both the subarrays A[p..q] and A[q+1..r] are individually sorted in nondecreasing order. We want to rearrange the elements in A so that the elements in the subarray A[p..r] are sorted in nondecreasing order. This process is referred to as merging A[p..q] with A[q+1..r].

An algorithm to merge these two subarrays works as follows. We maintain two pointers s and t that initially point to A[p] and A[q+1], respectively. We prepare an empty array B[p..r] which will be used as temporary storage. Each time, we compare the elements A[s] and A[t] and append the smaller of the two to the auxiliary array B; if they are equal, we choose to append A[s]. Next, we update the pointers: if A[s] ≤ A[t], then we increment s, otherwise we increment t. This process ends when s = q + 1 or t = r + 1. In the first case, we append the remaining elements A[t..r] to B, and in the second case, we append A[s..q] to B. Finally, the array B[p..r] is copied back to A[p..r]. This procedure is given in Algorithm merge.

Algorithm 1.3 merge
Input: An array A[1..m] of elements and three indices p, q and r, with 1 ≤ p ≤ q < r ≤ m, such that both the subarrays A[p..q] and A[q+1..r] are sorted individually in nondecreasing order.
Output: A[p..r] contains the result of merging the two subarrays A[p..q] and A[q+1..r].

1.  comment: B[p..r] is an auxiliary array.
2.  s ← p; t ← q + 1; k ← p
3.  while s ≤ q and t ≤ r
4.      if A[s] ≤ A[t] then
5.          B[k] ← A[s]
6.          s ← s + 1
7.      else
8.          B[k] ← A[t]
9.          t ← t + 1
10.     end if
11.     k ← k + 1
12. end while
13. if s = q + 1 then B[k..r] ← A[t..r]
14. else B[k..r] ← A[s..q]
15. end if
16. A[p..r] ← B[p..r]
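The following Python sketch mirrors Algorithm merge using 0-based, inclusive indices, and counts element comparisons so that Observation 1.1 below can be checked on small inputs. The function name and the in-place interface (`A`, `p`, `q`, `r`) are choices of this sketch.

```python
def merge(A, p, q, r):
    """Merge the sorted subarrays A[p..q] and A[q+1..r] (0-based,
    inclusive indices) in place, returning the number of element
    comparisons performed."""
    B = []                  # auxiliary storage, like B[p..r] in the book
    s, t = p, q + 1
    comparisons = 0
    while s <= q and t <= r:
        comparisons += 1
        if A[s] <= A[t]:    # ties take the element of the left subarray
            B.append(A[s]); s += 1
        else:
            B.append(A[t]); t += 1
    # One subarray is exhausted; append the rest of the other.
    B.extend(A[s:q + 1] if t == r + 1 else A[t:r + 1])
    A[p:r + 1] = B          # copy back, as in step 16 of the pseudocode
    return comparisons

if __name__ == "__main__":
    A = [2, 3, 6, 7, 11, 13, 45, 57]
    print(merge(A, 0, 2, 7), A)   # 3 comparisons: the best case
    A = [2, 3, 66, 7, 11, 13, 45, 57]
    print(merge(A, 0, 2, 7), A)   # 7 comparisons: the worst case, n - 1
```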
Let n denote the size of the array A[p..r] in the input to Algorithm merge, i.e., n = r − p + 1. We want to find the number of comparisons that are needed to rearrange the entries of A[p..r]. It should be emphasized that from now on, when we talk about the number of comparisons performed by an algorithm, we mean element comparisons, i.e., the comparisons involving objects in the input data. Thus, all other comparisons, e.g. those needed for the implementation of the while loop, will be excluded.

Let the two subarrays be of sizes n₁ and n₂, where n₁ + n₂ = n. The least number of comparisons happens if each entry in the smaller subarray is less than all entries in the larger subarray. For example, to merge the two subarrays

    2 3 6   and   7 11 13 45 57,

the algorithm performs only three comparisons. On the other hand, the number of comparisons may be as high as n − 1. For example, to merge the two subarrays

    2 3 66   and   7 11 13 45 57,

seven comparisons are needed. It follows that the number of comparisons done by Algorithm merge is at least n₁ and at most n − 1.

Observation 1.1  The number of element comparisons performed by Algorithm merge to merge two nonempty arrays of sizes n₁ and n₂, respectively, where n₁ ≤ n₂, into one sorted array of size n = n₁ + n₂ is between n₁ and n − 1. In particular, if the two array sizes are ⌊n/2⌋ and ⌈n/2⌉, the number of comparisons needed is between ⌊n/2⌋ and n − 1.

How about the number of element assignments (again, here we mean assignments involving input data)? At first glance, one may start by looking at the while loop, the if statements, etc. in order to find out how the algorithm works and then compute the number of element assignments. However, it is easy to see that each entry of array B is assigned exactly once. Similarly, each entry of array A is assigned exactly once, when copying B back into A. As a result, we have the following observation:

Observation 1.2  The number of element assignments performed by Algorithm merge to merge two arrays into one sorted array of size n is exactly 2n.

1.5 Selection Sort

Let A[1..n] be an array of n elements. A simple and straightforward algorithm to sort the entries in A works as follows. First, we find the minimum element and store it in A[1]. Next, we find the minimum of the remaining n − 1 elements and store it in A[2]. We continue this way until the second largest element is stored in A[n−1]. This method is described in Algorithm selectionsort.

Algorithm 1.4 selectionsort
Input: An array A[1..n] of n elements.
Output: A[1..n] sorted in nondecreasing order.

1. for i ← 1 to n − 1
2.     k ← i
3.     for j ← i + 1 to n    {Find the ith smallest element.}
4.         if A[j] < A[k] then k ← j
5.     end for
6.     if k ≠ i then interchange A[i] and A[k]
7. end for

It is easy to see that the number of element comparisons performed by the algorithm is exactly

$$\sum_{i=1}^{n-1} (n - i) = (n-1) + (n-2) + \cdots + 1 = \sum_{i=1}^{n-1} i = \frac{n(n-1)}{2}.$$

It is also easy to see that the number of element interchanges is between 0 and n − 1. Since each interchange requires three element assignments, the number of element assignments is between 0 and 3(n − 1).

Observation 1.3  The number of element comparisons performed by Algorithm selectionsort is n(n − 1)/2. The number of element assignments is between 0 and 3(n − 1).
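A minimal Python sketch of selection sort follows, with counters added so that Observation 1.3 can be checked directly; the counters and names are additions of this sketch.

```python
def selection_sort(A):
    """Sort A in place in nondecreasing order.

    Returns (comparisons, assignments): element comparisons are always
    n(n-1)/2, and element assignments are 3 per interchange, as in
    Observation 1.3.
    """
    n = len(A)
    comparisons = assignments = 0
    for i in range(n - 1):
        k = i
        for j in range(i + 1, n):      # find the (i+1)th smallest element
            comparisons += 1
            if A[j] < A[k]:
                k = j
        if k != i:
            A[i], A[k] = A[k], A[i]    # one interchange ...
            assignments += 3           # ... counted as 3 element assignments
    return comparisons, assignments

if __name__ == "__main__":
    A = [9, 4, 5, 2, 1, 7, 4, 6]
    print(selection_sort(A), A)        # comparisons = 8*7/2 = 28
```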
1.6 Insertion Sort

As stated in Observation 1.3 above, the number of comparisons performed by Algorithm selectionsort is exactly n(n − 1)/2 regardless of how the elements of the input array are ordered. Another sorting method, in which the number of comparisons depends on the order of the input elements, is the so-called insertionsort. This algorithm, which is shown below, works as follows. We begin with the subarray of size 1, A[1], which is already sorted. Next, A[2] is inserted before or after A[1], depending on whether it is smaller than A[1] or not. Continuing this way, in the ith iteration, A[i] is inserted in its proper position in the sorted subarray A[1..i−1]. This is done by scanning the elements from index i − 1 down to 1, each time comparing A[i] with the element at the current position. In each iteration of the scan, an element is shifted one position up to a higher index. This process of scanning, performing the comparison and shifting continues until an element less than or equal to A[i] is found, or until the sorted sequence so far is exhausted. At this point, A[i] is inserted in its proper position, and the process of inserting element A[i] in its proper place is complete.

Algorithm 1.5 insertionsort
Input: An array A[1..n] of n elements.
Output: A[1..n] sorted in nondecreasing order.

1. for i ← 2 to n
2.     x ← A[i]
3.     j ← i − 1
4.     while (j > 0) and (A[j] > x)
5.         A[j+1] ← A[j]
6.         j ← j − 1
7.     end while
8.     A[j+1] ← x
9. end for

Unlike Algorithm selectionsort, the number of element comparisons done by Algorithm insertionsort depends on the order of the input elements. It is easy to see that the number of element comparisons is minimum when the array is already sorted in nondecreasing order. In this case, the number of element comparisons is exactly n − 1, as each element A[i], 2 ≤ i ≤ n, is compared with A[i−1] only. On the other hand, the maximum number of element comparisons occurs if the array is already sorted in decreasing order and all elements are distinct. In this case, the number of element comparisons is

$$\sum_{i=2}^{n} (i - 1) = \sum_{i=1}^{n-1} i = \frac{n(n-1)}{2},$$

as each element A[i], 2 ≤ i ≤ n, is compared with each entry in the subarray A[1..i−1]. This number coincides with that of Algorithm selectionsort.

Observation 1.4  The number of element comparisons performed by Algorithm insertionsort is between n − 1 and n(n − 1)/2. The number of element assignments is equal to the number of element comparisons plus n − 1.

Notice the correlation between element comparisons and assignments in Algorithm insertionsort. This is in contrast to Algorithm selectionsort, in which the number of element comparisons is independent of the data arrangement.
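As with the previous sketches, here is a hedged Python version of insertion sort that counts element comparisons and element assignments, so the best-case figures behind Observation 1.4 can be observed directly; the counting instrumentation is an addition of this sketch.

```python
def insertion_sort(A):
    """Sort A in place in nondecreasing order.

    Returns (comparisons, assignments), counting only element
    comparisons and element assignments, as in the book.
    """
    comparisons = assignments = 0
    for i in range(1, len(A)):
        x = A[i]
        assignments += 1              # x <- A[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # compare A[j] with x
            if A[j] <= x:
                break
            A[j + 1] = A[j]           # shift one position up
            assignments += 1
            j -= 1
        A[j + 1] = x
        assignments += 1
    return comparisons, assignments

if __name__ == "__main__":
    A = list(range(8))                # already sorted: the best case
    print(insertion_sort(A))          # (7, 14): n - 1 comparisons,
                                      # assignments = comparisons + n - 1
    A = [9, 4, 5, 2, 1, 7, 4, 6]
    print(insertion_sort(A), A)
```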
1.7 Bottom-up Merge Sorting

The two sorting methods discussed thus far are both inefficient in the sense that the number of operations required to sort n elements is proportional to n². In this section, we describe an efficient sorting algorithm that performs much fewer element comparisons. Suppose we have the following array of eight numbers that we wish to sort:

    9 4 5 2 1 7 4 6.

Consider the following method for sorting these numbers (see Fig. 1.3). First, we divide the input elements into four pairs and merge each pair into one 2-element sorted sequence. Next, we merge each two consecutive 2-element sequences into one sorted sequence of size four. Finally, we merge the two resulting sequences into the final sorted sequence, as shown in the figure.

[Figure 1.3: Example of bottom-up merge sorting; the array 9 4 5 2 1 7 4 6 is merged pairwise in three levels into 1 2 4 4 5 6 7 9.]

In general, let A be an array of n elements that is to be sorted. We first merge ⌊n/2⌋ consecutive pairs of elements to yield ⌊n/2⌋ sorted sequences of size 2. If there is one remaining element, then it is passed to the next iteration. Next, we merge ⌊n/4⌋ pairs of consecutive 2-element sequences to yield ⌊n/4⌋ sorted sequences of size 4. If there are one or two remaining elements, then they are passed to the next iteration. If there are three elements left, then two (sorted) elements are merged with one element to form a 3-element sorted sequence. Continuing this way, in the jth iteration, we merge ⌊n/2^j⌋ pairs of sorted sequences of size 2^{j−1} to yield ⌊n/2^j⌋ sorted sequences of size 2^j. If there are k remaining elements, where 1 ≤ k ≤ 2^{j−1}, then they are passed to the next iteration. If there are k remaining elements, where 2^{j−1} < k < 2^j, then these are merged to form a sorted sequence of size k.

[Figure 1.4: Example of bottom-up merge sorting when n is not a power of 2; an 11-element array is sorted in four levels of merging.]

Algorithm bottomupsort implements this idea. The algorithm maintains the variable s, which is the size of the sequences to be merged. Initially, s is set to 1, and it is doubled in each iteration of the outer while loop. i + 1, i + s and i + t define the boundaries of the two sequences to be merged. Step 8 is needed in the case when n is not a multiple of t. In this case, if the number of remaining elements, which is n − i, is greater than s, then one more merge is applied on a sequence of size s and the remaining elements.

Algorithm 1.6 bottomupsort
Input: An array A[1..n] of n elements.
Output: A[1..n] sorted in nondecreasing order.

1. t ← 1
2. while t < n
3.     s ← t; t ← 2s; i ← 0
4.     while i + t ≤ n
5.         merge(A, i + 1, i + s, i + t)
6.         i ← i + t
7.     end while
8.     if i + s < n then merge(A, i + 1, i + s, n)
9. end while

Example 1.3  Figure 1.4 shows an example of the working of the algorithm when n is not a power of 2. The behavior of the algorithm can be described as follows.

(1) In the first iteration, s = 1 and t = 2. Five pairs of 1-element sequences are merged to produce five 2-element sorted sequences. After the end of the inner while loop, i + s = 10 + 1 ≮ n = 11, and hence no more merging takes place.

(2) In the second iteration, s = 2 and t = 4. Two pairs of 2-element sequences are merged to produce two 4-element sorted sequences. After the end of the inner while loop, i + s = 8 + 2 < n = 11, and hence one sequence of size s = 2 is merged with the one remaining element to produce a 3-element sorted sequence.

(3) In the third iteration, s = 4 and t = 8. One pair of 4-element sequences is merged to produce one 8-element sorted sequence. After the end of the inner while loop, i + s = 8 + 4 ≮ n = 11, and hence no more merging takes place.

(4) In the fourth iteration, s = 8 and t = 16. Since i + t = 0 + 16 ≰ n = 11, the inner while loop is not executed. Since i + s = 0 + 8 < n = 11, the condition of the if statement is satisfied, and hence one merge of 8-element and 3-element sorted sequences takes place to produce a sorted sequence of size 11.

(5) Since now t = 16 ≮ n, the condition of the outer while loop is not satisfied, and consequently the algorithm terminates.
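Here is a Python transcription of Algorithm bottomupsort. It reuses the `merge` sketch given earlier (so it assumes that function is in scope), with the book's 1-based calls converted to 0-based indexing; the comparison counting is again an addition of this sketch.

```python
def bottom_up_sort(A):
    """Sort A in place by repeatedly merging runs of doubling size.

    Follows Algorithm bottomupsort with 0-based indexing: the book's
    merge(A, i+1, i+s, i+t) becomes merge(A, i, i+s-1, i+t-1).
    Returns the total number of element comparisons, as reported by
    the merge sketch above.
    """
    n = len(A)
    comparisons = 0
    t = 1
    while t < n:
        s, t, i = t, 2 * t, 0
        while i + t <= n:              # merge two runs of size s each
            comparisons += merge(A, i, i + s - 1, i + t - 1)
            i += t
        if i + s < n:                  # leftover elements: one more merge
            comparisons += merge(A, i, i + s - 1, n - 1)
    return comparisons

if __name__ == "__main__":
    A = [9, 4, 5, 2, 1, 7, 4, 6]
    # For n a power of 2, the count lies between (n log n)/2 and
    # n log n - n + 1, i.e., between 12 and 17 here (Observation 1.5).
    print(bottom_up_sort(A), A)
```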
1.7.1 Analysis of bottom-up merge sorting

Now, we compute the number of element comparisons performed by the algorithm for the special case when n is a power of 2. In this case, the outer while loop is executed k = log n times, once for each level in the sorting tree except the topmost level (see Fig. 1.3). Observe that since n is a power of 2, i = n after the execution of the inner while loop, and hence Algorithm merge will never be invoked in Step 8. In the first iteration, there are n/2 comparisons. In the second iteration, n/2 sorted sequences of two elements each are merged in pairs. The number of comparisons needed to merge each pair is either 2 or 3. In the third iteration, n/4 sorted sequences of four elements each are merged in pairs. The number of comparisons needed to merge each pair is between 4 and 7. In general, in the jth iteration of the while loop, there are n/2^j merge operations on pairs of subarrays of size 2^{j−1}, and it follows, by Observation 1.1, that the number of comparisons needed in the jth iteration is between (n/2^j)2^{j−1} and (n/2^j)(2^j − 1). Thus, if we let k = log n, then the number of element comparisons is at least

$$\sum_{j=1}^{k} \frac{n}{2^j}\,2^{j-1} = \sum_{j=1}^{k} \frac{n}{2} = \frac{kn}{2} = \frac{n \log n}{2},$$

and is at most

$$\sum_{j=1}^{k} \frac{n}{2^j}\left(2^j - 1\right) = \sum_{j=1}^{k} \left(n - \frac{n}{2^j}\right) = kn - n \sum_{j=1}^{k} \frac{1}{2^j} = kn - n\left(1 - \frac{1}{2^k}\right) = kn - n\left(1 - \frac{1}{n}\right) = n \log n - n + 1,$$

where the geometric sum is evaluated using Eq. 2.11 on page 78. As to the number of element assignments, there are, by Observation 1.2 applied to each merge operation, 2n element assignments in each iteration of the outer while loop, for a total of 2n log n. As a result, we have the following observation:

Observation 1.5  The total number of element comparisons performed by Algorithm bottomupsort to sort an array of n elements, where n is a power of 2, is between (n log n)/2 and n log n − n + 1. The total number of element assignments done by the algorithm is exactly 2n log n.

1.8 Time Complexity

In this section we study an essential component of algorithmic analysis, namely determining the running time of an algorithm. This theme belongs to an important area in the theory of computation, namely computational complexity, which evolved when the need for efficient algorithms arose in the 1960's and flourished in the 1970's and 1980's. The main objects of study in the field of computational complexity include the time and space needed by an algorithm in order to deliver its output when presented with legal input. We start this section with an example whose sole purpose is to reveal the importance of analyzing the running time of an algorithm.

Example 1.4  We have shown before that the maximum number of element comparisons performed by Algorithm bottomupsort when n is a power of 2 is n log n − n + 1, and that the number of element comparisons performed by Algorithm selectionsort is n(n − 1)/2. The elements may be integers, real numbers, strings of characters, etc. For concreteness, let us assume that each element comparison takes 10⁻⁶ seconds on some computing machine. Suppose we want to sort a small number of elements, say 128. Then the time taken for comparing elements using Algorithm bottomupsort is at most 10⁻⁶(128 × 7 − 128 + 1) ≈ 0.0008 seconds. Using Algorithm selectionsort, the time becomes 10⁻⁶(128 × 127)/2 ≈ 0.008 seconds. In other words, Algorithm bottomupsort uses one tenth of the time taken for comparisons by Algorithm selectionsort. This, of course, is not noticeable, especially to a novice programmer whose main concern is to come up with a program that does the job. However, if we consider a larger number, say n = 2²⁰ = 1,048,576, which is typical of many real world problems, we find the following: the time taken for comparing elements using Algorithm bottomupsort is at most 10⁻⁶(2²⁰ × 20 − 2²⁰ + 1) ≈ 20 seconds, whereas, using Algorithm selectionsort, the time becomes 10⁻⁶(2²⁰ × (2²⁰ − 1))/2 ≈ 6.4 days!

The calculations in the above example reveal the fact that time is undoubtedly an extremely precious resource to be investigated in the analysis of algorithms.
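The arithmetic in Example 1.4 is easy to reproduce. The following few lines of Python are a sketch of that calculation; the figure of 10⁻⁶ seconds per comparison is the example's own assumption, and the function name is a choice of this sketch.

```python
from math import log2

def sort_times(n, sec_per_cmp=1e-6):
    """Worst-case comparison time, in seconds, for bottomupsort vs.
    selectionsort, assuming n is a power of 2 and each element
    comparison costs sec_per_cmp seconds (as in Example 1.4)."""
    bottom_up = sec_per_cmp * (n * log2(n) - n + 1)
    selection = sec_per_cmp * n * (n - 1) / 2
    return bottom_up, selection

if __name__ == "__main__":
    print(sort_times(128))        # (~0.0008 s, ~0.008 s)
    b, s = sort_times(2**20)
    print(b, s / 86400)           # ~20 seconds vs. ~6.4 days
```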
1.8.1 Order of growth

Obviously, it is meaningless to say that an algorithm A, when presented with input x, runs in y seconds. This is because the actual time is not only a function of the algorithm used: it is a function of numerous factors, e.g. how and on what machine the algorithm is implemented, and in what language, or even with what compiler or programmer's skills, to mention a few. Therefore, we should be content with only an approximation of the exact time. But, first of all, when assessing an algorithm's efficiency, do we have to deal with exact or even approximate times? It turns out that we really do not need even approximate times. This is supported by many factors, some of which are the following. First, when analyzing the running time of an algorithm, we usually compare its behavior with that of another algorithm that solves the same problem, or even a different problem. Thus, our estimates of times are relative, as opposed to absolute. Second, it is desirable for an algorithm to be not only machine independent, but also capable of being expressed in any language, including human languages. Moreover, it should be technology independent, that is, we want our measure of the running time of an algorithm to survive technological advances. Third, our main concern is not with small input sizes; we are mostly concerned with the behavior of the algorithm under investigation on large input instances.

In fact, counting the number of operations in some "reasonable" implementation of an algorithm is more than what is needed. As a consequence of the third factor above, we can go a giant step further: a precise count of the number of all operations is very cumbersome, if not impossible, and since we are interested in the running time for large input sizes, we may talk about the rate of growth, or the order of growth, of the running time. For instance, if we can come up with some constant c > 0 such that the running time of an algorithm A when presented with an input of size n is at most cn², then c becomes inconsequential as n gets bigger and bigger. Furthermore, specifying this constant does not bring about extra insight when comparing this function with another one of different order, say dn³ for an algorithm B that solves the same problem. To see this, note that the ratio between the two functions is dn/c and, consequently, the constant ratio d/c has virtually no effect on the comparison as n becomes very large. The same reasoning applies to lower order terms, as in the function f(n) = n² log n + 10n² + n. Here, we observe that the larger the value of n, the lesser the significance of the contribution of the lower order terms 10n² and n. Therefore, we may say of the running times of algorithms A and B above that they are "of order" or "in the order of" n² and n³, respectively. Similarly, we say that the function f(n) above is of order n² log n.

Once we dispose of lower order terms and leading constants from a function that expresses the running time of an algorithm, we say that we are measuring the asymptotic running time of the algorithm. Equivalently, in the terminology of the analysis of algorithms, we may refer to this asymptotic time using the more technical term "time complexity".

Now, suppose that we have two algorithms A₁ and A₂ whose running times are both in the order of n log n. Which one should we consider to be preferable to the other? Technically, since they have the same time complexity, we say that they have the same running time within a multiplicative constant, that is, the ratio between the two running times is constant. In some cases, the constant may be important, and a more detailed analysis of the algorithm or conducting some experiments on its behavior may be helpful. Also, in this case, it may be necessary to investigate other factors, e.g. space requirements and input distribution. The latter is helpful in analyzing the behavior of an algorithm on the average.

[Figure 1.5: Growth of some typical functions that represent running times: log n, n, n log n, n², and n³, plotted against the input size.]

Figure 1.5 shows some functions that are widely used to represent the running times of algorithms.
Higher order functions and exponential and hyperexponential functions are not shown in the figure. Exponential and hyperexponential functions grow much faster than the ones shown in the figure, even for moderate values of n. Functions of the form log^k n, cn, cn² and cn³ are called, respectively, logarithmic, linear, quadratic and cubic. Functions of the form n^c or n^c log^k n, 0 < c < 1, are called sublinear. Functions that lie between linear and quadratic, e.g. n log n and n^{1.5}, are called subquadratic. Table 1.1 shows approximate running times of algorithms with running time …
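The table itself did not survive extraction. As a stand-in, here is a short Python sketch that tabulates how these typical functions grow; the one-microsecond-per-operation assumption mirrors Example 1.4 and the chosen input sizes are illustrative, not taken from the lost table.

```python
from math import log2

def growth_table(sizes=(16, 256, 4096, 2**20)):
    """Print approximate running times, in seconds, for typical
    complexity functions, assuming one operation per microsecond."""
    funcs = [
        ("log n",   lambda n: log2(n)),
        ("n",       lambda n: n),
        ("n log n", lambda n: n * log2(n)),
        ("n^2",     lambda n: n ** 2),
        ("n^3",     lambda n: n ** 3),
    ]
    print("f(n)".ljust(8) + "".join(f"n={n}".rjust(16) for n in sizes))
    for name, f in funcs:
        row = name.ljust(8)
        for n in sizes:
            row += f"{f(n) * 1e-6:>16.3g}"   # time in seconds
        print(row)

if __name__ == "__main__":
    growth_table()
```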
