Analysis and Design of Computer Algorithms

Published: 22-07-2017
Subject: Analysis and Design of Computer Algorithms
Paper Code: MCA 403
Author: Sh. Ganesh Kumar
Lesson: Divide and Conquer Strategy
Vetter: Ms Jyoti
Lesson No.: 01

STRUCTURE
1.1 Introduction
1.2 Binary Search
1.3 Finding Maximum and Minimum
1.4 Merge Sort
1.5 Quick Sort
1.6 Strassen's Matrix Multiplication
1.7 Summary
1.8 Keywords
1.9 Review Questions
1.10 Further Readings

1.1 INTRODUCTION
Given a function to be computed on n inputs, the divide and conquer strategy suggests splitting the inputs into k distinct subsets, 1 < k ≤ n, yielding k subproblems. Each of these subproblems must be solved, and then a method must be found to combine the subsolutions into a solution of the whole. If the subproblems are still relatively large, the divide and conquer strategy can be reapplied; this is natural because the subproblems are of the same type as the original problem. We can write a control abstraction that gives the flow of control but whose primary operations are specified by other procedures whose meaning is left undefined.

In DAndC(P), where P is the problem to be solved, Small(P) is a Boolean-valued function that determines whether the input size is small enough for the answer to be computed without splitting. If so, the function S is invoked; otherwise P is divided into smaller subproblems P1, P2, ..., Pk, which are solved by recursive applications of DAndC. Combine is a function that determines the solution to P from the solutions to the k subproblems.

Algorithm DAndC(P)
  if Small(P) then return S(P);
  else
    divide P into smaller instances P1, P2, ..., Pk, k ≥ 1;
    apply DAndC to each of these subproblems;
    return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));

If the size of P is n and the sizes of the k subproblems are n1, n2, ..., nk respectively, the computing time of DAndC is described by a recurrence relation, given below.
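The control abstraction above can be sketched in Python; the callback names (`small`, `solve_directly`, `divide`, `combine`) are illustrative stand-ins for the unspecified procedures Small, S and Combine:

```python
def divide_and_conquer(p, small, solve_directly, divide, combine):
    """Generic control abstraction: the four callbacks supply the
    problem-specific behaviour left unspecified in the text."""
    if small(p):                      # base case: answer computed without splitting
        return solve_directly(p)
    subproblems = divide(p)           # split P into P1 ... Pk
    solutions = [divide_and_conquer(q, small, solve_directly, divide, combine)
                 for q in subproblems]
    return combine(solutions)         # merge sub-solutions into one

# Example instance: summing a list by splitting it in half.
total = divide_and_conquer(
    [3, 1, 4, 1, 5, 9],
    small=lambda p: len(p) <= 1,
    solve_directly=lambda p: p[0] if p else 0,
    divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    combine=sum)
# total == 23
```

The example is only a demonstration of the abstraction; the sections that follow instantiate it with binary search, merge sort, quick sort and Strassen's method.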
T(n) = g(n) when n is small; otherwise
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n)
where T(n) is the time for DAndC on any input of size n, g(n) is the time to compute the answer directly for small inputs, and f(n) is the time for dividing P and combining the solutions of the subproblems. When the subproblems are of the same type as the original problem, we describe the algorithm using recursion.

1.2 BINARY SEARCH
We assume that we have n ≥ 1 distinct integers stored in increasing order in an array A. We are required to determine whether a given integer x is present in the list; if x is present, determine a value j such that x = A(j), and if x is not in the list, set j to 0. Using divide and conquer, and making use of the fact that the list is sorted, let A(mid) be the middle element. There are three possibilities:
i) x < A(mid): x can only occur among A(1) to A(mid - 1).
ii) x > A(mid): x can only occur among A(mid + 1) to A(n).
iii) x = A(mid): set j to mid and return.
We continue in this way, keeping two pointers, lower and upper, to indicate the range of elements still to be searched.

Algorithm BINSRCH(A, n, x, j)
1- lower ← 1; upper ← n
2- while lower ≤ upper do
3-   mid ← ⌊(lower + upper)/2⌋
4-   case
5-     : x > A(mid) : lower ← mid + 1
6-     : x < A(mid) : upper ← mid - 1
7-     : else : j ← mid; return
8-   end
9- end
10- j ← 0

In the binary search method, it is always the key in the middle of the subfile currently being searched that is used for comparison. This splitting process can be described by drawing a binary decision tree. Suppose there are 31 records; then the first key tested is K16, since ⌊(1 + 31)/2⌋ = 16. If K is less than K16, then K8 is tested next, since ⌊(1 + 15)/2⌋ = 8; if K is greater than K16, then K24 is tested.
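A minimal Python rendering of BINSRCH, keeping the text's 1-based index convention (j = 0 signals absence); the function name `binary_search` is ours:

```python
def binary_search(a, x):
    """Return the 1-based index j of x in the sorted list a (as in
    BINSRCH), or 0 if x is not present."""
    lower, upper = 1, len(a)
    while lower <= upper:
        mid = (lower + upper) // 2
        if x > a[mid - 1]:          # x can only lie in A(mid+1)..A(n)
            lower = mid + 1
        elif x < a[mid - 1]:        # x can only lie in A(1)..A(mid-1)
            upper = mid - 1
        else:
            return mid              # found: j = mid
    return 0                        # not present: j = 0
```

Each probe halves the remaining range, so at most O(log n) comparisons are made.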
The binary decision tree, level by level, is:
                  16
          8               24
      4      12      20      28
    2   6  10  14  18  22  26  30
   1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31

1.3 FINDING MAXIMUM AND MINIMUM
The divide-and-conquer strategy: e.g., find the maximum of a set S of n numbers.

Time complexity:
T(n) = 2T(n/2) + 1 for n > 2, and T(n) = 1 for n ≤ 2.
Assume n = 2^k:
T(n) = 2T(n/2) + 1
     = 2(2T(n/4) + 1) + 1
     = 4T(n/4) + 2 + 1
     ...
     = 2^(k-1) T(2) + 2^(k-2) + ... + 4 + 2 + 1
     = 2^(k-1) + 2^(k-2) + ... + 4 + 2 + 1
     = 2^k - 1 = n - 1

A general divide-and-conquer algorithm:
Step 1: If the problem size is small, solve the problem directly; otherwise, split the original problem into 2 sub-problems of equal size.
Step 2: Recursively solve these 2 sub-problems by applying this algorithm.
Step 3: Merge the solutions of the 2 sub-problems into a solution of the original problem.

Time complexity:
T(n) = 2T(n/2) + S(n) + M(n) for n ≥ c, and T(n) = b for n < c,
where S(n) is the time for splitting, M(n) is the time for merging, and b and c are constants.

The 2-D maxima finding problem:
Def: A point (x1, y1) dominates (x2, y2) if x1 > x2 and y1 > y2. A point is called a maximal point if no other point dominates it.
Straightforward method: compare every pair of points; time complexity O(n^2).

Algorithm 5.1: A Divide-and-Conquer Approach to Find Maximal Points in the Plane
Input: A set S of n planar points.
Output: The maximal points of S.
Step 1. If S contains only one point, return it as the maximal point. Otherwise, find a line L perpendicular to the X-axis which separates S into two subsets S_L and S_R, each consisting of n/2 points.
Step 2. Recursively find the maximal points of S_L and S_R.
Step 3. Project the maximal points of S_L and S_R onto L and sort these points according to their y-values. Conduct a linear scan on the projections and discard each maximal point of S_L if its y-value is less than the y-value of some maximal point of S_R.
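Algorithm 5.1 can be sketched in Python, assuming the input points are pre-sorted by x-coordinate with distinct x-values; instead of sorting the projections, only the largest y-value of S_R is used (the improvement noted in the text), so no right point can dominate a left point that survives the filter:

```python
def maxima(points):
    """Divide-and-conquer maximal-point finding (after Algorithm 5.1).
    points: list of (x, y) tuples, sorted by x, with distinct x-values."""
    if len(points) <= 1:
        return list(points)
    mid = len(points) // 2                 # the vertical line L splits S into S_L, S_R
    left = maxima(points[:mid])            # maximal points of S_L
    right = maxima(points[mid:])           # maximal points of S_R
    best_right_y = max(p[1] for p in right)
    # a left point is dominated (strictly) by some right point
    # exactly when its y-value is below best_right_y
    return [p for p in left if p[1] >= best_right_y] + right

pts = sorted([(4, 2), (1, 4), (3, 1), (2, 3)])
# maxima(pts) -> [(1, 4), (2, 3), (4, 2)]; (3, 1) is dominated by (4, 2)
```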
Time complexity:
T(n) = 2T(n/2) + O(n) + O(n log n) for n > 1, and T(1) = 1.
Assume n = 2^k:
T(n) = O(n log n) + O(n log^2 n) = O(n log^2 n)

Improvements:
(1) In Step 3, instead of sorting, find only the largest y-value of S_R.
Time complexity: T(n) = 2T(n/2) + O(n) + O(n) for n > 1, and T(1) = 1
⇒ T(n) = O(n log n)
(2) The sorting of y-values need be done only once (a single presorting); no sorting is then needed in Step 3.
Time complexity: O(n log n) + T(n) = O(n log n), where
T(n) = 2T(n/2) + O(n) + O(n) for n > 1, and T(1) = 1.

1.4 MERGE SORT
Using the divide and conquer strategy we obtain a sorting algorithm called merge sort: we split a given set of numbers into two equal-sized sets, and the combining operation is the merging of the two sorted sets into one. Consider a sequence of n elements split into (a1, ..., a_n/2) and (a_n/2+1, ..., an). Each set is sorted individually, and the resulting sorted sequences are merged to produce a single sorted sequence of n elements. The merge sort algorithm uses recursion and a function Merge, which merges two sorted sets.

Algorithm MergeSort(low, high)
  if (low < high) then
    mid := ⌊(low + high)/2⌋;
    MergeSort(low, mid);
    MergeSort(mid + 1, high);
    Merge(low, mid, high);

Algorithm Merge(low, mid, high)
  i := low; j := low; k := mid + 1;
  while ((i ≤ mid) and (k ≤ high)) do
    if (a[i] ≤ a[k]) then
      b[j] := a[i]; i := i + 1;
    else
      b[j] := a[k]; k := k + 1;
    j := j + 1;
  if (i > mid) then
    for L := k to high do
      b[j] := a[L]; j := j + 1;
  else
    for L := i to mid do
      b[j] := a[L]; j := j + 1;
  for L := low to high do
    a[L] := b[L];

Figure: The operation of merge sort on the array A = (5, 2, 4, 7, 1, 3, 2, 6), read from bottom (initial sequence) to top (sorted sequence), merging at each level:
  1 2 2 3 4 5 6 7
  2 4 5 7 | 1 2 3 6
  2 5 | 4 7 | 1 3 | 2 6
  5 | 2 | 4 | 7 | 1 | 3 | 2 | 6
The lengths of the sorted sequences being merged increase as the algorithm progresses from bottom to top. The time for the merging operation is proportional to n. The computing time for merge sort is:
T(n) = a for n = 1, where a is a constant, and
T(n) = 2T(n/2) + Cn for n > 1, where C is a constant.
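A runnable Python sketch of the merge sort just described, using lists rather than the text's arrays a and b; `merge_sort` and `merge` mirror MergeSort and Merge:

```python
def merge_sort(a):
    """Sort a list by splitting it in half, sorting each half
    recursively, and merging the two sorted halves."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))

def merge(left, right):
    """Merge two sorted lists in time proportional to their total length."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])        # at most one of these two tails is non-empty
    out.extend(right[j:])
    return out

# merge_sort([5, 2, 4, 7, 1, 3, 2, 6]) -> [1, 2, 2, 3, 4, 5, 6, 7]
```

Because `merge` takes equal keys from the left half first, this sketch is also stable (relevant to review question 1).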
When n is a power of 2, n = 2^k, successive substitutions give:
T(n) = 2T(n/2) + Cn
     = 2(2T(n/4) + Cn/2) + Cn
     = 4T(n/4) + 2Cn
     = 2^2 T(n/2^2) + 2Cn
     ...
     = 2^k T(n/2^k) + kCn      (n = 2^k)
     = 2^k T(1) + kCn
     = 2^k a + kCn
     = na + Cn log n           (k = log2 n)
Hence merge sort is O(n log n).

1.5 QUICK SORT
Quick sort, like merge sort, is based on the divide and conquer paradigm. Here is the three-step divide and conquer process for sorting a subarray A[p..r].

Divide: Partition (rearrange) the array A[p..r] into two (possibly empty) subarrays A[p..q-1] and A[q+1..r] such that each element of A[p..q-1] is less than or equal to A[q], which in turn is less than or equal to each element of A[q+1..r]. Compute the index q as part of this partitioning procedure.
Conquer: Sort the two subarrays A[p..q-1] and A[q+1..r] by recursive calls to quicksort.
Combine: The subarrays are sorted in place; there is no need to combine them. The entire array A[p..r] is now sorted.

Algorithm Quicksort(A, p, r)
1- if p < r
2- then q ← Partition(A, p, r)
3-      Quicksort(A, p, q - 1)
4-      Quicksort(A, q + 1, r)

To sort an entire array A, the initial call is Quicksort(A, 1, length[A]).

Partitioning the array:
Partition(A, p, r)
1- x ← A[r]
2- i ← p - 1
3- for j ← p to r - 1
4-   do if A[j] ≤ x
5-     then i ← i + 1
6-          exchange A[i] ↔ A[j]
7- exchange A[i + 1] ↔ A[r]
8- return i + 1
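Quicksort and the partition procedure above translate to Python as follows, with 0-based indices rather than the text's 1-based ones:

```python
def partition(a, p, r):
    """Partition a[p..r] around the pivot x = a[r]; place the pivot at
    its final index q and return q."""
    x = a[r]                        # pivot
    i = p - 1
    for j in range(p, r):
        if a[j] <= x:               # a[j] belongs left of the pivot
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

def quicksort(a, p=0, r=None):
    """Sort a[p..r] in place; the default call sorts the whole list."""
    if r is None:
        r = len(a) - 1
    if p < r:
        q = partition(a, p, r)
        quicksort(a, p, q - 1)      # elements <= pivot
        quicksort(a, q + 1, r)      # elements >= pivot

nums = [65, 70, 75, 80, 85, 60, 55, 50, 45]   # instance from review question 2
quicksort(nums)
# nums is now sorted in place
```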
Imagine that A and B are each partitioned into four submatrices, each of dimension n/2 x n/2. Then the product AB can be computed by using the formula for the product of 2 x 2 matrices:

  [A11 A12] [B11 B12]   [C11 C12]
  [A21 A22] [B21 B22] = [C21 C22]

Then:
C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22

If n = 2 the above formulas are computed using multiplication operations on the elements of A and B. For n > 2 the elements of C can be computed using matrix multiplication and addition operations applied to matrices of size n/2 x n/2; since n is a power of 2, these products can be recursively computed by the same algorithm.

Strassen formulated a method to reduce the number of matrix multiplications so as to reduce the complexity of the algorithm. This method uses only 7 multiplications and 18 additions or subtractions. It first computes seven n/2 x n/2 matrices P, Q, R, S, T, U and V as below:

P = (A11 + A22)(B11 + B22)
Q = (A21 + A22) B11
R = A11 (B12 - B22)
S = A22 (B21 - B11)
T = (A11 + A12) B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)

C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U

The resulting time complexity is
T(n) = b for n ≤ 2, and T(n) = 7T(n/2) + an^2 for n > 2,
where a and b are constants.

Limitations of Strassen's Algorithm
From a practical point of view, Strassen's algorithm is often not the method of choice for matrix multiplication, for the following four reasons:
1. The constant factor hidden in the running time of Strassen's algorithm is larger than the constant factor in the naive Θ(n^3) method.
2. When the matrices are sparse, methods tailored for sparse matrices are faster.
3. Strassen's algorithm is not quite as numerically stable as the naive method.
4. The submatrices formed at the levels of recursion consume space.
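One step of Strassen's method can be written out directly for the 2 x 2 case; `strassen_2x2` is an illustrative helper that computes the seven products P..V and the four C entries as scalars (for larger n the same formulas would be applied to n/2 x n/2 blocks):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen).
    A and B are nested lists [[a11, a12], [a21, a22]]."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    p = (a11 + a22) * (b11 + b22)   # the seven products P..V
    q = (a21 + a22) * b11
    r = a11 * (b12 - b22)
    s = a22 * (b21 - b11)
    t = (a11 + a12) * b22
    u = (a21 - a11) * (b11 + b12)
    v = (a12 - a22) * (b21 + b22)
    return [[p + s - t + v, r + t],     # C11, C12
            [q + s, p + r - q + u]]     # C21, C22

C = strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]])
# C == [[19, 22], [43, 50]], agreeing with the classical formula
```

The 10 additions/subtractions inside P..V plus the 8 in the C entries account for the 18 additions mentioned above.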
1.7 SUMMARY
This unit discusses various issues in respect of the technique viz., divide and conquer, for designing and analyzing algorithms for solving problems. First, the general plan of the divide and conquer technique is explained. The issue of whether at some stage to solve a problem directly or whether to further subdivide it is discussed in terms of the relative efficiencies of the two alternatives. The technique is illustrated with examples of its application to binary search, sorting, finding the maximum and minimum of given data, and matrix multiplication. Under sorting, the well-known techniques viz., merge sort and quick sort are discussed in detail.

1.8 KEYWORDS
Divide: the problem into a number of sub problems.
Conquer: the sub problems by solving them recursively. If the sub problem sizes are small enough, however, just solve the sub problems in a straightforward manner.
Combine: the solutions to the sub problems into the solution for the original problem.

1.9 REVIEW QUESTIONS
1. A sorting method is said to be stable if, at the end of the method, identical elements occur in the same order as in the original unsorted input. Is merge sort a stable sorting method?
2. Apply the quick sort algorithm to sort the numbers 65, 70, 75, 80, 85, 60, 55, 50, 45.
3. Explain the method used by Strassen to reduce the complexity of matrix multiplication.
4. What is the worst case complexity of quick sort and when does it occur?
5. Apply the divide and conquer strategy to select a particular number x from an array of 10 numbers.
6. Multiply the following two matrices using Strassen's algorithm:
   [ 5  6]       [-7  6]
   [-5  3]  and  [ 5  9]

1.10 FURTHER READINGS
1. Horowitz E. and Sahni S., "Fundamentals of Computer Algorithms", Galgotia Publications.
2. Aho A. V., Hopcroft J. E. and Ullman J. D., "The Design and Analysis of Computer Algorithms", Addison Wesley.
3. D. Harel, "Algorithmics: The Spirit of Computing", Addison Wesley.
Subject: Analysis and Design of Computer Algorithms
Paper Code: MCA 403
Author: Sh. Ganesh Kumar
Lesson: Greedy Method
Vetter: Jyoti
Lesson No.: 02

STRUCTURE
2.1 Introduction
2.2 Optimal Storage on Tapes
2.3 Knapsack Problem
2.4 Making Change (Money)
2.5 Minimum Spanning Trees
2.5.1 Prim's Algorithm
2.5.2 Kruskal's Algorithm
2.6 Single Source Shortest Path
2.6.1 Dijkstra's Algorithm
2.7 Summary
2.8 Keywords
2.9 Review Questions
2.10 Further Readings

2.1 INTRODUCTION
The greedy method is a design technique that can be applied to a wide variety of problems. Most of these problems have n inputs and require us to obtain a subset that satisfies some constraints. Any subset that satisfies the constraints is called a feasible solution. Feasible solutions are required to either maximize or minimize a given objective function; a feasible solution that does this is called an optimal solution. The greedy method devises an algorithm that works in stages, considering one input at a time. At each stage a decision is made regarding whether a particular input is in an optimal solution. This is done by considering the inputs in an order determined by some selection procedure. If adding the input leads to an infeasible solution, the input is not added to the partial solution; otherwise it is added.

Algorithm Greedy(a, n)
  solution := Ø;
  for i := 1 to n do
    x := Select(a);
    if Feasible(solution, x) then
      solution := Union(solution, x);
  return solution;

The function Select selects an input from a and removes it; the selected value is assigned to x. Feasible is a Boolean-valued function that determines whether x can be included in the solution vector. The function Union combines x with the solution and updates the objective function. Problems that call for the selection of an optimal subset use the version of the greedy technique called the subset paradigm.
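The greedy control abstraction can be sketched in Python; `key` and `feasible` are placeholder callbacks standing in for the Select and Feasible procedures:

```python
def greedy(inputs, key, feasible):
    """Greedy control abstraction: consider the inputs in the order
    given by `key`; add each one to the partial solution only if the
    solution stays feasible."""
    solution = []
    for x in sorted(inputs, key=key):   # Select: next input by the chosen order
        if feasible(solution, x):       # Feasible: can x join the solution?
            solution.append(x)          # Union: extend the partial solution
    return solution

# Example instance: pick numbers in increasing order while their sum stays <= 10.
chosen = greedy([7, 3, 5, 2, 4],
                key=lambda x: x,
                feasible=lambda sol, x: sum(sol) + x <= 10)
# chosen == [2, 3, 4]
```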
Problems that make decisions by considering the inputs in some order, where each decision is made using an optimization criterion that can be computed from the decisions already made, use the version of the greedy method called the ordering paradigm.

Example 2.1: Suppose we have to go from city A to city E through either city B or city C or city D, with the costs of reaching between pairs of cities as shown below:

Figure 2.1.1: A to B costs Rs. 3000, A to C costs Rs. 4000, A to D costs Rs. 5000; B to E costs Rs. 8000, C to E costs Rs. 5000, D to E costs Rs. 4000.

The greedy technique suggests that we take the route from A to B, whose cost of Rs. 3000 is the minimum among the three costs (viz., Rs. 3000, Rs. 4000 and Rs. 5000) of the available first legs. However, at B there is only one route available to reach E. Thus the greedy algorithm suggests the route A to B to E, which costs Rs. 11000. But the route A to C to E costs only Rs. 9000, and the route A to D to E also costs Rs. 9000. Thus a locally better solution at some stage, as suggested by the greedy technique, can yield an overall (globally) costlier solution.

2.2 OPTIMAL STORAGE ON TAPES
There are n programs that are to be stored on a computer tape of length L. Associated with each program i is a length l_i, where 1 ≤ i ≤ n. Programs can be stored on the tape if and only if the sum of their lengths is at most L. If the programs are stored in the order I = i1, i2, ..., in, then the time t_j needed to retrieve program i_j is proportional to
  Σ_{1≤k≤j} l_ik.
If all programs are retrieved equally often, then the expected or mean retrieval time (MRT) is
  (1/n) Σ_{1≤j≤n} t_j.
In the optimal storage on tapes problem we are required to find a permutation of the n programs so that, when they are stored on the tape in this order, the MRT is minimized. This problem fits the ordering paradigm. Minimizing the MRT is equivalent to minimizing
  d(I) = Σ_{1≤j≤n} Σ_{1≤k≤j} l_ik.
Using a greedy approach we choose the next program on the basis of some optimization measure.
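A small Python check of the claim that storing programs in increasing order of length minimizes the MRT; the helper `mrt` and the brute-force search over permutations are illustrative, not part of the text:

```python
from itertools import permutations

def mrt(lengths, order):
    """Mean retrieval time for programs stored on one tape in the given
    order: t_j is the sum of the lengths of the first j programs."""
    total, prefix = 0, 0
    for i in order:
        prefix += lengths[i]   # t_j for the program at position j
        total += prefix
    return total / len(order)

lengths = [5, 10, 3]           # l_1, l_2, l_3 (illustrative values)
best = min(permutations(range(len(lengths))),
           key=lambda order: mrt(lengths, order))
# best == (2, 0, 1): shortest program (length 3) first, longest last
```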
The next program to be stored on the tape is the one that minimizes the increase in d. If we have already constructed the permutation i1, i2, ..., ir, then appending program j gives the permutation i1, i2, ..., ir, i_{r+1} = j. This increases the d value by Σ_{1≤k≤r} l_ik + l_j. Since Σ_{1≤k≤r} l_ik is fixed and independent of j, we trivially observe that the increase in d is minimized if the next program chosen is the one with the least length among the remaining programs.

The tape storage problem can be extended to several tapes T0, T1, ..., T_{m-1}, over which the programs are to be distributed. The total retrieval time is
  TD = Σ_{0≤j≤m-1} d(I_j).
The objective is to store the programs in such a way as to minimize TD. If the programs are initially ordered so that l1 ≤ l2 ≤ ... ≤ ln, then the first m programs are assigned to tapes T0, T1, ..., T_{m-1} respectively, the next m programs are again assigned to T0, T1, ..., T_{m-1} respectively, and so on. The general rule is that program i is stored on tape T_{i mod m}; on any given tape the programs are stored in increasing order of their lengths.

2.3 KNAPSACK PROBLEM
The knapsack problem calls for selecting a subset of the objects and hence fits the subset paradigm. We are given n objects and a knapsack or bag. Object i has a weight w_i and the knapsack has a capacity m. If a fraction x_i, 0 ≤ x_i ≤ 1, of object i is placed into the knapsack, then a profit of p_i x_i is earned. The objective is to obtain a filling of the knapsack that maximizes the total profit earned. Since the knapsack capacity is m, we require the total weight of all chosen objects to be at most m. Formally, the problem can be stated as:
  maximize Σ_{1≤i≤n} p_i x_i
  subject to Σ_{1≤i≤n} w_i x_i ≤ m
  and 0 ≤ x_i ≤ 1, 1 ≤ i ≤ n.
In addition to selecting a subset, the problem also involves the selection of an x_i for each object.
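The formulation above can be attacked greedily; this Python sketch takes objects in decreasing profit-per-weight order, the ratio rule developed in this section. The instance p = (25, 24, 15), w = (18, 15, 10), m = 20 is a standard illustration, not taken from the text:

```python
def greedy_knapsack(profits, weights, m):
    """Fractional knapsack: take objects in decreasing p_i/w_i ratio,
    splitting the last object taken to fill the remaining capacity."""
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)
    u = m                              # remaining capacity
    for i in order:
        if weights[i] > u:
            x[i] = u / weights[i]      # fractional fit: knapsack is now full
            break
        x[i] = 1.0
        u -= weights[i]
    return x

# ratios: 25/18 ~ 1.39, 24/15 = 1.6, 15/10 = 1.5, so object 2 is taken whole,
# then half of object 3; total profit 24 + 7.5 = 31.5
x = greedy_knapsack([25, 24, 15], [18, 15, 10], 20)
# x == [0.0, 1.0, 0.5]
```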
Some of the greedy strategies for obtaining feasible solutions are:
1- Try to fill the knapsack by including next the object with the largest profit. If the object does not fit, a fraction of it is included to fill the knapsack. Thus each time an object is included in the knapsack we obtain the largest possible increase in profit.
2- Considering the objects in order of decreasing profit values does not, however, yield an optimal solution, because even though the objective function takes on large increases at each step, the number of steps is small, as the knapsack capacity is used up at a rapid rate.
3- The next attempt strives to achieve a balance between the rate at which profit increases and the rate at which capacity is used: at each step include the object with the maximum profit per unit of capacity, i.e., consider the objects in decreasing order of the ratio p_i/w_i.

Algorithm GreedyKnapsack(m, n)
  // objects are assumed sorted so that p[1]/w[1] ≥ p[2]/w[2] ≥ ... ≥ p[n]/w[n]
  for i := 1 to n do x[i] := 0.0;
  u := m;
  for i := 1 to n do
    if (w[i] > u) then break;
    x[i] := 1.0;
    u := u - w[i];
  if (i ≤ n) then x[i] := u / w[i];

2.4 MAKING CHANGE (MONEY)
First of all, we state a special case of the problem of making change, and then discuss the problem in its general form. We, in India, have currency notes or coins of denominations of Rupees 1, 2, 5, 10, 20, 50, 100, 500 and 1000. Suppose a person has to pay an amount of Rs. 5896 after having decided to purchase an article. Then the problem is how to pay the amount using the minimum number of coins/notes. A simple, frequently and unconsciously used algorithm based on the greedy technique is: after having collected an amount A < 5896, choose a note of denomination D such that
(i) A + D ≤ 5896, and
(ii) D is the maximum denomination for which (i) is satisfied, i.e., if E > D then A + E > 5896.
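The largest-denomination-first rule can be sketched in Python; `greedy_change` is a hypothetical helper that returns None when the rule gets stuck before reaching the amount:

```python
def greedy_change(denominations, amount):
    """Pay `amount` by repeatedly taking the largest denomination that
    still fits; returns the list of notes used, or None if the greedy
    rule gets stuck short of the amount."""
    notes = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            notes.append(d)
            amount -= d
    return notes if amount == 0 else None

rupees = [1, 2, 5, 10, 20, 50, 100, 500, 1000]
paid = greedy_change(rupees, 5896)
# 14 notes/coins: five 1000s, one 500, three 100s, one 50, two 20s, a 5 and a 1

stuck = greedy_change([5, 3, 2], 9)
# None: greedy takes 5, then 3, and cannot reach 9, yet 9 = 5 + 2 + 2
```

With the Indian denominations the rule happens to be optimal; the second call shows the kind of denomination set on which it fails, which the general discussion below takes up.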
In general, the change problem may be stated as follows: let d1, d2, ..., dk, with di > 0 for i = 1, 2, ..., k, be the only coins that are available, such that each coin of denomination di is available in sufficient quantity for the purpose of making payments. Further, let A, a positive integer, be the amount to be paid using the above-mentioned coins. The problem is to use the minimum number of coins for the purpose.

The problem with the above-mentioned greedy algorithm is that in some cases it may either fail or yield suboptimal solutions. In order to establish the inadequacy of algorithms based on the greedy technique, we consider the following example.

Example 1.2.1: Let us assume a hypothetical situation in which we have a supply of rupee notes of denominations 5, 3 and 2, and we are to collect an amount of Rs. 9. Using the greedy technique, first we choose a note of Rupees 5. Next we choose a 3-rupee note to make a total amount of Rupees 8. But then, according to the greedy technique, we cannot go ahead in the direction of collecting Rupees 9. The greedy technique fails even though a solution exists, as it is possible to make the payment of Rupees 9 using notes of denominations Rupees 5 and Rupees 2, viz., 9 = 5 + 2 + 2.

2.5 MINIMUM SPANNING TREES
