LECTURE NOTES ON DATA AND FILE STRUCTURE

B.Tech. 3rd Semester, Computer Science & Engineering and Information Technology

Prepared by
Dr. Rakesh Mohanty
Dr. Manas Ranjan Kabat
Mr. Sujaya Kumar Sathua

VEER SURENDRA SAI UNIVERSITY OF TECHNOLOGY, BURLA
SAMBALPUR, ODISHA, INDIA - 768018

3RD SEMESTER B.Tech. (CSE, IT)
BCS-202 DATA AND FILE STRUCTURE - (3-0-0) Cr.-3

Proposed Lecture Plan

Lecture 1: Motivation, objective of studying the subject, overview of syllabus
Lecture 2: Module I - Introduction to data & file structures
Lecture 3: Linear data structures - linked list and applications
Lecture 4: Stack and queue
Lecture 5: Module II - Introduction to non-linear data structures
Lecture 6: General trees, binary trees, conversion of general tree to binary
Lecture 7: Binary search tree
Lecture 8: Red-black trees
Lecture 9: Multi-linked structures
Lecture 10: Heaps
Lecture 11: Spanning trees, applications of trees
Lecture 12: Module III - Introduction to sorting
Lectures 13, 14: Growth of functions, 'O' notation, complexity of algorithms
Lecture 15: Internal sorting, insertion sort, selection sort
Lecture 16: Bubble sort, quick sort, heap sort
Lecture 17: Radix sort, external sort, multi-way merge
Lecture 18: Module IV - Introduction to searching, sequential search, binary search
Lecture 19: Search tree traversal
Lecture 20: Threaded binary search trees
Lecture 21: AVL tree - concept and construction
Lecture 22: Balancing AVL trees - RR, LL, LR and RL rotations
Lecture 23: Module V - Introduction to hashing
Lecture 24: Hashing techniques, hash functions
Lecture 25: Address calculation techniques - common hashing functions
Lecture 26: Collision resolution
Lecture 27: Linear probing, quadratic probing
Lecture 28: Double hashing
Lecture 29: Bucket addressing
Lecture 30: Module VI - Introduction to file structures
Lecture 31: External storage devices
Lecture 32: Records - concepts and organization
Lecture 33: Sequential files - structures and processing
Lecture 34: Indexed sequential files - structures and processing
Lecture 35: Direct files
Lecture 36: Multi-key access

INTRODUCTION

DATA STRUCTURE: The structural representation of data items in primary memory, organised so that storage and retrieval operations can be performed efficiently.

FILE STRUCTURE: The representation of data items in secondary memory.

While designing a data structure, the following perspectives have to be looked after:
i. Application (user) level: a way of modelling real-life data in a specific context.
ii. Abstract (logical) level: an abstract collection of elements and operations.
iii. Implementation level: the representation of the structure in a programming language.

Data structures are needed to solve real-world problems, but while choosing an implementation it is necessary to consider efficiency in terms of TIME and SPACE.

TYPES:
i. Simple: built from primitive data types like int, char and boolean, e.g. array and structure.
ii. Compound: simple structures combined in various ways to form more complex structures.
   1. Linear: elements share an adjacency relationship and form a sequence, e.g. stack, queue, linked list.
   2. Non-linear: multi-level data structures, e.g. tree, graph.

ABSTRACT DATA TYPE:
- Specifies the logical properties of a data type or data structure.
- Refers to the mathematical concept that governs them.
- Is not concerned with implementation details like space and time efficiency.
- Is defined by three components, the triple (D, F, A):
  D = set of domains
  F = set of functions
  A = set of axioms / rules
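As an illustrative sketch of the (D, F, A) triple (the operation names below are chosen for this example, they are not part of the syllabus), a stack of integers could be specified as:

    D = { all finite sequences of integers, including the empty sequence }
    F = { create(), push(S, x), pop(S), top(S), isEmpty(S) }
    A = { pop(push(S, x)) = S,
          top(push(S, x)) = x,
          isEmpty(create()) = true,
          isEmpty(push(S, x)) = false }

The axioms describe how the operations behave without saying anything about how the stack is stored, which is exactly the sense in which an abstract data type ignores implementation details.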
LINKED LIST:

- A dynamic data structure.
- A linear collection of data items.
- A direction is associated with it: a logical link exists between items, and pointers act as the logical link.
- It consists of nodes that have two fields:
  - Data field: the information of the element.
  - Next field: a pointer containing the address of the next node.

TYPES OF LINKED LIST:
i. Singly linked (chain): a single link between items.
ii. Doubly linked: there are two links, a forward and a backward link.
iii. Circular: the last node is linked back to the first node. These can be singly circular or doubly circular lists.

ADVANTAGES:
- Linked lists use dynamic memory allocation, so memory is allocated while the program runs and the list can grow and shrink as needed. Arrays use static memory allocation, so space is wasted when fewer elements are stored, and overflow is possible because the amount of storage is fixed.
- Nodes are stored non-contiguously, so insertion and deletion operations are easily implemented.
- Linear data structures like stacks and queues are easily implemented using a linked list.

DISADVANTAGES:
- Memory is wasted because the pointers require extra storage.
- Nodes are stored non-contiguously, which increases the time required to access individual elements. To access the nth item an array needs a single operation, while a linked list has to pass through (n-1) items.
- Nodes must be read in order from the beginning, as the list only supports sequential access.
- Reverse traversal is difficult, especially in a singly linked list. In a doubly linked list memory is spent on the space for back pointers.

DEFINING A LINKED LIST:

struct node
{
    int info;              /* data field */
    struct node *next;     /* next field: an example of a self-referential structure (*) */
} *ptr;

(*) Self-referential structure: a structure that refers to another structure of the same type. Here "next" points to a structure of type "node". ptr is a pointer of type node. To access info and next the syntax is:

ptr->info;
ptr->next;

OPERATIONS ON A SINGLY LINKED LIST:
i. Searching
ii. Insertion
iii. Deletion
iv. Traversal
v. Reversal
vi. Splitting
vii. Concatenation

Some operations:

a. Insertion:

void push(struct node **headref, int data)    /* (1) */
{
    struct node *newnode = malloc(sizeof(struct node));
    newnode->info = data;
    newnode->next = *headref;
    *headref = newnode;
}

(1): headref is a pointer to a pointer of type struct node. Passing a pointer to a pointer like this is called a reference pointer, and the declaration is similar to a call by reference: because the address of the caller's pointer is passed, the function works with the original variable.

i. Insertion at head:

struct node *head = NULL;
for (int i = 1; i < 6; i++)
    push(&head, i);    /* push is called here; the first data pushed is 1.
                          '&' is used because the address of head is passed as the argument. */
return head;

Output: 5 4 3 2 1 (the items appear in reverse order, which is a demerit).

ii. Insertion at tail:

struct node *add1()
{
    struct node *head = NULL;
    struct node *tail;
    push(&head, 1);
    tail = head;
    for (int i = 2; i < 6; i++)
    {
        push(&(tail->next), i);
        tail = tail->next;
    }
    return head;
}

Output: 1 2 3 4 5

b. Traversal:

int count(struct node *p)
{
    int count = 0;
    struct node *q = p;
    while (q != NULL)
    {
        q = q->next;
        count++;
    }
    return count;
}

c. Searching:

struct node *search(struct node *list, int x)
{
    struct node *p;
    for (p = list; p != NULL; p = p->next)
        if (p->info == x)
            return p;
    return NULL;
}
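The fragments above can be combined into one small, self-contained program. The following is only a sketch using the conventions of these notes (an info field and the push/count/search routines shown above); the main function and its test values are our own addition.

#include <stdio.h>
#include <stdlib.h>

struct node
{
    int info;
    struct node *next;
};

/* insert a new node at the front of the list */
void push(struct node **headref, int data)
{
    struct node *newnode = malloc(sizeof(struct node));
    newnode->info = data;
    newnode->next = *headref;
    *headref = newnode;
}

/* traversal: count the nodes in the list */
int count(struct node *p)
{
    int c = 0;
    for (struct node *q = p; q != NULL; q = q->next)
        c++;
    return c;
}

/* searching: return the first node holding x, or NULL if x is absent */
struct node *search(struct node *list, int x)
{
    for (struct node *p = list; p != NULL; p = p->next)
        if (p->info == x)
            return p;
    return NULL;
}

int main(void)
{
    struct node *head = NULL;
    for (int i = 1; i < 6; i++)              /* insertion at head */
        push(&head, i);

    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->info);              /* prints: 5 4 3 2 1 */
    printf("\ncount = %d\n", count(head));   /* prints: count = 5 */
    printf("3 %s\n", search(head, 3) ? "found" : "not found");
    return 0;
}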
IMPLEMENTATION OF LISTS:

i. Array implementation:

#define NUMNODES 100

struct nodetype
{
    int info, next;
};
struct nodetype node[NUMNODES];

Here 100 nodes are declared as the array node. A pointer to a node is represented by an array index, so a pointer is an integer between 0 and NUMNODES-1, and the NULL pointer is represented by -1. node[p] is used to reference node(p); info(p) is referenced by node[p].info and next(p) by node[p].next.

ii. Dynamic implementation:

This is the same as the code written under "Defining a linked list". Using malloc() and free(), nodes can be allocated and freed dynamically. It is identical to the array implementation except that the next field is a pointer rather than an integer.

NOTE: The major demerit of the dynamic implementation is that it may be more time consuming to call upon the system to allocate and free storage than to manipulate a programmer-managed list. The major advantage is that a set of nodes does not have to be reserved in advance.

SORTING

Introduction

- Sorting is the process of arranging items in a certain sequence or into different sets.
- The main purpose of sorting information is to optimise its usefulness for a specific task.
- Sorting is one of the most extensively researched subjects because of the need to speed up operations on thousands or millions of records during a search.

Types of sorting:

- Internal sorting: An internal sort is any data sorting process that takes place entirely within the main memory of a computer. This is possible whenever the data to be sorted is small enough to be held in main memory. For larger datasets it may be necessary to hold only a chunk of the data in memory at a time, since it will not all fit; the rest of the data is normally held on some larger but slower medium, like a hard disk. Any reading or writing of data to and from this slower medium can slow the sorting process considerably.

- External sorting: Many important sorting applications involve processing very large files, much too large to fit into the primary memory of any computer. Methods appropriate for such applications are called external methods, since they involve a large amount of processing external to the central processing unit. Two major factors make external algorithms quite different:
  First, the cost of accessing an item is orders of magnitude greater than any bookkeeping or calculating costs.
  Second, over and above this higher cost, there are severe restrictions on access, depending on the external storage medium used: for example, items on a magnetic tape can be accessed only sequentially.

Well-known sorting methods: insertion sort, merge sort, bubble sort, selection sort, heap sort, quick sort.

INSERTION SORT

Insertion sort is a simple sorting algorithm which sorts the array by shifting elements one by one.

- OFFLINE sorting: the whole input sequence is known in advance; the number of inputs is fixed.
- ONLINE sorting: only the input seen so far is known and future inputs are unknown, i.e. the number of inputs may grow while sorting.

INSERTION SORT ALGORITHM:

int a[6] = {5, 1, 6, 2, 4, 3};
int i, j, key;
for (i = 1; i < 6; i++)
{
    key = a[i];
    j = i - 1;
    while (j >= 0 && key < a[j])
    {
        a[j + 1] = a[j];
        j--;
    }
    a[j + 1] = key;
}

HOW INSERTION SORT WORKS:

Let us consider the array 5 1 6 2 4 3. In insertion sort we always start with the 2nd element as the key: we pick up a key and compare it with the elements before it. In the first step there is one comparison.

5 1 6 2 4 3    key = 1: 1 is compared with 5 and is inserted before 5.
1 5 6 2 4 3    key = 6: 6 is greater than 1 and 5, so there is no swap (partially sorted list).
1 5 6 2 4 3    key = 2: 2 is smaller than 5 and 6 but greater than 1, so it is inserted after 1.
1 2 5 6 4 3    key = 4: inserted after 2, and this goes on...
1 2 4 5 6 3    key = 3: inserted after 2.
1 2 3 4 5 6    (complete sorted list)
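Wrapping the fragment above in a complete program makes it easy to try on the sample array from the walkthrough. This is only a small self-contained sketch; the function name and the printing loop are our own additions.

#include <stdio.h>

/* insertion sort on an int array of length n */
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = a[i];
        int j = i - 1;
        while (j >= 0 && key < a[j])   /* shift larger elements one place to the right */
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;                /* drop the key into its final position */
    }
}

int main(void)
{
    int a[6] = {5, 1, 6, 2, 4, 3};
    insertion_sort(a, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", a[i]);           /* prints: 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}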
In insertion sort there is a single pass over the keys, but each key may need many steps. The number of steps in insertion sort is at most

    1 + 2 + 3 + ... + (n-1) = n(n-1)/2 = O(n^2)    (time complexity)

where n is the number of elements in the list.

Pseudo code for insertion sort:

Input: n items
Output: the items in ascending order.
- Compare item1 and item2; if item1 > item2 then swap them, otherwise do not swap. A partially sorted list is generated.
- Then scan the next item, compare it with item1 and item2, and continue in the same way.

ADVANTAGES OF INSERTION SORT:
- Simple implementation.
- Efficient for small data sets.
- Stable, i.e. it does not change the relative order of elements with equal values.
- Online, i.e. it can sort a list as it receives it.

LIMITATIONS OF INSERTION SORT:
- The insertion sort repeatedly scans the list of items, so it takes more time.
- With roughly n squared steps required to sort n elements, insertion sort does not deal well with a huge list. Therefore insertion sort is particularly useful only when sorting a list of few items.

REAL-LIFE APPLICATIONS OF INSERTION SORT:
- Insertion sort can be used to sort the phone numbers of the customers of a particular company.
- It can be used to sort the bank account numbers of the people visiting a particular bank.

MERGE SORT

Merge sort is a recursive algorithm that continually splits a list in half; the comparisons between elements are done while the sorted halves are merged back together. It is based on the "divide and conquer" paradigm.

Algorithm for merge sort:

void mergesort(int a[], int lower, int upper)
{
    int mid;
    if (upper > lower)
    {
        mid = (lower + upper) / 2;
        mergesort(a, lower, mid);
        mergesort(a, mid + 1, upper);
        merge(a, lower, mid, mid + 1, upper);
    }
}

void merge(int a[], int lower1, int upper1, int lower2, int upper2)
{
    int p, q, n;
    int d[100];
    p = lower1;
    q = lower2;
    n = 0;
    while ((p <= upper1) && (q <= upper2))
        d[n++] = (a[p] < a[q]) ? a[p++] : a[q++];
    while (p <= upper1)
        d[n++] = a[p++];
    while (q <= upper2)
        d[n++] = a[q++];
    for (q = lower1, n = 0; q <= upper1; q++, n++)
        a[q] = d[n];
    for (q = lower2; q <= upper2; q++, n++)
        a[q] = d[n];
}

Illustration:

            8 2 9 4   5 3 1 6
Divide:     8 2 9 4 | 5 3 1 6
Divide:     8 2 | 9 4   5 3 | 1 6
Divide:     8 | 2   9 | 4   5 | 3   1 | 6
Merge:      2 8   4 9   3 5   1 6
Merge:      2 4 8 9   1 3 5 6
Merge:      1 2 3 4 5 6 8 9

Divide the list in two at the midpoint, conquer each side in turn by recursive sorting, and merge the two halves together.

TIME COMPLEXITY: There are in total log2 n passes in merge sort, and in each pass there are at most n comparisons. Therefore the total number of comparisons is O(n log2 n). In 3-way merge sorting the time complexity is O(n log3 n). A k-way merge is possible in merge sort, where k is at most n/2, i.e. k <= n/2.

ADVANTAGES OF MERGE SORT:
- It is always fast and less time consuming when sorting small data.
- Even in the worst case its runtime is O(n log n).
- It is stable, i.e. it maintains the order of elements having the same value.

LIMITATIONS OF MERGE SORT:
- It uses a lot of memory: it needs extra space proportional to n.
- It slows down when attempting to sort very large data.

APPLICATIONS OF MERGE SORT:
- The merge operation is useful in online sorting, where the list to be sorted is received a piece at a time instead of all at the beginning, for example the sorting of bank account numbers.
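The routines above can be restated compactly as a self-contained program with a small driver for the array used in the illustration. This is our own sketch: the forward declaration, the condensed copy-back loop and the main function are additions, not part of the notes.

#include <stdio.h>

void merge(int a[], int lower1, int upper1, int lower2, int upper2);

/* recursively sort a[lower..upper] */
void mergesort(int a[], int lower, int upper)
{
    if (upper > lower)
    {
        int mid = (lower + upper) / 2;
        mergesort(a, lower, mid);
        mergesort(a, mid + 1, upper);
        merge(a, lower, mid, mid + 1, upper);
    }
}

/* merge the adjacent sorted runs a[lower1..upper1] and a[lower2..upper2] */
void merge(int a[], int lower1, int upper1, int lower2, int upper2)
{
    int d[100];                        /* temporary buffer, as in the notes */
    int p = lower1, q = lower2, n = 0;
    while (p <= upper1 && q <= upper2)
        d[n++] = (a[p] < a[q]) ? a[p++] : a[q++];
    while (p <= upper1) d[n++] = a[p++];
    while (q <= upper2) d[n++] = a[q++];
    for (q = lower1, n = 0; q <= upper2; q++, n++)
        a[q] = d[n];                   /* copy the merged run back (runs are adjacent) */
}

int main(void)
{
    int a[8] = {8, 2, 9, 4, 5, 3, 1, 6};
    mergesort(a, 0, 7);
    for (int i = 0; i < 8; i++)
        printf("%d ", a[i]);           /* prints: 1 2 3 4 5 6 8 9 */
    printf("\n");
    return 0;
}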
Binary Search Trees and AVL Trees

Introduction

- Binary search trees are an excellent data structure to implement associative arrays, maps, sets, and similar interfaces.
- The main difficulty, as discussed in the last lecture, is that they are efficient only when they are balanced. Straightforward sequences of insertions can lead to highly unbalanced trees with poor asymptotic complexity and unacceptable practical efficiency.
- The solution is to dynamically rebalance the search tree during insert or search operations. We have to be careful not to destroy the ordering invariant of the tree while we rebalance.
- Because of the importance of binary search trees, researchers have developed many different algorithms for keeping trees in balance, such as AVL trees, red-black trees, splay trees, and randomized binary search trees.
- In this lecture we discuss AVL trees, a simple and efficient data structure for maintaining balance. It is named after its inventors, G. M. Adelson-Velskii and E. M. Landis, who described it in 1962.

Ordering Invariant

- At any node with key k in a binary search tree, all keys of the elements in the left subtree are strictly less than k, while all keys of the elements in the right subtree are strictly greater than k.

Height

- To describe AVL trees we need the concept of tree height, which we define as the maximal length of a path from the root to a leaf. So the empty tree has height 0, the tree with one node has height 1, and a balanced tree with three nodes has height 2. If we add one more node to this last tree it will have height 3. Alternatively, we can define the height recursively by saying that the empty tree has height 0, and the height of any node is one greater than the maximal height of its two children.

Height Invariant

- AVL trees maintain a height invariant (also sometimes called a balance invariant): at any node in the tree, the heights of the left and right subtrees differ by at most 1.

Rotations: How they work

Left Rotation (LL)

Imagine we have this situation (the small figures in this section are reconstructed from the rotation steps described in the text):

    a
     \
      b
       \
        c

To fix this, we must perform a left rotation, rooted at a. This is done in the following steps:
- b becomes the new root.
- a takes ownership of b's left child as its right child, or in this case, null.
- b takes ownership of a as its left child.

The tree now looks like this:

      b
     / \
    a   c

Right Rotation (RR)

A right rotation is a mirror of the left rotation operation described above. Imagine we have this situation:

        c
       /
      b
     /
    a

To fix this, we perform a single right rotation, rooted at c. This is done in the following steps:
- b becomes the new root.
- c takes ownership of b's right child as its left child. In this case, that value is null.
- b takes ownership of c as its right child.

The resulting tree:

      b
     / \
    a   c

Double Rotations (left-right)

Now consider a tree whose right subtree is left heavy:

    a
     \
      c
     /
    b

Our initial reaction here is to do a single left rotation. Let's try that:

        c
       /
      a
       \
        b

Our left rotation has completed, and we are stuck in the same situation. If we were to do a single right rotation now, we would be right back where we started. What is causing this? The answer is that this is a result of the right subtree having a negative balance. In other words, because the right subtree was left heavy, our rotation was not sufficient. What can we do? The answer is to perform a right rotation on the right subtree. Read that again: we perform a right rotation on the right subtree. We are not rotating on our current root; we are rotating on our right child. Think of the right subtree, isolated from the main tree, and perform a right rotation on it. After performing a rotation on our right subtree, we have prepared our root to be rotated left. Here is our tree now:

    a
     \
      b
       \
        c

Looks like we're ready for a left rotation.
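The rotation steps described above translate directly into pointer manipulations. The sketch below is our own illustration (the struct tnode layout and the function names are assumptions, not taken from the notes); each function returns the new root of the rotated subtree.

/* minimal binary-search-tree node used only for this illustration */
struct tnode
{
    int key;
    struct tnode *left, *right;
};

/* left rotation rooted at a: b (a's right child) becomes the new root,
   a adopts b's left child as its right child, and b adopts a as its left child */
struct tnode *rotate_left(struct tnode *a)
{
    struct tnode *b = a->right;
    a->right = b->left;
    b->left = a;
    return b;                  /* b is the new root of this subtree */
}

/* right rotation rooted at c: the mirror image of rotate_left */
struct tnode *rotate_right(struct tnode *c)
{
    struct tnode *b = c->left;
    c->left = b->right;
    b->right = c;
    return b;
}

In the left-right case just discussed, one would first call rotate_right(a->right) to fix the right subtree and then rotate_left(a) on the root, exactly as described above.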
Double Rotations (right-left)

A double right rotation, or right-left rotation, or simply RL, is a rotation that must be performed when attempting to balance a tree which has a left subtree that is right heavy. This is a mirror operation of what was illustrated in the section on left-right rotations, or double left rotations. Let us look at an example of a situation where we need to perform a right-left rotation:

        c
       /
      a
       \
        b

In this situation we have a tree that is unbalanced: the left subtree has a height of 2 and the right subtree has a height of 0, which makes the balance factor of our root node, c, equal to -2. What do we do? Some kind of right rotation is clearly necessary, but a single right rotation will not solve our problem:

    a
     \
      c
     /
    b

Looks like that didn't work. Now we have a tree that has a balance of 2; it would appear that we did not accomplish much. That is true. What do we do? Well, let us go back to the original tree, before we did our pointless right rotation, and instead first perform a left rotation on the left subtree, rooted at a:

        c
       /
      b
     /
    a

This mirrors the left-right case: the left subtree is no longer right heavy, and the root is now prepared to be rotated right. A single right rotation rooted at c then yields the balanced tree:

      b
     / \
    a   c
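Putting the two single rotations together gives the double rotations. The helper below is our own sketch, following the naming used in these notes; it assumes the struct tnode and the rotate_left/rotate_right functions from the earlier sketch.

/* left-right case (the right subtree is left heavy):
   first rotate the right child to the right, then rotate the root to the left */
struct tnode *rotate_left_right(struct tnode *root)
{
    root->right = rotate_right(root->right);
    return rotate_left(root);
}

/* right-left case (the left subtree is right heavy):
   first rotate the left child to the left, then rotate the root to the right */
struct tnode *rotate_right_left(struct tnode *root)
{
    root->left = rotate_left(root->left);
    return rotate_right(root);
}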
