LECTURE NOTES ON
DATA STRUCTURES USING C
1 December, 2014
L. V. NARASIMHA PRASAD
Department of Computer Science and Engineering
E. KRISHNA RAO PATRO
Department of Computer Science and Engineering
INSTITUTE OF AERONAUTICAL ENGINEERING
DUNDIGAL – 500 043, HYDERABAD
The term data structure is used to describe the way data is stored, and the term
algorithm is used to describe the way data is processed. Data structures and
algorithms are interrelated: choosing a data structure affects the kind of algorithm
you might use, and choosing an algorithm affects the data structures you use.
An Algorithm is a finite sequence of instructions, each of which has a clear meaning
and can be performed with a finite amount of effort in a finite length of time. No
matter what the input values may be, an algorithm terminates after executing a
finite number of instructions.
1.1. Introduction to Data Structures:
Data structure is a representation of logical relationship existing between individual elements of
data. In other words, a data structure defines a way of organizing all data items that considers
not only the elements stored but also their relationship to each other. The term data structure
is used to describe the way data is stored.
To develop a program for an algorithm, we should select an appropriate data structure for that
algorithm. Therefore, this relationship is expressed as:
Algorithm + Data structure = Program
A data structure is said to be linear if its elements form a sequence or a linear list. Linear
data structures like arrays, stacks, queues and linked lists organize data in linear order. A
data structure is said to be non-linear if its elements form a hierarchical classification where
data items appear at various levels.
Trees and Graphs are widely used non-linear data structures. Tree and graph structures
represent hierarchical relationships between individual data elements. Graphs are nothing but
trees with certain restrictions removed.
Data structures are divided into two types:
• Primitive data structures.
• Non-primitive data structures.
Primitive Data Structures are the basic data structures that directly operate upon the
machine instructions. They have different representations on different computers. Integers,
floating point numbers, character constants, string constants and pointers come under this
category.
Non-primitive data structures are more complicated data structures derived from
primitive data structures. They emphasize grouping same or different data items with a
relationship between each data item. Arrays, lists and files come under this category. Figure
1.1 shows the classification of data structures.
Lecture Notes 1 Dept. of Information Technology Data Structures
Data Structures
    Primitive Data Structures: Integer, Float, Char, Pointers
    Non-Primitive Data Structures: Arrays, Lists, Files
        Linear Lists: Stacks, Queues
        Non-Linear Lists: Graphs, Trees

Figure 1.1. Classification of Data Structures
1.2. Data structures: Organization of data
The collection of data you work with in a program has some kind of structure or organization.
No matter how complex your data structures are, they can be broken down into two
fundamental types: contiguous and non-contiguous.
In contiguous structures, items of data are kept together in memory (either RAM or in a file).
An array is an example of a contiguous structure, since each element in the array is located
next to one or two other elements. In contrast, items in a non-contiguous structure are
scattered in memory, but are linked to each other in some way. A linked list is an example of a
non-contiguous data structure. Here, the nodes of the list are linked together using pointers
stored in each node. Figure 1.2 below illustrates the difference between contiguous and
non-contiguous structures.

Figure 1.2. Contiguous (a) and non-contiguous (b) structures compared
Contiguous structures can be broken down further into two kinds: those that contain data
items all of the same size, and those where the size may differ. Figure 1.3 shows an example
of each kind. The first kind is called the array. Figure 1.3(a) shows an example of an array of
numbers. In an array, each element is of the same type, and thus has the same size.

The second kind of contiguous structure is called a structure (struct). Figure 1.3(b) shows a
simple structure consisting of a person's name and age. In a struct, elements may be of
different data types and thus may have different sizes.
For example, a person's age can be represented with a simple integer that occupies two bytes
of memory. But his or her name, represented as a string of characters, may require many
bytes and may even be of varying length.
Coupled with the atomic types (that is, the single data-item built-in types such as integer, float
and pointers), arrays and structs provide all the "mortar" you need to build more exotic forms
of data structure, including the non-contiguous forms.
(a) An array:

    int arr[3] = { 1, 2, 3 };

(b) A struct:

    struct cust_data
    {
        int age;
        char name[20];
    };

    struct cust_data bill = { 21, "bill the student" };

Figure 1.3. Examples of contiguous structures.
Non-contiguous structures are implemented as a collection of data-items, called nodes, where
each node can point to one or more other nodes in the collection. The simplest kind of
non-contiguous structure is the linked list.
A linked list represents a linear, one-dimensional type of non-contiguous structure, where
there is only the notion of backwards and forwards. A tree, such as shown in figure 1.4(b), is
an example of a two-dimensional non-contiguous structure. Here, there is the notion of up
and down and left and right.
In a tree, each node has only one link that leads into the node, and links can only go down the
tree. The most general type of non-contiguous structure, called a graph, has no such
restrictions. Figure 1.4(c) is an example of a graph.
Figure 1.4. Examples of non-contiguous structures: (a) linked list, (b) tree, (c) graph
Hybrid structures:

If the two basic types of structures are mixed then it is a hybrid form, with one part contiguous
and another part non-contiguous. For example, figure 1.5 shows how to implement a doubly-
linked list using three parallel arrays, possibly stored far apart from each other in memory.
(a) Conceptual structure: a doubly-linked list holding the items A, B, C and D.

(b) Hybrid implementation:

        D   P   N
    1   A   3   4
    2   B   4   0
    3   C   0   1
    4   D   1   2

Figure 1.5. A doubly-linked list via a hybrid data structure
The array D contains the data for the list, whereas the arrays P and N hold the previous and
next "pointers". The pointers are actually nothing more than indexes into the D array. For
instance, D[i] holds the data for node i and P[i] holds the index of the node previous to i,
which may or may not reside at position i–1. Likewise, N[i] holds the index of the next node in
the list.
1.3. Abstract Data Type (ADT):
The design of a data structure involves more than just its organization. You also need to plan
for the way the data will be accessed and processed – that is, how the data will be interpreted.
Actually, non-contiguous structures – including lists, trees and graphs – can be implemented
either contiguously or non-contiguously. Likewise, the structures that are normally treated as
contiguous – arrays and structures – can also be implemented non-contiguously.
The notion of a data structure in the abstract needs to be treated differently from whatever is
used to implement the structure. The abstract notion of a data structure is defined in terms of
the operations we plan to perform on the data.
Considering both the organization of data and the expected operations on the data leads to the
notion of an abstract data type. An abstract data type is a theoretical construct that consists of
data as well as the operations to be performed on the data while hiding implementation.
For example, a stack is a typical abstract data type. Items stored in a stack can only be added
and removed in a certain order – the last item added is the first item removed. We call these
operations pushing and popping. In this definition, we haven't specified how items are stored
on the stack, or how the items are pushed and popped. We have only specified the valid
operations that can be performed.
For example, if we wanted to read a file, we had to write the code to read the physical file
device. That is, we might have to write the same code over and over again. So we created
what is known today as an ADT: we wrote the code to read a file and placed it in a library for
any programmer to use.

As another example, the code to read from a keyboard is an ADT. It has a data structure,
character, and a set of operations that can be used to read that data structure.
To be made useful, an abstract data type (such as a stack) has to be implemented, and this is
where the data structure comes into play. For instance, we might choose the simple data
structure of an array to represent the stack, and then define the appropriate indexing
operations to perform pushing and popping.
1.4. Selecting a data structure to match the operation:
The most important part of designing a solution to a problem is choosing which data structure
to use. The choice depends greatly on the type of operations you wish to perform.
Suppose we have an application that uses a sequence of objects, where one of the main
operations is deleting an object from the middle of the sequence. The code for this is as follows:

void delete (int seq[], int *n, int posn)
/* delete the item at position posn from an array of *n elements */
{
    int i = posn;
    while (i < *n - 1)
    {
        seq[i] = seq[i+1];
        i++;
    }
    (*n)--;
}
This function shifts towards the front all elements that follow the element at position posn. This
shifting involves data movement that, for integer elements, is not too costly. However,
suppose the array stores larger objects, and lots of them. In this case, the overhead for moving
data becomes high. The problem is that, in a contiguous structure such as an array, the logical
ordering (the ordering that we wish to interpret our elements to have) is the same as the
physical ordering (the ordering that the elements actually have in memory).
If we choose a non-contiguous representation, however, we can separate the logical ordering
from the physical ordering and thus change one without affecting the other. For example, if we
store our collection of elements using a doubly-linked list (with previous and next pointers), we
can do the deletion without moving the elements; instead, we just modify the pointers in each
node. The code using a doubly-linked list is as follows:
void delete (node *beg, int posn)
/* delete the item at posn from a list of elements */
{
    int i = posn;
    node *q = beg;
    while (i && q)
    {
        i--;
        q = q->next;
    }
    if (q)
    {
        /* not at end of list, so detach q by making previous and
           next nodes point to each other */
        node *p = q->prev;
        node *n = q->next;
        p->next = n;
        n->prev = p;
    }
}
The process of detaching a node from a list is independent of the type of data stored in the
node, and can be accomplished with some pointer manipulation, as illustrated in the figure below:
Figure 1.6. Detaching a node from a list
Since very little data is moved during this process, the deletion using linked lists will often be
faster than when arrays are used.
It may seem that linked lists are superior to arrays, but is that always true? There are
trade-offs. Our linked lists yield faster deletions, but they take up more space because they
require two extra pointers per element.
1.5. Algorithm:

An algorithm is a finite sequence of instructions, each of which has a clear meaning and can be
performed with a finite amount of effort in a finite length of time. No matter what the input
values may be, an algorithm terminates after executing a finite number of instructions. In
addition every algorithm must satisfy the following criteria:
Input: there are zero or more quantities, which are externally supplied;
Output: at least one quantity is produced;
Definiteness: each instruction must be clear and unambiguous;
Finiteness: if we trace out the instructions of an algorithm, then for all cases the algorithm will
terminate after a finite number of steps;
Effectiveness: every instruction must be sufficiently basic that it can in principle be carried out
by a person using only pencil and paper. It is not enough that each operation be definite, but it
must also be feasible.
In formal computer science, one distinguishes between an algorithm, and a program. A
program does not necessarily satisfy the fourth condition. One important example of such a
program for a computer is its operating system, which never terminates (except for system
crashes) but continues in a wait loop until more jobs are entered.
We represent an algorithm using pseudo language that is a combination of the constructs of a
programming language together with informal English statements.
1.6. Practical Algorithm design issues:
Choosing an efficient algorithm or data structure is just one part of the design process. Next,
we will look at some design issues that are broader in scope. There are three basic design goals
that we should strive for in a program:

1. Try to save time (time complexity).
2. Try to save space (space complexity).
3. Try to save face.
A program that runs faster is a better program, so saving time is an obvious goal. Like wise, a
program that saves space over a competing program is considered desirable. We want to “save
face” by preventing the program from locking up or generating reams of garbled data.
1.7. Performance of a program:
The performance of a program is the amount of computer memory and time needed to run a
program. We use two approaches to determine the performance of a program. One is
analytical, and the other experimental. In performance analysis we use analytical methods,
while in performance measurement we conduct experiments.
The time needed by an algorithm expressed as a function of the size of a problem is called the
TIME COMPLEXITY of the algorithm. The time complexity of a program is the amount of
computer time it needs to run to completion.
The limiting behavior of the complexity as size increases is called the asymptotic time
complexity. It is the asymptotic complexity of an algorithm, which ultimately determines the
size of problems that can be solved by the algorithm.
The space complexity of a program is the amount of memory it needs to run to completion. The
space needed by a program has the following components:
Instruction space: Instruction space is the space needed to store the compiled version of the
program instructions.
Data space: Data space is the space needed to store all constant and variable values. Data
space has two components:
• Space needed by constants and simple variables in program.
• Space needed by dynamically allocated objects such as arrays and class instances.
Environment stack space: The environment stack is used to save information needed to
resume execution of partially completed functions.
Instruction Space: The amount of instruction space that is needed depends on factors such as:

• The compiler used to compile the program into machine code.
• The compiler options in effect at the time of compilation.
• The target computer.
1.8. Classification of Algorithms
Let ‘n’ be the number of data items to be processed, or the degree of a polynomial, or the size
of the file to be sorted or searched, or the number of nodes in a graph, etc. The common
running times are then:
1        The instructions of most programs are executed once or at most only a few
         times. If all the instructions of a program have this property, we say that its
         running time is a constant.

log n    When the running time of a program is logarithmic, the program gets slightly
         slower as n grows. This running time commonly occurs in programs that solve
         a big problem by transforming it into a smaller problem, cutting the size by
         some constant fraction. When n is a million, log n is about twenty. Whenever
         n doubles, log n increases by a constant, but log n does not double until n
         increases to n².

n        When the running time of a program is linear, it is generally the case that a
         small amount of processing is done on each input element. This is the optimal
         situation for an algorithm that must process n inputs.

n log n  This running time arises for algorithms that solve a problem by breaking it up
         into smaller sub-problems, solving them independently, and then combining
         the solutions. When n doubles, the running time more than doubles.

n²       When the running time of an algorithm is quadratic, it is practical for use only
         on relatively small problems. Quadratic running times typically arise in
         algorithms that process all pairs of data items (perhaps in a double nested
         loop). Whenever n doubles, the running time increases fourfold.

n³       Similarly, an algorithm that processes triples of data items (perhaps in a
         triple-nested loop) has a cubic running time and is practical for use only on
         small problems. Whenever n doubles, the running time increases eightfold.

2ⁿ       Few algorithms with exponential running time are likely to be appropriate for
         practical use; such algorithms arise naturally as "brute-force" solutions to
         problems. Whenever n doubles, the running time squares.
1.9. Complexity of Algorithms
The complexity of an algorithm M is the function f(n) which gives the running time and/or
storage space requirement of the algorithm in terms of the size ‘n’ of the input data. Mostly,
the storage space required by an algorithm is simply a multiple of the data size ‘n’. Complexity
shall refer to the running time of the algorithm.
The function f(n), which gives the running time of an algorithm, depends not only on the size
‘n’ of the input data but also on the particular data. The complexity function f(n) is usually
analyzed for three cases:
1. Best Case : The minimum possible value of f(n) is called the best case.
2. Average Case : The expected value of f(n).
3. Worst Case : The maximum value of f(n) for any possible input.
The field of computer science which studies the efficiency of algorithms is known as analysis of
algorithms.

Algorithms can be evaluated by a variety of criteria. Most often we shall be interested in the
rate of growth of the time or space required to solve larger and larger instances of a problem.
We will associate with the problem an integer, called the size of the problem, which is a
measure of the quantity of input data.
1.10. Rate of Growth
Big–Oh (O), Big–Omega (Ω), Big–Theta (Θ) and Little–Oh (o)

1. T(n) = O(f(n)), (pronounced "order of" or "big oh"), says that the growth rate of T(n) is
   less than or equal to (≤) that of f(n).

2. T(n) = Ω(g(n)) (pronounced "omega"), says that the growth rate of T(n) is greater than
   or equal to (≥) that of g(n).

3. T(n) = Θ(h(n)) (pronounced "theta"), says that the growth rate of T(n) equals (=) the
   growth rate of h(n) if T(n) = O(h(n)) and T(n) = Ω(h(n)).

4. T(n) = o(p(n)) (pronounced "little oh"), says that the growth rate of T(n) is less than the
   growth rate of p(n) if T(n) = O(p(n)) and T(n) ≠ Θ(p(n)).
For example, let T(n) = 2n² + 5n – 6. Then:

2n² + 5n – 6 = O(2ⁿ)        2n² + 5n – 6 ≠ Θ(2ⁿ)
2n² + 5n – 6 = O(n³)        2n² + 5n – 6 ≠ Θ(n³)
2n² + 5n – 6 = O(n²)        2n² + 5n – 6 = Θ(n²)
2n² + 5n – 6 ≠ O(n)         2n² + 5n – 6 ≠ Θ(n)

2n² + 5n – 6 ≠ Ω(2ⁿ)        2n² + 5n – 6 = o(2ⁿ)
2n² + 5n – 6 ≠ Ω(n³)        2n² + 5n – 6 = o(n³)
2n² + 5n – 6 = Ω(n²)        2n² + 5n – 6 ≠ o(n²)
2n² + 5n – 6 = Ω(n)         2n² + 5n – 6 ≠ o(n)
1.11. Analyzing Algorithms
Suppose ‘M’ is an algorithm, and suppose ‘n’ is the size of the input data. Clearly the
complexity f(n) of M increases as n increases. It is usually the rate of increase of f(n) we want
to examine. This is usually done by comparing f(n) with some standard functions. The most
common computing times are:
O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), n! and nⁿ
Numerical Comparison of Different Algorithms
The execution time for six of the typical functions is given below:
S.No    log n     n     n log n     n²      n³       2ⁿ
 1        0       1        1         1       1        2
 2        1       2        2         4       8        4
 3        2       4        8        16      64       16
 4        3       8       24        64     512      256
 5        4      16       64       256    4096    65536
Graph of log n, n, n log n, n², n³, 2ⁿ, n! and nⁿ
O(log n) does not depend on the base of the logarithm. To simplify the analysis, the convention
is that there are no particular units of time. Thus we throw away leading constants. We will also
throw away low-order terms while computing a Big-Oh running time. Since Big-Oh is an upper
bound, the answer provided is a guarantee that the program will terminate within a certain
time period. The program may stop earlier than this, but never later.
One way to compare the function f(n) with these standard functions is to use the functional ‘O’
notation. Suppose f(n) and g(n) are functions defined on the positive integers with the property
that f(n) is bounded by some multiple of g(n) for almost all ‘n’. Then,
f(n) = O(g(n))
Which is read as “f(n) is of order g(n)”. For example, the order of complexity for:
• Linear search is O(n)
• Binary search is O(log n)
• Bubble sort is O(n²)
• Quick sort is O(n log n)
For example, suppose the first program takes 100n milliseconds, while the second takes 5n²
milliseconds. Might the 5n² program not be better than the 100n program?

As the programs can be evaluated by comparing their running time functions, with constants of
proportionality neglected, the 5n² program may be better than the 100n program. Since

5n² / 100n = n/20

for inputs n < 20, the program with running time 5n² will be faster than the one with running
time 100n.
Therefore, if the program is to be run mainly on inputs of small size, we would indeed prefer
the program whose running time is O(n²).

However, as ‘n’ gets large, the ratio of the running times, which is n/20, gets arbitrarily large.
Thus, as the size of the input increases, the O(n²) program will take significantly more time
than the O(n) program. So it is always better to prefer a program whose running time has the
lower growth rate. Low growth rate functions such as O(n) or O(n log n) are always better.
Exercises

1. Define algorithm.
2. State the various steps in developing algorithms?
3. State the properties of algorithms.
4. Define efficiency of an algorithm?
5. State the various methods to estimate the efficiency of an algorithm.
6. Define time complexity of an algorithm?
7. Define worst case of an algorithm.
8. Define average case of an algorithm.
9. Define best case of an algorithm.
10. Mention the various spaces utilized by a program.
11. Define space complexity of an algorithm.
12. State the different memory spaces occupied by an algorithm.
Multiple Choice Questions
1. _____ is a step-by-step recipe for solving an instance of a problem.            [ A ]
   A. Algorithm            B. Complexity
   C. Pseudocode           D. Analysis

2. ______ is used to describe the algorithm, in less formal language.              [ C ]
   A. Cannot be defined    B. Natural Language
   C. Pseudocode           D. None

3. ______ of an algorithm is the amount of time (or the number of steps)
   needed by a program to complete its task.                                       [ D ]
   A. Space Complexity     B. Dynamic Programming
   C. Divide and Conquer   D. Time Complexity

4. ______ of a program is the amount of memory used at once by the
   algorithm until it completes its execution.                                     [ C ]
   A. Divide and Conquer   B. Time Complexity
   C. Space Complexity     D. Dynamic Programming

5. ______ is used to define the worst-case running time of an algorithm.           [ A ]
   A. Big-Oh notation      B. Cannot be defined
   C. Complexity           D. Analysis
Chapter 2

RECURSION
Recursion is deceptively simple in statement but exceptionally
complicated in implementation. Recursive procedures work fine in many
problems. Many programmers prefer recursion even though simpler
alternatives are available, because recursion is elegant to use
though it is costly in terms of time and space. But using it is one thing
and getting involved with it is another.
In this unit we will look at recursion as a programmer who not only
loves it but also wants to understand it. With a bit of involvement it is
going to be an interesting read for you.
2.1. Introduction to Recursion:
A function is recursive if a statement in the body of the function calls itself. Recursion is
the process of defining something in terms of itself. For a computer language to be
recursive, a function must be able to call itself.
For example, let us consider the function factorial() shown below, which computes the
factorial of an integer.
int factorial (int);

int main()
{
    int num, fact;
    printf ("Enter a positive integer value: ");
    scanf ("%d", &num);
    fact = factorial (num);
    printf ("\nFactorial of %d = %5d\n", num, fact);
    return 0;
}

int factorial (int n)
{
    if (n == 0)
        return 1;
    return n * factorial (n-1);
}
A non-recursive or iterative version for finding the factorial is as follows:

int factorial (int n)
{
    int i, result = 1;
    if (n == 0)
        return (result);
    for (i = 1; i <= n; i++)
        result = result * i;
    return (result);
}
The operation of the non-recursive version is clear, as it uses a loop starting at 1 and
ending at the target value and progressively multiplies each number into the running
product.
When a function calls itself, new local variables and parameters are allocated storage
on the stack and the function code is executed with these new variables from the start.
A recursive call does not make a new copy of the function. Only the arguments and
variables are new. As each recursive call returns, the old local variables and
parameters are removed from the stack and execution resumes at the point of the
function call inside the function.
When writing recursive functions, you must have an exit condition somewhere to force
the function to return without the recursive call being executed. If you do not have an
exit condition, the recursive function will recurse forever, until you run out of stack
space and get an error about lack of memory, or a stack overflow.
2.2. Differences between recursion and iteration:
• Both involve repetition.
• Both involve a termination test.
• Both can occur infinitely.
Iteration                                    Recursion

Iteration explicitly uses a repetition       Recursion achieves repetition through
structure.                                   repeated function calls.

Iteration terminates when the loop-          Recursion terminates when a base case
continuation condition fails.                is recognized.

Iteration keeps modifying the counter        Recursion keeps producing simpler
until the loop-continuation condition        versions of the original problem until
fails.                                       the base case is reached.

Iteration normally occurs within a loop,     Recursion causes another copy of the
so no extra memory is assigned.              function, and hence considerable
                                             memory space is occupied.

It reduces the processor's operating         It increases the processor's operating
time.                                        time.
2.3. Factorial of a given number:
The operation of recursive factorial function is as follows:
Start out with some natural number N (in our example, 5). The recursive definition is:

n = 0:  n! = 1                   Base Case
n > 0:  n! = n × (n – 1)!        Recursive Case
Recursion Factorials:
5! = 5 × 4! = 5 × ___ = ___          factr(5) = 5 × factr(4) = __
4! = 4 × 3! = 4 × ___ = ___          factr(4) = 4 × factr(3) = __
3! = 3 × 2! = 3 × ___ = ___          factr(3) = 3 × factr(2) = __
2! = 2 × 1! = 2 × ___ = ___          factr(2) = 2 × factr(1) = __
1! = 1 × 0! = 1 × __ = __            factr(1) = 1 × factr(0) = __
0! = 1                               factr(0) = __

5! = 5·4! = 5·4·3! = 5·4·3·2! = 5·4·3·2·1! = 5·4·3·2·1·0! = 5·4·3·2·1·1 = 120
We define 0! to equal 1, and we define factorial N (where N > 0) to be N × factorial
(N–1). All recursive functions must have an exit condition, that is, a state when it does
not recurse upon itself. Our exit condition in this example is when N = 0.
Tracing of the flow of the factorial () function:
When the factorial function is first called with, say, N = 5, here is what happens:
Does N = 0? No
Function Return Value = 5 × factorial (4)

At this time, the function factorial is called again, with N = 4.

Does N = 0? No
Function Return Value = 4 × factorial (3)

At this time, the function factorial is called again, with N = 3.

Does N = 0? No
Function Return Value = 3 × factorial (2)

At this time, the function factorial is called again, with N = 2.

Does N = 0? No
Function Return Value = 2 × factorial (1)

At this time, the function factorial is called again, with N = 1.

Does N = 0? No
Function Return Value = 1 × factorial (0)

At this time, the function factorial is called again, with N = 0.

Does N = 0? Yes
Function Return Value = 1
Now, we have to trace our way back up. See, the factorial function was called six
times. At any function level call, all function level calls above still exist. So, when we
have N = 2, the function instances where N = 3, 4, and 5 are still waiting for their
results.

So, the function call where N = 1 gets retraced first, once the final call (where N = 0)
returns 1. So, the function call where N = 1 returns 1 × 1, or 1. The next higher
function call, where N = 2, returns 2 × 1 (1, because that's what the function call
where N = 1 returned). You just keep working up the chain.
When N = 2, 2 × 1, or 2 was returned.
When N = 3, 3 × 2, or 6 was returned.
When N = 4, 4 × 6, or 24 was returned.
When N = 5, 5 × 24, or 120 was returned.
And since N = 5 was the first function call (hence the last one to be recalled), the value
120 is returned.
2.4. The Towers of Hanoi:
In the game of Towers of Hanoi, there are three towers labeled 1, 2, and 3. The game
starts with n disks on tower 1. For simplicity, let n be 3. The disks are numbered from 1
to 3, and without loss of generality we may assume that the diameter of each disk is
the same as its number. That is, disk 1 has diameter 1 (in some unit of measure), disk
2 has diameter 2, and disk 3 has diameter 3. All three disks start on tower 1 in the
order 1, 2, 3. The objective of the game is to move all the disks from tower 1 to
tower 3 using tower 2, such that at no time is a larger disk placed on a smaller disk.
Figure 3.11.1 illustrates the initial setup of the Towers of Hanoi, and figure 3.11.2
illustrates the final setup.
The rules to be followed in moving the disks from tower 1 to tower 3 using tower 2 are as follows:
• Only one disk can be moved at a time.
• Only the top disc on any tower can be moved to any other tower.
• A larger disk cannot be placed on a smaller disk.
Fig. 3.11.1. Initial setup of Towers of Hanoi
Fig 3.11.2. Final setup of Towers of Hanoi
The Towers of Hanoi problem can be easily implemented using recursion. To move the
largest disk to the bottom of tower 3, we move the remaining n – 1 disks to tower 2
and then move the largest disk to tower 3. Now we have the remaining n – 1 disks to
be moved to tower 3. This can be achieved by using the remaining two towers. We can
also use tower 3 to place any disk on it, since the disk already placed on tower 3 is the
largest disk; we continue the same operation to place all the disks on tower 3 in order.
The program that uses recursion to produce a list of moves that shows how to
accomplish the task of transferring the n disks from tower 1 to tower 3 is as follows:
#include <stdio.h>

int cnt = 0;

void towers_of_hanoi (int n, char *a, char *b, char *c)
{
    if (n == 1)
    {
        printf ("\n%5d: Move disk 1 from %s to %s", ++cnt, a, c);
        return;
    }
    towers_of_hanoi (n-1, a, c, b);
    printf ("\n%5d: Move disk %d from %s to %s", ++cnt, n, a, c);
    towers_of_hanoi (n-1, b, a, c);
}

int main (void)
{
    int n;
    printf ("Enter number of discs: ");
    scanf ("%d", &n);
    towers_of_hanoi (n, "Tower 1", "Tower 2", "Tower 3");
    return 0;
}
Output of the program:
Enter the number of discs: 3
1: Move disk 1 from tower 1 to tower 3.
2: Move disk 2 from tower 1 to tower 2.
3: Move disk 1 from tower 3 to tower 2.
4: Move disk 3 from tower 1 to tower 3.
5: Move disk 1 from tower 2 to tower 1.
6: Move disk 2 from tower 2 to tower 3.
7: Move disk 1 from tower 1 to tower 3.
Enter the number of discs: 4
1: Move disk 1 from tower 1 to tower 2.
2: Move disk 2 from tower 1 to tower 3.
3: Move disk 1 from tower 2 to tower 3.
4: Move disk 3 from tower 1 to tower 2.
5: Move disk 1 from tower 3 to tower 1.
6: Move disk 2 from tower 3 to tower 2.
7: Move disk 1 from tower 1 to tower 2.
8: Move disk 4 from tower 1 to tower 3.
9: Move disk 1 from tower 2 to tower 3.
10: Move disk 2 from tower 2 to tower 1.
11: Move disk 1 from tower 3 to tower 1.
12: Move disk 3 from tower 2 to tower 3.
13: Move disk 1 from tower 1 to tower 2.
14: Move disk 2 from tower 1 to tower 3.
15: Move disk 1 from tower 2 to tower 3.
2.5. Fibonacci Sequence Problem:
A Fibonacci sequence starts with the integers 0 and 1. Successive elements in this
sequence are obtained by summing the preceding two elements in the sequence. For
example, the third number in the sequence is 0 + 1 = 1, the fourth number is 1 + 1 = 2,
the fifth number is 1 + 2 = 3, and so on. The sequence of Fibonacci integers is given below:
0 1 1 2 3 5 8 13 21 . . . . . . . . .
A recursive definition for the Fibonacci sequence of integers may be defined as follows:

Fib(n) = n                          if n = 0 or n = 1
Fib(n) = Fib(n-1) + Fib(n-2)        for n ≥ 2
We will now use the definition to compute fib(5):
fib(5) = fib(4) + fib(3)
       = fib(3) + fib(2) + fib(3)
       = fib(2) + fib(1) + fib(2) + fib(3)
       = fib(1) + fib(0) + fib(1) + fib(2) + fib(3)
       = 1 + 0 + 1 + fib(1) + fib(0) + fib(3)
       = 1 + 0 + 1 + 1 + 0 + fib(2) + fib(1)
       = 1 + 0 + 1 + 1 + 0 + fib(1) + fib(0) + fib(1)
       = 1 + 0 + 1 + 1 + 0 + 1 + 0 + 1 = 5
We see that fib(2) is computed 3 times, and fib(3) 2 times, in the above calculation.
We can save the values of fib(2) or fib(3) and reuse them whenever needed.
A recursive function to compute the Fibonacci number in the nth position is given below:

int fib (int n)
{
    if (n == 0 || n == 1)
        return n;
    return fib (n-1) + fib (n-2);
}

int main()
{
    printf ("\nfib(5) is %d", fib(5));
    return 0;
}

Output:

fib(5) is 5