
Semester-III

Programming
and
Data Structures

02_Programming and Data Structures.indd 1 4/23/2014 12:43:27 PM


The aim of this publication is to supply information taken from sources believed to be valid and
reliable. This is not an attempt to render any type of professional advice or analysis, nor is it to
be treated as such. While much care has been taken to ensure the veracity and currency of the
information presented within, neither the publisher nor its authors bear any responsibility for
any damage arising from inadvertent omissions, negligence or inaccuracies (typographical or
factual) that may have found their way into this book.



B.E./B.Tech. Degree Examination,
Nov/Dec 2013
Third Semester
Computer Science and Engineering
(Common to Information Technology)

PROGRAMMING AND DATA STRUCTURES


(Regulation 2008/2010)
Time: Three hours                                    Maximum: 100 marks
Answer ALL questions

PART A (10 × 2 = 20 marks)

1. What is meant by abstract data types?

2. What are the advantages of using a cursor-based linked list over a
singly linked list?

3. What are the conditions to be satisfied by a binary search tree?

4. Obtain the expression tree for the given expression: (a + b)*(c – d) – (e/f).

5. What is meant by collision in hashing?

6. What is the use of extendible hashing?

7. What is meant by Topological sort?

8. What is meant by Euler’s circuits?

9. Define NP-complete problems.

10. Define Theta Notation.



2.4 B.E./B.Tech. Question Papers

Part B (5 × 16 = 80 marks)

11. (a) Implement circular linked list for the operations of insert, delete and
display. (16)
Or
(b) Implement stack operations to check whether the given string is a
palindrome or not. (16)

12. (a) Explain briefly about the rotations of AVL Tree. (16)
Or
(b) Explain briefly about the insert operations in binary search tree. (16)

13. (a) Explain briefly about open hashing techniques with a neat example. (16)
Or
(b) Explain about dynamic equivalence problem with an example. (16)

14. (a) Explain about minimum spanning tree with algorithm techniques. (16)
Or
(b) Explain briefly about the single-source shortest path algorithm with an
example. (16)

15. (a) Analyze merge sort for the following numbers using the divide and
conquer strategy: 25, 3, 45, 789, 87, 56, 47, 65, 30, 59, 56, 24. (16)
Or
(b) Explain the eight queens problem in backtracking algorithm with a neat
example. (16)



Programming and Data Structures (Nov/Dec 2013) 2.5

Solutions
Part A

1. An abstract data type (ADT) is a custom data type defined by the values
it can hold and the operations that can be performed on it. The programmer
defines everything related to the data type: how the data values are stored
and the possible operations that can be carried out on it, so that it behaves
like a built-in type and does not create any confusion while coding the
program.
For example, if the programmer wants to define a date data type, which
is not available in C/C++, s/he can create it using struct or class. A
declaration alone is not enough; it is also necessary to check whether the
date is valid or invalid, which can be achieved by checking different
conditions. The following program explains the creation of a date data
type:
Example
# include <stdio.h>
struct date
{
    int dd;
    int mm;
    int yy;
};
int main()
{
    struct date d; /* date is an abstract data type */
    printf("Enter date (dd mm yy): ");
    scanf("%d %d %d", &d.dd, &d.mm, &d.yy);
    printf("Date %d-%d-%d", d.dd, d.mm, d.yy);
    return 0;
}
OUTPUT
Enter date (dd mm yy): 25 10 2011
Date 25-10-2011

Explanation:
In this program, using struct keyword the date data type is declared. It
contains three-integer variables dd, mm and yy to store date, month and


year. The date, month and year are read using the scanf statement and
displayed using the printf statement.

2. The advantages are:


•• The data are stored in a collection of structures. Each structure contains
data and a pointer to the next structure.
•• A new structure can be obtained from the system's global memory by
a call to malloc and released by a call to free.

3.
•• The left subtree of a given node must only contain keys that are less
than the parent node's.
•• The right subtree of a given node must only contain keys that are
more than the parent node's.
•• The left and right subtrees must themselves be binary search trees,
i.e. each subtree must also satisfy the above conditions.
•• There cannot be duplicate nodes within the tree.

4. (a + b)*(c - d) - (e / f)

                  -
                /   \
               *     /
             /   \  / \
            +     - e   f
           / \   / \
          a   b c   d

5. A collision occurs when two keys hash to the same cell of the hash
table. In an open addressing hashing system, if a collision occurs,
alternative cells are tried until an empty cell is found.

6. Extendible hashing can be used in applications where an exact-match
query is the most important query, such as hash join.


7. Topological sort is an ordering of vertices in a directed acyclic graph
such that if there is a path from vi to vj, then vj appears after vi in the
ordering.

8. An Euler circuit is a closed Euler path. It exists if it is possible to travel
over every edge of a graph exactly once and return to the starting vertex.

9. An NP-complete problem has the property that any problem in NP can
be polynomially reduced to it.
e.g., Travelling Salesman Problem.

10. T(N) = Θ(g(N)) if and only if there exist positive constants c1, c2 and
n0 such that c1 g(N) ≤ T(N) ≤ c2 g(N) for all N ≥ n0. Theta notation thus
gives a tight bound: g(N) is both an upper and a lower bound on the
growth rate of T(N).

Part B
11. (a)
CIRCULAR LINKED LIST
In a circular linked list the last node points to the first (header) node. A
linear linked list can be converted to a circular linked list by linking the
last node to the first node. The last node of a linear linked list holds a
NULL pointer, which indicates the end of the list; with a minor adjustment,
however, the performance of the list can be improved. Instead of placing
the NULL pointer, the address of the first node can be stored in the last
node. Such a list is called a circular linked list (as shown in Figure 1).
First Node
    |
    v
 [data|.]-->[data|.]-->[data|.]-->[data|.]--+
    ^                                       |
    +---------------------------------------+

Figure 1 Circular Linked List


A circular linked list is more helpful than a singly linked list because
all the nodes of the list are accessible from any given node: once a node
is accessed, every other node can be reached by traversing in succession.
In this type of list, the deletion operation is also easier. To delete an
element from a singly linked list, it is essential to obtain the address
of the first node of the list. For example, suppose we want to delete the
element 5, which exists in the middle of the list. To remove it, we need
to find the predecessor of 5, which normally requires a search in which
the elements are visited and compared one by one. In a circular linked
list, no such search from the head is needed: the predecessor can be
reached from the given element itself. In addition, operations such as
splitting and concatenation of the list (discussed later) are easier.
Though the circular linked list has advantages over the linear linked list,
it also has some limitations. The list does not have a distinguished first
and last node, and while traversing, due to the lack of a NULL pointer,
there is a possibility of getting into an infinite loop. Thus, in a circular
linked list it is necessary to identify the end of the list. This can be done
by fixing the first and last node by convention: we detect the first node
by creating a list head, which holds the address of the first node. The
list head is also called an external pointer. We can also keep a counter,
which is incremented as nodes are created, so that the end of the list can
be detected. Figure 2 gives an example of a circular linked list with a
header.
Header                          Last Node
    |                                |
    v                                v
 [ 9 |.]-->[ 2 |.]-->[ 7 |.]-->[ 3 |.]--+
    ^           Element to be Deleted   |
    +-----------------------------------+

Figure 2 Circular Linked List with Header

11. (b)
#include <stdio.h>
#include <string.h>
#define SIZE 100   /* large enough for the input string */
typedef struct
{
    int items[SIZE];
    int top;
} STACK;
void push(STACK *p, int element);
int pop(STACK *p);
void display(STACK s);
int isoverflow(int top);
int isempty(int top);
int main()
{
    STACK s;
    char str[100];
    int i, flag;
    s.top = -1;
    printf("\nEnter a string: ");
    fgets(str, sizeof(str), stdin);
    str[strcspn(str, "\n")] = '\0';   /* remove the trailing newline */
    for(i = 0; str[i] != '\0'; i++)
        push(&s, str[i]);
    flag = 1;
    for(i = 0; str[i] != '\0'; i++)
    {
        if(str[i] != pop(&s))
        {
            flag = 0;
            break;
        }
    }
    if(flag == 1)
        printf("\nEntered string is a palindrome\n");
    else
        printf("\nEntered string is not a palindrome\n");
    return 0;
}
void push(STACK *p, int element)
{
    if(isoverflow(p->top))
    {
        printf("\nStack overflow");
    }
    else
    {
        (p->top)++;
        p->items[p->top] = element;
    }
}
int pop(STACK *p)
{
    if(isempty(p->top))
    {
        printf("\nStack underflow");
        return -1;
    }
    return (p->items[(p->top)--]);
}
void display(STACK s)
{
    int i;
    if(isempty(s.top))
    {
        printf("\nStack is empty");
    }
    else
    {
        for(i = s.top; i >= 0; i--)
            printf("\n%d", s.items[i]);
    }
}
/* Function to check stack overflow condition */
int isoverflow(int top)
{
    return (top == SIZE - 1);
}
/* Function to check stack empty condition */
int isempty(int top)
{
    return (top == -1);
}


12. (a)
Characteristics of AVL Tree
The main characteristics of an AVL tree are as follows:
1. If the BF of a node is 0, it means that the height of the left sub-tree
and height of the right sub-tree is equal .
2. If the BF of a node is 1, it means that the height of the left sub-tree is
greater than the height of the right sub-tree.
3. If the BF of a node is –1, it means that the height of the left sub-tree
is lesser than the height of the right sub-tree.
A tree is an AVL tree if every node satisfies one of the three conditions
discussed above, i.e. has a balance factor of -1, 0 or 1. Some AVL and
non-AVL trees with their balance factors are shown in Figure 1.
      1         1          0
     /         /          / \
    0         0          0   0

          (a) AVL Trees

      2              -2
     /                 \
    1                  -1
   /                     \
  0                       0

        (b) Non-AVL Trees

Figure 1 AVL Tree and Non-AVL Tree

A non-AVL tree can be converted to an AVL tree by performing
different types of rotation on the given tree: either a single or a double
rotation.
A single rotation can be a left rotation (anti-clockwise rotation) or a
right rotation (clockwise rotation). Left and right rotations for non-AVL
and AVL trees are shown in Figure 2 and Figure 3, respectively.


  -2 A                            0 B
      \          Rotate Left     /   \
     -1 B       ------------>  0 A   0 C
        \
       0 C

  (a) Non-AVL Tree            (b) AVL Tree

Figure 2 Left Rotation (Anti-Clock-Wise Rotation)

      2 C                         0 B
       /         Rotate Right    /   \
    1 B         ------------>  0 A   0 C
     /
  0 A

  (a) Non-AVL Tree            (b) AVL Tree

Figure 3 Right Rotation (Clock-Wise Rotation)

Double rotation consists of the following:


•• Rotate right and rotate left (Figure 4)
•• Rotate left and rotate right (Figure 5)

  -2 A                   -2 A                     0 B
      \   Rotate Right       \    Rotate Left    /   \
     1 C  ----------->      -1 B  ----------->  0 A   0 C
      /                         \
   0 B                         0 C

  Non-AVL Tree           Non-AVL Tree           AVL Tree

Figure 4 Rotate Right and Rotate Left


     2 C                    2 C                     0 B
      /     Rotate Left      /     Rotate Right    /   \
   -1 A     ---------->   1 B      ----------->  0 A   0 C
      \                    /
     0 B                0 A

  Non-AVL Tree        Non-AVL Tree              AVL Tree

Figure 5 Rotate Left and Rotate Right

12. (b)
Binary Search Tree
A binary search tree is also called a binary sorted tree. A binary search
tree is either empty or each node N of the tree satisfies the following
properties:
1. The key value in the left child is not more than the value of the root.
2. The key value in the right child is more than or identical to the value
of the root.
3. All the sub-trees, i.e. the left and right sub-trees, follow the two rules
mentioned above.
A binary search tree is shown in Figure 1.
In Figure 1 the number 7 is the root node of the binary tree. There are
two sub-trees of root 7: the left sub-tree rooted at 4 and the right sub-tree
rooted at 8. The values in the left sub-tree are lower than the root and the
values in the right sub-tree are higher than the root node. This property
can be observed at all levels in the tree.
          7
        /   \
       4     8
      / \     \
     3   5     9
        / \
       4   6

Figure 1 Binary Search Tree


Searching an Element in a Binary Search Tree

The item to be searched is compared with the root node. If it is less
than the root, the search continues in the left sub-tree; otherwise it
continues in the right sub-tree. The process is repeated until the item is
found. A program based on this idea is given below.
Example: Write a program to search an element from the binary
tree.
# include <stdio.h>
# include <stdlib.h>
struct tree
{
    long data;
    struct tree *left;
    struct tree *right;
};
long sn;
struct tree *bt=NULL;
struct tree *insert(struct tree *bt, long no);
void search(struct tree *bt, long sn);
int main()
{
    long no;
    puts("Enter the nodes of tree in preorder: and 0 to quit");
    scanf("%ld",&no);
    while(no!=0)
    {
        bt= insert(bt,no);
        scanf("%ld",&no);
    }
    printf("\nEnter the number to search:-");
    scanf("%ld",&sn);
    search(bt,sn);
    return 0;
}
struct tree *insert(struct tree *bt, long no)
{
    if(bt==NULL)
    {
        bt=(struct tree *) malloc(sizeof(struct tree));
        bt->left=bt->right=NULL;
        bt->data=no;
    }
    else
    {
        if(no<bt->data)
            bt->left=insert(bt->left,no);
        else if(no>bt->data)
            bt->right=insert(bt->right,no);
        else /* no==bt->data */
        {
            puts("Duplicate node: Program exited");
            exit(0);
        }
    }
    return(bt);
}
void search(struct tree *bt, long fn)
{
    if(bt==NULL)
        puts("The number does not exist");
    else if(fn==bt->data)
        printf("The number %ld is present in tree",fn);
    else if(fn<bt->data)
        search(bt->left,fn);
    else
        search(bt->right, fn);
}
OUTPUT
Enter the nodes of tree in preorder: and 0 to quit
3 5 11 17 34 0
Enter the number to search:-17
The number 17 is present in tree
Enter the nodes of tree in preorder: and 0 to quit
3 5 11 17 34 0
Enter the number to search:-4
The number does not exist

Explanation:
This program contains struct tree which is used to store the binary
tree. The insert() function inserts the nodes into the binary tree. The


search() function searches for the number in the tree. The fn variable
is used to store the number which the user wants to find. The search
function first compares the fn with the root node. If the value of fn is
less than the root node then the function searches the number in the left
sub-tree, else it finds the number in the right sub-tree.
Insertion of an Element in Binary Search Tree
Insertion of an element in a binary search tree needs to locate the parent
node. The element to be inserted may belong in the left sub-tree or the
right sub-tree. If the inserted number is less than the root node then the
left sub-tree is recursively visited, otherwise the right sub-tree is chosen
for insertion. A program based on the above notion is described below.
Example: Write a program to insert an element into the binary
search tree.
# include <stdio.h>
# include <stdlib.h>
struct tree
{
    long data;
    struct tree *left;
    struct tree *right;
};
long in;
struct tree *bt=NULL;
struct tree *insert(struct tree *bt, long no);
void inorder(struct tree *bt);
int main()
{
    long no;
    puts("Enter the nodes of tree in preorder: and 0 to quit");
    scanf("%ld",&no);
    while(no!=0)
    {
        bt= insert(bt,no);
        scanf("%ld",&no);
    }
    printf("Enter the number to insert:- ");
    scanf("%ld",&in);
    bt=insert(bt,in);
    printf("The inorder of tree after insertion of an element\n");


    inorder(bt);
    return 0;
}
struct tree *insert(struct tree *bt, long no)
{
    if(bt==NULL)
    {
        bt=(struct tree *) malloc(sizeof(struct tree));
        bt->left=bt->right=NULL;
        bt->data=no;
    }
    else
    {
        if(no<bt->data)
            bt->left=insert(bt->left,no);
        else if(no>bt->data)
            bt->right=insert(bt->right,no);
        else /* no==bt->data */
        {
            puts("Duplicate node: Program exited");
            exit(0);
        }
    }
    return(bt);
}
void inorder(struct tree *bt)
{
    if(bt!=NULL)
    {
        inorder(bt->left);
        printf("%ld ",bt->data);
        inorder(bt->right);
    }
}
OUTPUT
Enter the nodes of tree in preorder: and 0 to quit
7 5 9 0
Enter the number to insert:- 2
The inorder of tree after insertion of an element
2 5 7 9


Explanation:
This program invokes a function insert() which inserts a node into
the tree pointed to by *bt. First, the nodes to be inserted are entered,
and then the program calls insert() for insertion of the new element.
The newly inserted element is first compared with the root of the tree.
If the number is less than the root node, it is recursively compared with
the nodes of the left sub-tree, otherwise of the right sub-tree. The
appropriate parent node is found and the element is inserted. After
insertion, the elements are arranged in inorder using inorder() and
displayed on the screen.

Traversing the Binary Search Tree


The binary search tree can be traversed similar to the binary tree by the
inorder, preorder and postorder traversing methods.
Example: Program to traverse the binary search tree by using the
inorder, preorder and postorder methods.
# include <stdio.h>
# include <stdlib.h>
struct rec
{
    int data;
    struct rec *left;
    struct rec *right;
};
struct rec *t1;
struct rec *insert(struct rec *t1, int data);
void inorder(struct rec *t1);
void preorder(struct rec *t1);
void postorder(struct rec *t1);
int main()
{
    int digit;
    puts("Enter the t1 in pre order and 0 to quit");
    scanf("%d",&digit);
    while(digit!=0)
    {
        t1=insert(t1,digit);
        scanf("%d",&digit);
    }
    printf("The preorder of the t1 is\n");
    preorder(t1);
    printf("\nThe inorder of the t1 is\n");
    inorder(t1);
    printf("\nThe post order of the t1 is\n");
    postorder(t1);
    return 0;
}
struct rec *insert(struct rec *t1, int digit)
{
    if(t1==NULL)


    {
        t1=(struct rec *) malloc (sizeof(struct rec));
        t1->left= t1->right=NULL;
        t1->data=digit;
    }
    else if(digit< t1->data)
        t1->left=insert(t1->left,digit);
    else if(digit > t1->data)
        t1->right=insert(t1->right, digit);
    else /* digit==t1->data */
    {
        printf("Duplicate node: program exited");
        exit(0);
    }
    return(t1);
}
void inorder(struct rec *t1)
{
if(t1!=NULL)
{
inorder(t1->left);
printf("%d ",t1->data);
inorder(t1->right);
}
}
void preorder(struct rec *t1)
{
if(t1!=NULL)
{
printf("%d ",t1->data);
preorder(t1->left);
preorder(t1->right);
}
}
void postorder(struct rec *t1)
{
if(t1!=NULL)
{
postorder(t1->left);


postorder(t1->right);
printf("%d ",t1->data);
}
}
OUTPUT
Enter the t1 in pre order and 0 to quit
3 2 4 0
The preorder of the t1 is
3 2 4
The inorder of the t1 is
2 3 4
The post order of the t1 is
2 4 3

Explanation:
The program first gets the binary search tree into the structure object t1.
And the insert() is used to insert the element into the tree. The three
functions inorder(), preorder() and postorder() are used to traverse
the tree in appropriate manner.

13. (a)
Open Addressing:
Open addressing is also called closed hashing. Unlike separate chaining
(open hashing), it resolves collisions without linked lists: if a collision
occurs, alternative cells of the table are tried until an empty cell is
found. Cells h0(X), h1(X), h2(X), ... are tried in succession, where
hi(X) = (Hash(X) + F(i)) mod TableSize, with F(0) = 0. Three common
collision resolution strategies are:
•• Linear Probing
•• Quadratic Probing
•• Double Hashing

Linear Probing:
In linear probing, F is a linear function of i, typically F(i) = i. The
following figure shows the result of inserting the keys {89, 18, 49, 58, 69}
into a hash table of size 10, with hash(X) = X mod 10 and collision
resolution strategy F(i) = i.

     Empty Table  After 89  After 18  After 49  After 58  After 69
0                                     49        49        49
1                                               58        58
2                                                         69
3
4
5
6
7
8                           18        18        18        18
9                 89        89        89        89        89

The first collision occurs when 49 is inserted; it is put in the next
available spot, spot 0. The key 58 collides with 18, 89 and then 49, and
so it is put in the next available free spot, spot 1. The key 69 similarly
ends up in spot 2.
Disadvantage: Even if the table is relatively empty, blocks of occupied
cells start forming. This is known as primary clustering.
Quadratic Probing:
It eliminates the primary clustering problem of linear probing. The usual
choice is F(i) = i². The following figure shows the result of inserting the
same keys with quadratic probing.

     Empty Table  After 89  After 18  After 49  After 58  After 69
0                                     49        49        49
1
2                                               58        58
3                                                         69
4
5
6
7
8                           18        18        18        18
9                 89        89        89        89        89

When 49 collides with 89, the next cell is attempted and since it is
empty, 49 is placed there. Next, 58 collides at spot 8. The next cell is
tried, but again a collision occurs. The cell i² = 2² = 4 away from the
original position is vacant, so 58 is placed in cell 2. The key 69 likewise
collides at spot 9 and at cell 0, and is placed in cell (9 + 4) mod 10 = 3.
Disadvantage:
•• Performance degrades; there is no guarantee of finding an empty cell
once the table gets more than half full.

The routine that finds the position for a key with quadratic probing is:

Position
Find(ElementType Key, HashTable H)
{
    Position CurrentPos;
    int CollisionNum;

    CollisionNum = 0;
    CurrentPos = Hash(Key, H->TableSize);
    while(H->TheCells[CurrentPos].Info != Empty &&
          H->TheCells[CurrentPos].Element != Key)
    {
        CurrentPos += 2 * ++CollisionNum - 1;
        if(CurrentPos >= H->TableSize)
            CurrentPos -= H->TableSize;
    }
    return CurrentPos;
}

Insertion then places the key in the cell returned by this routine.
Double Hashing:
For double hashing, the choice is F(i) = i · hash2(X). A popular second
hash function is hash2(X) = R - (X mod R); here R = 7.

     Empty Table  After 89  After 18  After 49  After 58  After 69
0                                                         69
1
2
3                                               58        58
4
5
6                                     49        49        49
7
8                           18        18        18        18
9                 89        89        89        89        89

The key 49 collides at spot 9; since hash2(49) = 7 - 0 = 7, the next cell
tried is (9 + 7) mod 10 = 6, which is empty. Similarly, 58 collides at
spot 8 and is placed in cell (8 + 5) mod 10 = 3, and 69 collides at spot 9
and is placed in cell (9 + 1) mod 10 = 0.

13. (b)
The Dynamic Equivalence Problem
Given an equivalence relation ~, the natural problem is to decide, for any
a and b, if a ~ b. If the relation is stored as a two-dimensional array of
booleans, then, of course, this can be done in constant time. The problem
is that the relation is usually not explicitly, but rather implicitly, defined.
As an example, suppose the equivalence relation is defined over the
five-element set {a1, a2, a3, a4, a5}. Then there are 25 pairs of elements,
each of which is either related or not. However, the information a1 ~ a2,
a3 ~ a4, a5 ~ a1, a4 ~ a2 implies that all pairs are related. We would like to
be able to infer this quickly.
The equivalence class of an element a ∈ S is the subset of S that
contains all the elements that are related to a. Notice that the equivalence
classes form a partition of S: every member of S appears in exactly one
equivalence class. To decide if a ~ b, we need only to check whether a
and b are in the same equivalence class. This provides our strategy to
solve the equivalence problem.
The input is initially a collection of N sets, each with one element. This
initial representation is that all relations (except reflexive relations) are
false. Each set has a different element, so that Si ∩ Sj = ∅; this makes
the sets disjoint.
There are two permissible operations. The first is Find, which returns
the name of the set (that is, the equivalence class) containing a given
element. The second operation adds relations. If we want to add the
relation a ~ b, then we first see if a and b are already related. This is done
by performing Finds on both a and b and checking whether they are in
the same equivalence class. If they are not, then we apply Union. This
operation merges the two equivalence classes containing a and b into
a new equivalence class. From a set point of view, the result of Union is to
create a new set Sk = Si ∪ Sj, destroying the originals and preserving the
disjointness of all the sets. The algorithm to do this is frequently known
as the disjoint set Union/Find algorithm for this reason.


This algorithm is dynamic because, during the course of the algorithm,


the sets can change via the Union operation. The algorithm must also
operate on-line: When a Find is performed, it must give an answer before
continuing. Another possibility would be an off-line algorithm. Such an
algorithm would be allowed to see the entire sequence of Unions and
Finds. The answer it provides for each Find must still be consistent with
all the Unions that were performed up until the Find, but the algorithm
can give all its answers after it has seen all the questions. The difference
is similar to taking a written exam (which is generally off-line—you
only have to give the answers before time expires), and an oral exam
(which is on-line, because you must answer the current question before
proceeding to the next question).
Notice that we do not perform any operations comparing the relative
values of elements, but merely require knowledge of their location. For
this reason, we can assume that all the elements have been numbered
sequentially from 1 to N and that the numbering can be determined
easily by some hashing scheme. Thus, initially we have Si = {i} for i = 1
through N.
Our second observation is that the name of the set returned by Find is
actually fairly arbitrary. All that really matters is that Find(a) = Find(b)
if and only if a and b are in the same set.
These operations are important in many graph theory problems and also
in compilers which process equivalence (or type) declarations. We will
see an application later.
There are two strategies to solve this problem. One ensures that the
Find instruction can be executed in constant worst-case time, and the
other ensures that the Union instruction can be executed in constant
worst-case time. It has recently been shown that both cannot be done
simultaneously in constant worst-case time.
We will now briefly discuss the first approach. For the Find operation
to be fast, we could maintain, in an array, the name of the equivalence
class for each element. Then Find is just a simple O(1) lookup. Suppose
we want to perform Union(a, b). Suppose that a is in equivalence class
i and b is in equivalence class j. Then we scan down the array, changing
all i's to j. Unfortunately, this scan takes Θ(N). Thus, a sequence of N - 1
Unions (the maximum, since then everything is in one set) would take
Θ(N²) time. If there are Ω(N²) Find operations, this performance is fine,
since the total running time would then amount to O(1) for each Union


or Find operation over the course of the algorithm. If there are fewer
Finds, this bound is not acceptable.
One idea is to keep all the elements that are in the same equivalence
class in a linked list. This saves time when updating, because we do not
have to search through the entire array. This by itself does not reduce the
asymptotic running time, because it is still possible to perform Θ(N²)
equivalence class updates over the course of the algorithm.
If we also keep track of the size of each equivalence class, and when
performing Unions we change the name of the smaller equivalence class
to the larger, then the total time spent for N - 1 merges is O(N log N).
The reason for this is that each element can have its equivalence class
changed at most log N times, since every time its class is changed, its
new equivalence class is at least twice as large as its old. Using this
strategy, any sequence of M Finds and up to N - 1 Unions takes at most
O(M + N log N) time.
In the remainder of this chapter, we will examine a solution to the
Union/Find problem that makes Unions easy but Finds hard. Even so,
the running time for any sequence of at most M Finds and up to N - 1
Unions will be only a little more than O(M +N).

14. (a)
A second greedy strategy is continually to select the edges in order of
smallest weight and accept an edge if it does not cause a cycle. The
action of the algorithm on the graph in the preceding example is shown
in Figure 1.

Figure 1 Action of Kruskal’s algorithm on G


Formally, Kruskal’s algorithm maintains a forest—a collection of trees.


Initially, there are | V | single-node trees. Adding an edge merges two
trees into one. When the algorithm terminates, there is only one tree, and
this is the minimum spanning tree. Figure 2 shows the order in which
edges are added to the forest.

Figure 2 Kruskal's algorithm after each stage


The algorithm terminates when enough edges are accepted. It turns
out to be simple to decide whether edge (u, v) should be accepted or
rejected. The appropriate data structure is the Union/Find algorithm of
the previous chapter.
The invariant we will use is that at any point in the process, two vertices
belong to the same set if and only if they are connected in the current
spanning forest. Thus, each vertex is initially in its own set. If u and v
are in the same set, the edge is rejected, because since they are already
connected, adding (u, v) would form a cycle. Otherwise, the edge is
accepted, and a Union is performed on the two sets containing u and v. It
is easy to see that this maintains the set invariant, because once the edge
(u, v) is added to the spanning forest, if w was connected to u and x was
connected to v, then x and w must now be connected, and thus belong in
the same set.
The edges could be sorted to facilitate the selection, but building a heap
in linear time is a much better idea. Then successive DeleteMins give the
edges to be tested in order. Typically, only a small fraction of the edges need


to be tested before the algorithm can terminate, although it is always


possible that all the edges must be tried. For instance, if there was
an extra vertex v8 and edge (v5, v8) of cost 100, all the edges would
have to be examined. Procedure Kruskal in Figure 3 finds a minimum
spanning tree. Because an edge consists of three pieces of data, on some
machines it is more efficient to implement the priority queue as an array
of pointers to edges, rather than as an array of edges. The effect of this
implementation is that, to rearrange the heap, only pointers, not large
records, need to be moved.

Figure 3 Pseudocode for Kruskal’s algorithm


The worst-case running time of this algorithm is O(|E| log |E|), which is
dominated by the heap operations. Notice that since |E| = O(|V|²), this
running time is actually O(|E| log |V|). In practice, the algorithm is much
faster than this time bound would indicate.

14. (b)
If the graph is weighted, the problem (apparently) becomes harder, but
we can still use the ideas from the unweighted case.


We keep all of the same information as before. Thus, each vertex is


marked as either known or unknown. A tentative distance dv is kept for
each vertex, as before. This distance turns out to be the shortest path
length from s to v using only known vertices as intermediates. As before,
we record pv, which is the last vertex to cause a change to dv.
The general method to solve the single-source shortest-path problem is
known as Dijkstra’s algorithm. This thirty-year-old solution is a prime
example of a greedy algorithm. Greedy algorithms generally solve a
problem in stages by doing what appears to be the best thing at each
stage. For example, to make change in U.S. currency, most people count
out the quarters first, then the dimes, nickels, and pennies. This greedy
algorithm gives change using the minimum number of coins. The main
problem with greedy algorithms is that they do not always work. The
addition of a 12-cent piece breaks the coin-changing algorithm for
returning 15 cents, because the answer it gives (one 12-cent piece and
three pennies) is not optimal (one dime and one nickel).
Dijkstra’s algorithm proceeds in stages, just like the unweighted shortest-
path algorithm. At each stage, Dijkstra’s algorithm selects a vertex v,
which has the smallest dv among all the unknown vertices, and declares
that the shortest path from s to v is known. The remainder of a stage
consists of updating the values of dw.
In the unweighted case, we set dw = dv + 1 if dw = ∞. Thus, we essentially
lowered the value of dw if vertex v offered a shorter path. If we apply the
same logic to the weighted case, then we should set dw = dv + cv,w if this
new value for dw would be an improvement. Put simply, the algorithm
decides whether or not it is a good idea to use v on the path to w. The
original cost, dw, is the cost without using v; the cost calculated above is
the cheapest path using v (and only known vertices).
The graph in Figure 9.20 is our example. Figure 9.21 represents the
initial configuration, assuming that the start node, s, is v1. The first
vertex selected is v1, with path length 0. This vertex is marked known.
Now that v1 is known, some entries need to be adjusted. The vertices
adjacent to v1 are v2 and v4. Both these vertices get their entries adjusted,
as indicated in Figure 9.22.


Figure 9.20 The directed graph G (again)

Figure 9.21 Initial configuration of table used in Dijkstra’s algorithm

Figure 9.22 After v1 is declared known


Next, v4 is selected and marked known. Vertices v3, v5, v6, and v7 are
adjacent, and it turns out that all require adjusting, as shown in Figure
9.23.


Figure 9.23 After v4 is declared known


Next, v2 is selected. v4 is adjacent but already known, so no work is
performed on it. v5 is adjacent but not adjusted, because the cost of
going through v2 is 2 + 10 = 12 and a path of length 3 is already known.
Figure 9.24 shows the table after these vertices are selected.

Figure 9.24 After v2 is declared known


The next vertex selected is v5 at cost 3. v7 is the only adjacent vertex, but
it is not adjusted, because 3 + 6 > 5. Then v3 is selected, and the distance
for v6 is adjusted down to 3 + 5 = 8. The resulting table is depicted in
Figure 9.25.

Figure 9.25 After v5 and then v3 are declared known


Next v7 is selected; v6 gets updated down to 5 + 1 = 6. The resulting table
is Figure 9.26.

Figure 9.26 After v7 is declared known


Finally, v6 is selected. The final table is shown in Figure 9.27. Figure 9.28
graphically shows how edges are marked known and vertices updated
during Dijkstra’s algorithm.

Figure 9.27 After v6 is declared known and algorithm terminates


To print out the actual path from a start vertex to some vertex v, we can
write a recursive routine to follow the trail left in the p array.


Figure 9.28 Stages of Dijkstra’s algorithm


We now give pseudocode to implement Dijkstra’s algorithm. We will
assume that the vertices are numbered from 0 to NumVertex – 1 for
convenience (see Fig. 9.29) and that the graph can be read into an
adjacency list by the routine ReadGraph.


Figure 9.29 Declarations for Dijkstra’s algorithm


In the routine in Figure 9.30, the start vertex is passed to the initialization
routine. This is the only place in the code where the start vertex needs
to be known.

Figure 9.30 Table initialization routine


The path can be printed out using the recursive routine in Figure 9.31.
The routine recursively prints the path all the way up to the vertex before
v on the path, and then just prints v. This works because the path is
simple.
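The recursive routine described here (the book's Figure 9.31) can be sketched as below; writing into a buffer rather than printing directly makes it easy to check. The prev[] array (the p array above, with −1 marking the start vertex) and the v1, v2, ... numbering are assumptions.

```c
#include <stdio.h>
#include <string.h>

/* Sketch of the recursive path-printing idea: follow the prev[] trail back
   to the start vertex, then emit vertices on the way out of the recursion.
   The recursion terminates because the path is simple. */
int path_to_buf(const int prev[], int v, char *buf) {
    int n = 0;
    if (prev[v] != -1) {                      /* not the start vertex */
        n = path_to_buf(prev, prev[v], buf);  /* emit everything before v */
        n += sprintf(buf + n, " -> ");
    }
    n += sprintf(buf + n, "v%d", v + 1);      /* then emit v itself */
    return n;
}
```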


Figure 9.31 Routine to print the actual shortest path


Figure 9.32 shows the main algorithm, which is just a for loop to fill up
the table using the greedy selection rule.

Figure 9.32 Pseudocode for Dijkstra's algorithm


A proof by contradiction will show that this algorithm always works as
long as no edge has a negative cost. If any edge has negative cost, the
algorithm could produce the wrong answer. The running time depends
on how the table is manipulated, which we have yet to consider. If we use
the obvious algorithm of scanning down the table to find the minimum
dv, each phase will take O(|V|) time to find the minimum, and thus O(|V|²)
time will be spent finding the minimum over the course of the algorithm.
The time for updating dw is constant per update, and there is at most one
update per edge, for a total of O(|E|). Thus, the total running time is
O(|E| + |V|²) = O(|V|²). If the graph is dense, with |E| = Θ(|V|²), this
algorithm is not only simple but essentially optimal, since it runs in
time linear in the number of edges.
If the graph is sparse, with |E| = Θ(|V|), this algorithm is too slow. In this
case, the distances would need to be kept in a priority queue. There are
actually two ways to do this; both are similar.
Lines 2 and 5 combine to form a DeleteMin operation, since once the
unknown minimum vertex is found, it is no longer unknown and must
be removed from future consideration. The update at line 9 can be
implemented two ways.
One way treats the update as a DecreaseKey operation. The time to find
the minimum is then O(log|V|), as is the time to perform updates, which
amount to DecreaseKey operations. This gives a running time of O(|E|
log |V| + |V|log|V|) = O(|E|log |V|), an improvement over the previous
bound for sparse graphs. Since priority queues do not efficiently support
the Find operation, the location in the priority queue of each value of
di will need to be maintained and updated whenever di changes in the
priority queue. If the priority queue is implemented by a binary heap,
this will be messy. If a pairing heap (Chapter 12) is used, the code is not
too bad.
An alternate method is to insert w and the new value dw into the priority
queue every time line 9 is executed. Thus, there may be more than one
representative for each vertex in the priority queue. When the DeleteMin
operation removes the smallest vertex from the priority queue, it must be
checked to make sure that it is not already known. Thus, line 2 becomes
a loop performing DeleteMins until an unknown vertex emerges.
Although this method is superior from a software point of view, and is
certainly much easier to code, the size of the priority queue could get to
be as large as |E|. This does not affect the asymptotic time bounds, since
|E| ≤ |V|² implies that log |E| ≤ 2 log |V|. Thus, we still get an O(|E| log |V|)
algorithm. However, the space requirement does increase, and this could
be important in some applications. Moreover, because this method
requires |E| DeleteMins instead of only |V|, it is likely to be slower in
practice.


Notice that for the typical problems, such as computer mail and
mass transit commutes, the graphs are typically very sparse because
most vertices have only a couple of edges, so it is important in many
applications to use a priority queue to solve this problem.
There are better time bounds possible using Dijkstra’s algorithm if
different data structures are used. In Chapter 11, we will see another
priority queue data structure called the Fibonacci heap. When this is
used, the running time is O(|E| + | V| log | V|). Fibonacci heaps have
good theoretical time bounds but a fair amount of overhead, so it is not
clear whether using Fibonacci heaps is actually better in practice than
Dijkstra’s algorithm with binary heaps. Needless to say, there are no
average-case results for this problem, since it is not even obvious how to
model a random graph.

15. (a)
Merge Sort
25, 3, 45, 789, 87, 56, 47, 65, 30, 59, 56, 24
Pass 1:
25 3  45 789  87 56  47 65  30 59  56 24
Sort the elements of each pair
3 25  45 789  56 87  47 65  30 59 24 56
Pass 2: Merge two pairs
3 25  45 789  56 87 47 65  30 59 24 56
Sort the elements
3 25  45 789  47 56 65 87  24 30 56 59
Pass 3: Again merge two groups
3 25 45 789   47 56 65 87  24 30 56 59
Sort the elements
3 25 45 47 56 65 87 789 24 30 56 59
Pass 4: Again merge two groups
3 25 45 47 56 65 87 789  24 30 56 59
Sort the elements
3 24 25 30 45 47 56 56 59 65 87 789
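The passes above are exactly a bottom-up merge sort: runs of width 1, 2, 4, ... are merged until a single sorted run remains. A minimal sketch (the function names are illustrative, not from the question paper):

```c
#include <stdlib.h>
#include <string.h>

/* Merge the two sorted runs a[lo..mid) and a[mid..hi) through tmp. */
static void merge(int *a, int *tmp, int lo, int mid, int hi) {
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof *a);
}

/* Bottom-up merge sort: one outer iteration per pass of the trace above. */
void merge_sort(int *a, int n) {
    int *tmp = malloc(n * sizeof *tmp);
    for (int width = 1; width < n; width *= 2)
        for (int lo = 0; lo < n; lo += 2 * width) {
            int mid = lo + width, hi = lo + 2 * width;
            if (mid > n) mid = n;
            if (hi > n)  hi = n;
            if (mid < hi) merge(a, tmp, lo, mid, hi);
        }
    free(tmp);
}
```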


15. (b)
Backtracking is a technique used to solve problems with a large search
space, by systematically trying and eliminating possibilities.
Backtracking – Eight Queens Problem:

•• Find an arrangement of 8 queens on a single chess board such that no
two queens are attacking one another.
•• In chess, queens can move all the way down any row, column or
diagonal (so long as no pieces are in the way).

The backtracking strategy is as follows:


•• Place a queen on the first available square in row 1.
•• Move onto the next row, placing a queen on the first available square
there (that doesn't conflict with the previously placed queens).


•• Continue in this fashion until either: you have solved the problem,
or you get stuck.
•• When you get stuck, remove the queens that got you there, until you
get to a row where there is another valid square to try.

When we carry out backtracking, an easy way to visualize what is going
on is a tree that shows all the different possibilities that have been tried.
On the board we will show a visual representation of solving the 4
Queens problem (placing 4 queens on a 4x4 board where no two attack
one another).
•• The neat thing about coding up backtracking, is that it can be done
recursively, without having to do all the bookkeeping at once.
•• Instead, the stack of recursive calls does most of the bookkeeping
(i.e., keeping track of which queens we've placed and which
combinations we've tried so far).
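The strategy above can be coded recursively, with the call stack doing the bookkeeping. A hypothetical sketch that counts complete placements for an n × n board (the names are illustrative):

```c
/* col[row] holds the column of the queen already placed in that row. */
static int safe(const int col[], int row, int c) {
    for (int r = 0; r < row; r++)
        if (col[r] == c ||                       /* same column, or ...   */
            row - r == c - col[r] ||             /* ... same / diagonal   */
            row - r == col[r] - c)               /* ... same \ diagonal   */
            return 0;
    return 1;
}

static int place(int col[], int row, int n) {
    int count = 0;
    if (row == n) return 1;          /* all queens placed: one solution */
    for (int c = 0; c < n; c++)
        if (safe(col, row, c)) {
            col[row] = c;            /* try this square ...             */
            count += place(col, row + 1, n);
        }                            /* ... and backtrack to try others */
    return count;
}

int count_solutions(int n) {
    int col[16];
    return (n > 0 && n <= 16) ? place(col, 0, n) : 0;
}
```

For n = 4 this finds the two solutions visualized on the board; for n = 8 it finds all 92.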



B.E./B.Tech. Degree Examination,
May/June 2013
Third Semester
Computer Science and Engineering
(Common to Information Technology)

PROGRAMMING AND DATA STRUCTURES


(Regulation 2008/2010)
Time: Three hours          Maximum: 100 marks
Answer ALL questions

PART A (10 × 2 = 20 marks)

1. What is abstract data types? Give example.

2. What are the applications of stack and queue?

3. Show that in a binary tree of N nodes, there are N + 1 NULL pointers.

4. Show the result of inserting 2; 1; 4; 5; 9; 3; 6; 7 into an initially empty
AVL tree.

5. What is rehashing?

6. Write code for disjoint set find.

7. Does either Prim's or Kruskal's algorithm work if there are negative edge
weights?

8. List out the applications of graph.

9. Compare and contrast greedy algorithm and dynamic programming.

10. Draw the solution for the 4-queen problem.


Part B (5 × 16 = 80 marks)

11. (a) (i) Explain how a stack is used to convert the following infix
expression into postfix form: a + b * c + (d * e + f) * g. (8)
(ii) Give the linked list implementation of stack. (8)

Or
(b) Explain and write the routines for insertion, deletion and finding an
element in a cursor based linked list. (16)
12. (a) (i) Construct an expression tree for the expression ab + cde +**
 (10)
(ii) Give a precise expression for the minimum number of nodes
in an AVL tree of height h and what is the minimum number of
nodes in an AVL tree of height 15? (6)

Or
(b) (i) Write function to perform delete min in a binary heap. (8)
(ii) Show the result of inserting 3; 1; 4; 6; 9; 2; 5; 7 into an initially
empty binary search tree. (8)
13. (a) Given input (4371, 1323, 6173, 4199, 4344, 9679, 1989) and a hash
function h(x) = x mod 10, show the resulting:
(i) Separate chaining hash table.
(ii) Open addressing hash table using linear probing.
(iii) Open addressing hash table using quadratic probing.
(iv) Open addressing hash table with second hash function
h2(x) = 7-(x mod 7). (16)

Or
(b) Give short note on:
(i) Dynamic equivalence problem. (8)
(ii) Smart union algorithm. (8)


14. (a) (i) Find the shortest weighted path from A to all other vertices for
the graph in given below figure.
(ii) Find the shortest unweighted path from B to all other vertices for
the graph in given below figure. (16)

Or
(b) (i) Write a routine to implement Kruskal’s algorithm. (8)
(ii) Discuss in detail about bi connectivity. (8)
15. (a) (i) Explain in detail about the branch and bound algorithm design
technique with an example. (10)
(ii) Write a routine for random number generator algorithm. (6)


Solutions
Part A

1. The programmer needs to define everything related to the data type, such
as how the data values are stored and the possible operations that can be
carried out on it; the custom data type must behave like a built-in type
and not create any confusion while coding the program. Such custom
data types are called abstract data types.
For example, if the programmer wants to define date data type which
is not available in C/C++, s/he can create it using struct or class. Only
a declaration is not enough; it is also necessary to check whether the
date is valid or invalid. This can be achieved by checking using different
conditions. The following program explains the creation of date data
type:
Example
#include <stdio.h>
struct date
{
    int dd;
    int mm;
    int yy;
};
int main()
{
    struct date d; /* date is an abstract data type */
    printf("Enter date (dd mm yy): ");
    scanf("%d %d %d", &d.dd, &d.mm, &d.yy);
    printf("Date %d-%d-%d", d.dd, d.mm, d.yy);
    return 0;
}
OUTPUT
Enter date (dd mm yy): 25 10 2011
Date 25-10-2011

Explanation:
In this program, the date data type is declared using the struct keyword.
It contains three integer variables dd, mm and yy to store the date, month
and year, which are read with a scanf statement and displayed with a
printf statement.

2. Applications of Stack:
•• Balancing Symbols
•• Postfix Expressions
•• Infix to Postfix conversion
•• Function calls

Applications of Queue:
•• Job scheduling, to give efficient running times
•• First come first served servicing of requests

3. Every node has 2 outgoing pointers. Therefore there are 2N pointers.
Each node, except the root node, has one incoming pointer from its
parent. This accounts for N − 1 pointers. The remaining N + 1 pointers
are NULL pointers.

4.
           4
         /   \
        2     6
       / \   / \
      1   3 5   9
               /
              7

5. If the table gets too full, the running time for the operations will start
taking too long and Inserts might fail for open addressing hashing with
quadratic resolution. This can happen if there are too many removals
intermixed with insertions. A solution, then, is to build another table that
is about twice as big (with an associated new hash function) and scan
down the entire original hash table, computing the new hash value for
each (nondeleted) element and inserting it in the new table. This is called
rehashing.


6.
SetType Find(ElementType X, DisjSet S)
{
    if (S[X] <= 0)
        return X;
    else
        return Find(S[X], S);
}

7. Yes, both algorithms will work with negative edge weights.

8. Applications of Graph:
• Finding the maximum flow in a network
• Travelling salesman problem
• Finding strongly connected components

9.
Greedy Algorithm                          Dynamic Programming
• Greedy algorithm is a method for        • Dynamic Programming is a method
  solving optimization problems.            for solving optimization problems.
• Greedy algorithms are usually more      • Dynamic Programming provides
  efficient than Dynamic Programming        efficient solutions for some problems
  solutions.                                for which a brute force approach
                                            would be very slow.

10. One solution (Q marks a queen):
. Q . .
. . . Q
Q . . .
. . Q .


Part B

11. (a) (i)


Infix to Postfix Conversion
Not only can a stack be used to evaluate a postfix expression, but we can
also use a stack to convert an expression in standard form (otherwise
known as infix) into postfix. We will concentrate on a small version of
the general problem by allowing only the operators +, *, (, ), and insisting
on the usual precedence rules. We will further assume that the expression
is legal. Suppose we want to convert the infix expression
a + b * c + (d * e + f) * g
into postfix. A correct answer is abc*+de*f+g*+.
When an operand is read, it is immediately placed onto the output.
Operators are not immediately output, so they must be saved somewhere.
The correct thing to do is to place operators that have been seen, but not
placed on the output, onto the stack. We will also stack left parentheses
when they are encountered. We start with an initially empty stack.
If we see a right parenthesis, then we pop the stack, writing symbols
until we encounter a (corresponding) left parenthesis, which is popped
but not output.
If we see any other symbol ('+', '*', '('), then we pop entries from the
stack until we find an entry of lower priority. One exception is that we
never remove a '(' from the stack except when processing a ')'. For the
purposes of this operation, '+' has lowest priority and '(' highest. When
the popping is done, we push the operator onto the stack.
Finally, if we read the end of input, we pop the stack until it is empty,
writing symbols onto the output.
To see how this algorithm performs, we will convert the infix expression
above into its postfix form. First, the symbol a is read, so it is passed
through to the output. Then ‘ + ‘ is read and pushed onto the stack. Next
b is read and passed through to the output. The state of affairs at this
juncture is as follows:


Next a ‘*’ is read. The top entry on the operator stack has lower
precedence than ‘*’, so nothing is output and ‘*’ is put on the stack.
Next, c is read and output. Thus far, we have

The next symbol is a '+'. Checking the stack, we find that we will pop
a '*' and place it on the output; pop the other '+', which is not of lower
but equal priority, and place it on the output; and then push the '+'.

The next symbol read is a '(', which, being of highest precedence, is
placed on the stack. Then d is read and output.

We continue by reading a '*'. Since open parentheses do not get removed
except when a closed parenthesis is being processed, there is no output.
Next, e is read and output.

The next symbol read is a ‘+’. We pop and output ‘*’ and then push ‘+’.
Then we read and output f.


Now we read a ‘)’, so the stack is emptied back to the ‘(’. We output a
‘+’.

We read a ‘*’ next; it is pushed onto the stack. Then g is read and output.

The input is now empty, so we pop and output symbols from the stack
until it is empty.

As before, this conversion requires only O(N) time and works in one pass
through the input. We can add subtraction and division to this repertoire
by assigning subtraction and addition equal priority and multiplication
and division equal priority. A subtle point is that the expression a - b - c
will be converted to ab - c - and not abc - -. Our algorithm does the
right thing, because these operators associate from left to right. This is
not necessarily the case in general, since exponentiation associates right
to left: 2^(2^3) = 2^8 = 256, not 4^3 = 64. We leave as an exercise the
problem of adding exponentiation to the repertoire of operators.
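The conversion just traced can be sketched as follows, restricted (as in the text) to single-letter operands and the operators +, *, ( and ). The precedence values and buffer sizes are illustrative assumptions.

```c
#include <string.h>

static int prec(char op) {     /* '(' gets 0 so it is never popped here */
    return op == '*' ? 2 : op == '+' ? 1 : 0;
}

/* Convert an infix string such as "a+b*c+(d*e+f)*g" to postfix in out. */
void infix_to_postfix(const char *in, char *out) {
    char stack[128];
    int top = -1, k = 0;
    for (; *in; in++) {
        char c = *in;
        if (c >= 'a' && c <= 'z')
            out[k++] = c;                    /* operands go straight out */
        else if (c == '(')
            stack[++top] = c;
        else if (c == ')') {
            while (stack[top] != '(')        /* empty back to the '(' */
                out[k++] = stack[top--];
            top--;                           /* discard the '(' itself */
        } else if (c == '+' || c == '*') {
            while (top >= 0 && prec(stack[top]) >= prec(c))
                out[k++] = stack[top--];     /* pop equal/higher priority */
            stack[++top] = c;
        }
    }
    while (top >= 0)                         /* flush remaining operators */
        out[k++] = stack[top--];
    out[k] = '\0';
}
```

Popping while the stack top has priority greater than or equal to the incoming operator is what makes + and * associate from left to right.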

11. (a) (ii)


The first implementation of a stack uses a singly linked list. We perform
a Push by inserting at the front of the list. We perform a Pop by deleting
the element at the front of the list. A Top operation merely examines the
element at the front of the list, returning its value. Sometimes the Pop
and Top operations are combined into one. We could use calls to the

linked list routines of the previous section, but we will rewrite the stack
routines from scratch for the sake of clarity.
First, we give the definitions in Figure 3.39. We implement the stack
using a header. Figure 3.40 shows that an empty stack is tested for in the
same manner as an empty list.

Figure 3.39 Type declaration for linked list implementation of the stack
adt

Figure 3.40 Routine to test whether a stack is empty—linked list
implementation
Creating an empty stack is also simple. We merely create a header node;
MakeEmpty sets the Next pointer to NULL (see Fig. 3.41). The Push is
implemented as an insertion into the front of a linked list, where the
front of the list serves as the top of the stack (see Fig. 3.42). The Top is
performed by examining the element in the first position of the list (see
Fig. 3.43). Finally, we implement Pop as a deletion from the front of the
list (see Fig. 3.44).


Figure 3.41 Routine to create an empty stack—linked list
implementation

Figure 3.42 Routine to push onto a stack—linked list implementation

Figure 3.43 Routine to return top element in a stack—linked list
implementation


Figure 3.44 Routine to pop from a stack—linked list implementation


It should be clear that all the operations take constant time, because
nowhere in any of the routines is there even a reference to the size of
the stack (except for emptiness), much less a loop that depends on this
size. The drawback of this implementation is that the calls to malloc and
free are expensive, especially in comparison to the pointer manipulation
routines. Some of this can be avoided by using a second stack, which is
initially empty. When a cell is to be dropped from the first stack, it is
merely placed on the second stack. Then, when new cells are needed for
the first stack, the second stack is checked first.
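The routines of Figures 3.39 to 3.44 are not reproduced here; a condensed sketch of the same design follows. ElementType is assumed to be int, and error handling for Pop or Top on an empty stack is omitted for brevity.

```c
#include <stdlib.h>

/* Linked-list stack with a header node: s->next is the top of the stack. */
typedef int ElementType;
typedef struct Node { ElementType element; struct Node *next; } Node, *Stack;

Stack create_stack(void) {
    Stack s = malloc(sizeof *s);   /* the header node */
    s->next = NULL;
    return s;
}

int is_empty(Stack s) { return s->next == NULL; }

void push(Stack s, ElementType x) {   /* insert at the front of the list */
    Node *n = malloc(sizeof *n);
    n->element = x;
    n->next = s->next;
    s->next = n;
}

ElementType top(Stack s) { return s->next->element; }

ElementType pop(Stack s) {            /* delete from the front of the list */
    Node *first = s->next;
    ElementType x = first->element;
    s->next = first->next;
    free(first);
    return x;
}
```

Every operation touches only the front of the list, which is why they all run in constant time.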

11. (b)
Many languages, such as basic and fortran, do not support pointers. If
linked lists are required and pointers are not available, then an alternative
implementation must be used. The method we will describe is known as
a cursor implementation.
The two important features present in a pointer implementation of linked
lists are as follows:
1. The data are stored in a collection of structures. Each structure
contains data and a pointer to the next structure.
2. A new structure can be obtained from the system’s global memory by
a call to malloc and released by a call to free.
Our cursor implementation must be able to simulate this. The logical
way to satisfy condition 1 is to have a global array of structures. For
any cell in the array, its array index can be used in place of an address.
Figure 3.28 gives the declarations for a cursor implementation of linked
lists.


Figure 3.28 Declarations for cursor implementation of linked lists


We must now simulate condition 2 by allowing the equivalent of malloc
and free for cells in the CursorSpace array. To do this, we will keep a list
(the freelist) of cells that are not in any list. The list will use cell 0 as a
header. The initial configuration is shown in Figure 3.29.

Figure 3.29 An initialized CursorSpace


A value of 0 for Next is the equivalent of a NULL pointer. The initialization


of CursorSpace is a straightforward loop, which we leave as an exercise.
To perform a malloc, the first element (after the header) is removed from
the freelist. To perform a free, we place the cell at the front of the freelist.
Figure 3.30 shows the cursor implementation of malloc and free. Notice
that if there is no space available, our routine does the correct thing by
setting P = 0. This indicates that there are no more cells left, and also
makes the second line of CursorAlloc a nonoperation (no-op).
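A sketch of the simulated malloc and free just described (the book's Figure 3.30); SPACE_SIZE and the snake_case names are assumptions made for this illustration.

```c
#define SPACE_SIZE 100

/* Cell 0 heads the freelist, and index 0 plays the role of NULL. */
typedef struct { int element; int next; } CursorNode;
CursorNode cursor_space[SPACE_SIZE];

void initialize_cursor_space(void) {
    for (int i = 0; i < SPACE_SIZE; i++)        /* chain every cell onto  */
        cursor_space[i].next = (i + 1) % SPACE_SIZE;  /* the freelist;    */
}                                               /* the last cell gets 0  */

int cursor_alloc(void) {          /* "malloc": unlink the cell after the  */
    int p = cursor_space[0].next; /* header; returns 0 if no cells left   */
    cursor_space[0].next = cursor_space[p].next;
    return p;
}

void cursor_free(int p) {         /* "free": relink p at the front */
    cursor_space[p].next = cursor_space[0].next;
    cursor_space[0].next = p;
}
```

When the freelist is exhausted, cursor_alloc returns 0, and the assignment through cursor_space[0] is the harmless no-op the text mentions.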

Figure 3.30 Routines: CursorAlloc and CursorFree


Given this, the cursor implementation of linked lists is straightforward.
For consistency, we will implement our lists with a header node. As an
example, in Figure 3.31, if the value of L is 5 and the value of M is 3,
then L represents the list a, b, e, and M represents the list c, d, f.

Figure 3.31 Example of a cursor implementation of linked lists


To write the functions for a cursor implementation of linked lists, we must
pass and return the identical parameters as the pointer implementation.
The routines are straightforward. Figure 3.32 implements a function to
test whether a list is empty. Figure 3.33 implements the test of whether
the current position is the last in a linked list. The function Find in Figure
3.34 returns the position of X in list L. The code to implement deletion is
shown in Figure 3.35. Again, the interface for the cursor implementation
is identical to the pointer implementation. Finally, Figure 3.36 shows a
cursor implementation of Insert.

Figure 3.32 Function to test whether a linked list is empty—cursor
implementation

Figure 3.33 Function to test whether P is last in a linked list—cursor
implementation


Figure 3.34 Find routine—cursor implementation

Figure 3.35 Deletion routine for linked lists—cursor implementation

Figure 3.36 Insertion routine for linked lists—cursor implementation


The rest of the routines are similarly coded. The crucial point is that
these routines follow the adt specification. They take specific arguments
and perform specific operations. The implementation is transparent to
the user. The cursor implementation could be used instead of the linked
list implementation, with virtually no change required in the rest of the
code. If relatively few Finds are performed, the cursor implementation
could be significantly faster because of the lack of memory management
routines.


The freelist represents an interesting data structure in its own right. The
cell that is removed from the freelist is the one that was most recently
placed there by virtue of free. Thus, the last cell placed on the freelist is
the first cell taken off. The data structure that also has this property is
known as a stack.
Stack Model
A stack is a list with the restriction that insertions and deletions can be
performed in only one position, namely, the end of the list, called the top.
The fundamental operations on a stack are Push, which is equivalent to an
insert, and Pop, which deletes the most recently inserted element. The most
recently inserted element can be examined prior to performing a Pop by use
of the Top routine. A Pop or Top on an empty stack is generally considered
an error in the stack adt. On the other hand, running out of space when
performing a Push is an implementation error but not an adt error.
Stacks are sometimes known as lifo (last in, first out) lists. The model
depicted in Figure 3.37 signifies only that Pushes are input operations
and Pops and Tops are output. The usual operations to make empty
stacks and test for emptiness are part of the repertoire, but essentially all
that you can do to a stack is Push and Pop.
Figure 3.38 shows an abstract stack after several operations. The general
model is that there is some element that is at the top of the stack, and it
is the only element that is visible.

Figure 3.37 Stack model: input to a stack is by Push, output is by Pop

Figure 3.38 Stack model: only the top element is accessible


12. (a) (i)


We now give an algorithm to convert a postfix expression into an
expression tree. Since we already have an algorithm to convert infix to
postfix, we can generate expression trees from the two common types of
input. The method we describe strongly resembles the postfix evaluation
algorithm of Section 3.2.3. We read our expression one symbol at a time.
If the symbol is an operand, we create a one-node tree and push a pointer
to it onto a stack. If the symbol is an operator, we pop pointers to two
trees T1 and T2 from the stack (T1 is popped first) and form a new tree
whose root is the operator and whose left and right children point to T2
and T1, respectively. A pointer to this new tree is then pushed onto the
stack.
As an example, suppose the input is
ab + cde + **
The first two symbols are operands, so we create one-node trees and
push pointers to them onto a stack.*

Next, a ‘+‘ is read, so two pointers to trees are popped, a new tree is
formed, and a pointer to it is pushed onto the stack.

*For convenience, we will have the stack grow from left to right in the
diagrams.


Next, c, d, and e are read, and for each a one-node tree is created and a
pointer to the corresponding tree is pushed onto the stack.

Now a ‘ + ‘ is read, so two trees are merged.

Continuing, a ‘*’ is read, so we pop two tree pointers and form a new
tree with a ‘*’ as root.


Finally, the last symbol is read, two trees are merged, and a pointer to the
final tree is left on the stack.
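The algorithm just traced can be sketched as below; single-character symbols and the operators + and * are assumed, as in the example, and error handling is omitted.

```c
#include <stdlib.h>

/* Operands become one-node trees pushed on a stack; an operator pops two
   trees and becomes their parent (T1, popped first, goes on the right). */
typedef struct TreeNode {
    char symbol;
    struct TreeNode *left, *right;
} TreeNode;

TreeNode *build_expression_tree(const char *postfix) {
    TreeNode *stack[64];
    int top = -1;
    for (; *postfix; postfix++) {
        TreeNode *t = malloc(sizeof *t);
        t->symbol = *postfix;
        t->left = t->right = NULL;
        if (*postfix == '+' || *postfix == '*') {
            t->right = stack[top--];   /* T1, popped first, goes right */
            t->left  = stack[top--];   /* T2 goes left */
        }
        stack[++top] = t;
    }
    return stack[top];                 /* pointer to the final tree */
}
```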

12. (a) (ii)


(a) Let S(h) be the minimum number of nodes in an AVL tree T of height
h. The subtrees of an AVL tree with minimum number of nodes must
also have minimum number of nodes. Also, at least one of the left and
right subtrees of T is an AVL tree of height h − 1. Since the height of left
and right subtrees can differ by at most 1, the other subtree must have
height h − 2. Then, we have the following recurrence relation:
         S(h) = S(h − 1) + S(h − 2) + 1. (1)
We also know the base cases: S(0) = 1 and S(1) = 2.
One method to solve a recurrence relation is to guess the solution and
prove it by induction. Observe that the recurrence relation in (1) is very
similar to the recurrence relation of Fibonacci numbers. When we look
at the first few numbers of the sequence S(h), it is not difficult to
guess S(h) = F(h + 3) − 1. Now, let's prove that S(h) = F(h + 3) − 1 by
induction.


Base cases:
S(0) = 1, F(3) = 2. So, S(0) = F(3) - 1.
S(1) = 2, F(4) = 3. So, S(1) = F(4) - 1.
Induction hypothesis:
Assume that the hypothesis S(h) = F(h + 3) - 1 is true for h = 1, · · · , k.
Inductive step:
Prove that it is also true for h = k + 1.
S(k + 1) = S(k ) + S(k - 1) + 1
= F(k + 3) - 1 + F(k + 2) - 1 + 1
= F(k + 4) - 1.
We replace S(k) and S(k - 1) with their equivalence according to the
hypothesis. Then, we get S(k + 1) = F(k + 4) - 1. Hypothesis is also
true for h = k + 1. Thus, it is true for all h.
(b) S(15) = F(18) - 1 = 2584 - 1 = 2583.
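The recurrence and the closed form can also be checked directly; a small C sketch (the function names are ours, not from the text):

```c
/* S(h): minimum number of nodes in an AVL tree of height h,
   computed straight from recurrence (1) with S(0) = 1, S(1) = 2. */
long S(int h)
{
    if (h == 0) return 1;
    if (h == 1) return 2;
    return S(h - 1) + S(h - 2) + 1;
}

/* F(n): Fibonacci numbers with F(1) = F(2) = 1. */
long F(int n)
{
    long a = 1, b = 1;
    while (n-- > 2) {
        long t = a + b;
        a = b;
        b = t;
    }
    return b;
}
```

Checking S(h) == F(h + 3) - 1 for the first several heights confirms the guess; in particular S(15) = F(18) - 1 = 2583.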

12. (b) (i)


DeleteMins are handled in a similar manner as insertions. Finding the
minimum is easy; the hard part is removing it. When the minimum is
removed, a hole is created at the root. Since the heap now becomes
one smaller, it follows that the last element X in the heap must move
somewhere in the heap. If X can be placed in the hole, then we are done.
This is unlikely, so we slide the smaller of the hole’s children into the
hole, thus pushing the hole down one level. We repeat this step until X
can be placed in the hole. Thus, our action is to place X in its correct spot
along a path from the root containing minimum children.
In Figure 6.9 the left figure shows a heap prior to the DeleteMin. After
13 is removed, we must now try to place 31 in the heap. The value 31
cannot be placed in the hole, because this would violate heap order.
Thus, we place the smaller child (14) in the hole, sliding the hole down
one level (see Fig. 6.10). We repeat this again, placing 19 into the hole
and creating a new hole one level deeper. We then place 26 in the hole
and create a new hole on the bottom level. Finally, we are able to place
31 in the hole (Fig. 6.11). This general strategy is known as a percolate
down. We use the same technique as in the Insert routine to avoid the use
of swaps in this routine.
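Since Figure 6.12 itself is not reproduced here, the percolate-down strategy can be sketched as follows. The array layout (elements in positions 1..Size) and the struct are assumptions; the test `Child != H->Size` is the extra check, mentioned below, for a node that has a left child only.

```c
#define MAXHEAP 100

typedef struct {
    int Size;
    int Elements[MAXHEAP];   /* heap occupies positions 1..Size */
} Heap;

/* Remove and return the minimum, percolating the hole down and
   finally dropping the last element into it. */
int DeleteMin(Heap *H)
{
    int i, Child;
    int MinElement  = H->Elements[1];
    int LastElement = H->Elements[H->Size--];

    for (i = 1; i * 2 <= H->Size; i = Child) {
        Child = i * 2;
        /* pick the smaller child; the first test guards the case
           where the node has only one child */
        if (Child != H->Size && H->Elements[Child + 1] < H->Elements[Child])
            Child++;
        if (LastElement > H->Elements[Child])
            H->Elements[i] = H->Elements[Child];   /* slide hole down */
        else
            break;                                 /* hole is in place */
    }
    H->Elements[i] = LastElement;
    return MinElement;
}
```

On a heap consistent with Figures 6.9-6.11 (root 13, last element 31; the exact array below is our assumption), DeleteMin returns 13, slides 14, 19, and 26 up, and places 31 on the bottom level.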


Figure 6.9 Creation of the hole at the root

Figure 6.10 Next two steps in DeleteMin

Figure 6.11 Last two steps in DeleteMin


A frequent implementation error in heaps occurs when there are an even
number of elements in the heap, and the one node that has only one
child is encountered. You must make sure not to assume that there are
always two children, so this usually involves an extra test. In the code
depicted in Figure 6.12, we’ve done this test at line 8. One extremely
tricky solution is always to ensure that your algorithm thinks every node
has two children. Do this by placing a sentinel, of value higher than any
in the heap, at the spot after the heap ends, at the start of each percolate


down when the heap size is even. You should think very carefully before
attempting this, and you must put in a prominent comment if you do use
this technique. Although this eliminates the need to test for the presence
of a right child, you cannot eliminate the requirement that you test when
you reach the bottom, because this would require a sentinel for every
leaf.

Figure 6.12 Function to perform DeleteMin in a binary heap


The worst-case running time for this operation is O(logN). On average,
the element that is placed at the root is percolated almost to the bottom
of the heap (which is the level it came from), so the average running
time is O(logN).


12. (b) (ii)

[Diagrams: the binary search tree after each insertion, as the keys 3, 1, 4, 6, 9, 2 and 5 are inserted into an initially empty tree. The final tree has root 3, with 1 and 4 as its children, 2 under 1, 6 under 4, and 5 and 9 as the children of 6.]
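The insertions shown can be reproduced with the standard recursive Insert routine; a sketch in C (node type and routine names assumed):

```c
#include <stdlib.h>

typedef struct BSTNode {
    int Element;
    struct BSTNode *Left, *Right;
} BSTNode;

/* Standard binary-search-tree insertion: smaller keys go left,
   larger keys go right, duplicates are ignored. */
BSTNode *Insert(int x, BSTNode *t)
{
    if (t == NULL) {
        t = malloc(sizeof *t);
        t->Element = x;
        t->Left = t->Right = NULL;
    } else if (x < t->Element) {
        t->Left = Insert(x, t->Left);
    } else if (x > t->Element) {
        t->Right = Insert(x, t->Right);
    }
    return t;
}

/* Inorder traversal into an array; always yields the keys in
   sorted order, which is an easy correctness check. */
void InorderFill(BSTNode *t, int *out, int *n)
{
    if (t == NULL)
        return;
    InorderFill(t->Left, out, n);
    out[(*n)++] = t->Element;
    InorderFill(t->Right, out, n);
}
```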


13. (a)
Separate chaining hash table (input 4371, 1323, 6173, 4344, 4199, 9679, 1989, with Hash(x) = x mod 10 and new keys inserted at the front of each list):

0:
1: 4371
2:
3: 6173 -> 1323
4: 4344
5:
6:
7:
8:
9: 1989 -> 9679 -> 4199
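A minimal C sketch of the chained insertion (inserting new keys at the front of the list, which is why the chains read 6173 -> 1323 and 1989 -> 9679 -> 4199); the table size and names are illustrative:

```c
#include <stdlib.h>

#define TABLESIZE 10

typedef struct ListNode {
    int Key;
    struct ListNode *Next;
} ListNode;

static ListNode *TheLists[TABLESIZE];   /* one chain per slot */

static int Hash(int x)
{
    return x % TABLESIZE;
}

/* Insert at the front of the chain for slot Hash(x). */
void ChainInsert(int x)
{
    int i = Hash(x);
    ListNode *n = malloc(sizeof *n);
    n->Key = x;
    n->Next = TheLists[i];
    TheLists[i] = n;
}
```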

Open addressing hash table using linear probing.

0 9679
1 4371
2 1989
3 1323
4 6173
5 4344
6
7
8
9 4199


Open addressing hash table using quadratic probing.


0 9679
1 4371
2
3 1323
4 6173
5 4344
6
7
8 1989
9 4199

Open addressing hash table using double hashing, with second hash function Hash2(x) = 7 - (x mod 7).


0
1 4371
2
3 1323
4 6173
5 9679
6
7 4344
8
9 4199
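All three probing strategies can be exercised with one sketch. The key order (4371, 1323, 6173, 4344, 4199, 9679, 1989) and Hash2(x) = 7 - (x mod 7) are the choices consistent with the tables above; note that under double hashing the last key, 1989, cannot be placed at all, which is why it is missing from the last table.

```c
#define TABLESIZE 10
#define EMPTY (-1)

static int Hash(int x)  { return x % TABLESIZE; }
static int Hash2(int x) { return 7 - (x % 7); }

void InitTable(int table[])
{
    int i;
    for (i = 0; i < TABLESIZE; i++)
        table[i] = EMPTY;
}

/* strategy: 0 = linear probing, 1 = quadratic probing,
   2 = double hashing. Returns 1 on success, 0 if the key could
   not be placed after TABLESIZE probes. */
int ProbeInsert(int table[], int x, int strategy)
{
    int i;
    for (i = 0; i < TABLESIZE; i++) {
        int offset = (strategy == 0) ? i
                   : (strategy == 1) ? i * i
                   : i * Hash2(x);
        int pos = (Hash(x) + offset) % TABLESIZE;
        if (table[pos] == EMPTY) {
            table[pos] = x;
            return 1;
        }
    }
    return 0;
}
```

Inserting the seven keys with each strategy reproduces the three tables above exactly.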

13. (b) (i)


Refer Qn. 13 (b) Nov/Dec 2013

13. (b) (ii)


The Unions above were performed rather arbitrarily, by making the
second tree a subtree of the first. A simple improvement is always
to make the smaller tree a subtree of the larger, breaking ties by any
method; we call this approach union-by-size. The three Unions in the
preceding example were all ties, and so we can consider that they were
performed by size. If the next operation were Union(4,5), then the forest


in Figure 8.10 would form. Had the size heuristic not been used, a deeper
tree would have been formed (Fig. 8.11).

Figure 8.10 Result of union-by-size

Figure 8.11 Result of an arbitrary Union

Figure 8.12 Worst-case tree for N = 16


We can prove that if Unions are done by size, the depth of any node is
never more than log N. To see this, note that a node is initially at depth
0. When its depth increases as a result of a Union, it is placed in a tree
that is at least twice as large as before. Thus, its depth can be increased
at most log N times. (We used this argument in the quick-find algorithm
at the end of Section 8.2.) This implies that the running time for a Find
operation is O(log N), and a sequence of M operations takes O(M log N).
The tree in Figure 8.12 shows the worst tree possible after 16 Unions and


is obtained if all Unions are between equal-sized trees (the worst-case
trees are binomial trees, discussed in Chapter 6).
To implement this strategy, we need to keep track of the size of each
tree. Since we are really just using an array, we can have the array entry
of each root contain the negative of the size of its tree. Thus, initially
the array representation of the tree is all -1s (and Figure 8.7 needs to be
changed accordingly). When a Union is performed, check the sizes; the
new size is the sum of the old. Thus, union-by-size is not at all difficult
to implement and requires no extra space. It is also fast, on average. For
virtually all reasonable models, it has been shown that a sequence of
M operations requires O(M) average time if union-by-size is used. This
is because when random Unions are performed, generally very small
(usually one-element) sets are merged with large sets throughout the
algorithm.
An alternative implementation, which also guarantees that all the trees
will have depth at most O(log N), is union-by-height. We keep track of the
height, instead of the size, of each tree and perform Unions by making
the shallow tree a subtree of the deeper tree. This is an easy algorithm,
since the height of a tree increases only when two equally deep trees are
joined (and then the height goes up by one). Thus, union-by-height is a
trivial modification of union-by-size.
The following figures show a tree and its implicit representation for both
union-by-size and union-by-height. The code in Figure 8.13 implements
union-by-height.
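A sketch of union-by-size with the negative-size convention just described (array bounds and names are our assumptions; Figure 8.13's union-by-height is the same idea with heights in place of sizes):

```c
#define NUMSETS 8

/* S[i] holds the parent of i, except at a root, where it holds
   the negative of the tree's size. */
static int S[NUMSETS + 1];

void Initialize(void)
{
    int i;
    for (i = 1; i <= NUMSETS; i++)
        S[i] = -1;                 /* N one-element trees */
}

int Find(int x)
{
    while (S[x] > 0)
        x = S[x];
    return x;
}

/* Attach the smaller tree to the larger; the new size is the sum
   of the old sizes, so no extra space is needed. */
void UnionBySize(int root1, int root2)
{
    if (S[root2] < S[root1]) {     /* root2's tree is larger */
        S[root2] += S[root1];
        S[root1] = root2;
    } else {
        S[root1] += S[root2];
        S[root2] = root1;
    }
}
```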


Figure 8.13 Code for Union-by-height (rank)

14. (a) (i)


(Unweighted paths) A->B, A->C, A->B->G, A->B->E, A->C->D,
A->B->E->F

14. (a) (ii)


(weighted paths) A->C, A->B, A->B->G, A->B->G->E, A->B->G->E->F, A->B->G->E->D.

14. (b) (i)


Refer Qn. 14 (a) Nov/Dec 2013

14. (b) (ii)


A connected undirected graph is biconnected if there are no vertices
whose removal disconnects the rest of the graph. The graph in the
example above is biconnected. If the nodes are computers and the edges
are links, then if any computer goes down, network mail is unaffected,
except, of course, at the down computer. Similarly, if a mass transit
system is biconnected, users always have an alternate route should some
terminal be disrupted.
If a graph is not biconnected, the vertices whose removal would disconnect
the graph are known as articulation points. These nodes are critical in
many applications. The graph in Figure 9.62 is not biconnected: C and
D are articulation points. The removal of C would disconnect G, and the
removal of D would disconnect E and F, from the rest of the graph.


Figure 9.62 A graph with articulation points C and D


Depth-first search provides a linear-time algorithm to find all articulation
points in a connected graph. First, starting at any vertex, we perform a
depth-first search and number the nodes as they are visited. For each
vertex v, we call this preorder number Num(v). Then, for every vertex v
in the depth-first search spanning tree, we compute the lowest-numbered
vertex, which we call Low(v), that is reachable from v by taking zero or
more tree edges and then possibly one back edge (in that order). The depth-
first search tree in Figure 9.63 shows the preorder number first, and then
the lowest-numbered vertex reachable under the rule described above.

Figure 9.63 Depth-first tree for previous graph, with Num and Low


The lowest-numbered vertex reachable by A, B, and C is vertex 1 (A),
because they can all take tree edges to D and then one back edge back to
A. We can efficiently compute Low by performing a postorder traversal
of the depth-first spanning tree. By the definition of Low, Low(v) is the
minimum of
1.  Num(v)
2.  The lowest Num(w) among all back edges (v, w)
3.  The lowest Low(w) among all tree edges (v, w)
The first condition is the option of taking no edges, the second way is
to choose no tree edges and a back edge, and the third way is to choose
some tree edges and possibly a back edge. This third method is succinctly
described with a recursive call. Since we need to evaluate Low for all the
children of v before we can evaluate Low(v), this is a postorder traversal.
For any edge (v, w), we can tell whether it is a tree edge or a back edge
merely by checking Num(v) and Num(w). Thus, it is easy to compute
Low(v): we merely scan down v’s adjacency list, apply the proper rule,
and keep track of the minimum. Doing all the computation takes O(|E|
+ |V|) time.
All that is left to do is to use this information to find articulation points.
The root is an articulation point if and only if it has more than one
child, because if it has two children, removing the root disconnects
nodes in different subtrees, and if it has only one child, removing the
root merely disconnects the root. Any other vertex v is an articulation
point if and only if v has some child w such that Low(w) > Num(v).
Notice that this condition is always satisfied at the root; hence the need
for a special test.
The if part of the proof is clear when we examine the articulation points
that the algorithm determines, namely, C and D. D has a child E, and
Low(E) ≥ Num(D), since both are 4. Thus, there is only one way for E
to get to any node above D, and that is by going through D. Similarly,
C is an articulation point, because Low(G) ≥ Num(C). To prove that
this algorithm is correct, one must show that the only if part of the
assertion is true (that is, this finds all articulation points). We leave
this as an exercise. As a second example, we show (Fig. 9.64) the result
of applying this algorithm on the same graph, starting the depth-first
search at C.


Figure 9.64 Depth-first tree that results if depth-first search starts at C


We close by giving pseudocode to implement this algorithm. We will
assume that the arrays Visited[] (initialized to false), Num[], Low[],
and Parent[] are global to keep the code simple. We will also keep
a global variable called Counter, which is initialized to 1 to assign
the preorder traversal numbers, Num[]. This is not normally good
programming practice, but including all the declarations and passing
the extra parameters would cloud the logic. We also leave out the easily
implemented test for the root.
As we have already stated, this algorithm can be implemented by
performing a preorder traversal to compute Num and then a postorder
traversal to compute Low. A third traversal can be used to check which
vertices satisfy the articulation point criteria. Performing three traversals,
however, would be a waste. The first pass is shown in Figure 9.65.
The second and third passes, which are postorder traversals, can be
implemented by the code in Figure 9.66. Line 8 handles a special case.
If w is adjacent to v, then the recursive call to w will find v adjacent to w.
This is not a back edge, only an edge that has already been considered and
needs to be ignored. Otherwise, the procedure computes the minimum
of the various Low[] and Num[] entries, as specified by the algorithm.
There is no rule that a traversal must be either preorder or postorder. It
is possible to do processing both before and after the recursive calls. The
procedure in


Figure 9.65 Routine to assign Num to vertices (pseudocode)

Figure 9.66 Pseudocode to compute Low and to test for articulation points (test for the root is omitted)

Figure 9.67 combines the two routines AssignNum and AssignLow in a
straightforward manner to produce the procedure FindArt.
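Since the figures are not reproduced here, a compact C version of the combined one-pass FindArt can be sketched. The adjacency matrix, the vertex numbering 0..6 for A..G, and the sample edges in the test are assumptions chosen to match the graph described in the text (C and D are the articulation points); the root test is included rather than omitted.

```c
#define MAXV 7

static int Adj[MAXV][MAXV];
static int Num[MAXV], Low[MAXV], Parent[MAXV], Visited[MAXV], IsArt[MAXV];
static int Counter = 1;

static int Min(int a, int b) { return a < b ? a : b; }

void AddEdge(int v, int w)
{
    Adj[v][w] = Adj[w][v] = 1;
}

/* One depth-first search assigning Num, computing Low, and testing
   the articulation-point criteria on the way back up. */
void FindArt(int v)
{
    int w, children = 0;

    Visited[v] = 1;
    Low[v] = Num[v] = Counter++;                 /* Rule 1 */
    for (w = 0; w < MAXV; w++) {
        if (!Adj[v][w])
            continue;
        if (!Visited[w]) {                       /* tree edge */
            children++;
            Parent[w] = v;
            FindArt(w);
            if (Low[w] >= Num[v] && Parent[v] != -1)
                IsArt[v] = 1;                    /* non-root test */
            Low[v] = Min(Low[v], Low[w]);        /* Rule 3 */
        } else if (Parent[v] != w) {             /* back edge */
            Low[v] = Min(Low[v], Num[w]);        /* Rule 2 */
        }
    }
    if (Parent[v] == -1 && children > 1)         /* special root test */
        IsArt[v] = 1;
}
```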


Figure 9.67 Testing for articulation points in one depth-first search (test
for the root is omitted) (pseudocode)

15. (a) (i)


Refer Qn. 14 (b) Nov/Dec 2013

15. (a) (ii)


Since our algorithms require random numbers, we must have a method
to generate them. Actually, true randomness is virtually impossible to
do on a computer, since these numbers will depend on the algorithm,
and thus cannot possibly be random. Generally, it suffices to produce
pseudorandom numbers, which are numbers that appear to be random.
Random numbers have many known statistical properties; pseudorandom
numbers satisfy most of these properties. Surprisingly, this too is much
easier said than done.
Suppose we only need to flip a coin; thus, we must generate a 0 or 1
randomly. One way to do this is to examine the system clock. The clock
might record time as an integer that counts the number of seconds since
some starting time. We could then use the lowest bit. The problem is
that this does not work well if a sequence of random numbers is needed.
One second is a long time, and the clock might not change at all while
the program is running. Even if the time were recorded in units of


microseconds, if the program were running by itself the sequence of
numbers that would be generated would be far from random, since
the time between calls to the generator would be essentially identical
on every program invocation. We see, then, that what is really needed
is a sequence of random numbers.* These numbers should appear
independent. If a coin is flipped and heads appears, the next coin flip
should still be equally likely to come up heads or tails.
The simplest method to generate random numbers is the linear
congruential generator, which was first described by Lehmer in 1951.
Numbers x1, x2, ... are generated satisfying
xi+1 = Axi mod M
To start the sequence, some value of x0 must be given. This value is
known as the seed. If x0 = 0, then the sequence is far from random, but if
A and M are correctly chosen, then any other 1 ≤ x0 < M is equally valid.
If M is prime, then xi is never 0. As an example, if M = 11, A = 7, and x0
= 1, then the numbers generated are
7,5,2,3,10,4,6,9,8,1,7,5,2,...
Notice that after M - 1 = 10 numbers, the sequence repeats. Thus, this
sequence has a period of M - 1, which is as large as possible (by the
pigeonhole principle). If M is prime, there are always choices of A that
give a full period of M - 1. Some choices of A do not; if A = 5 and x0 = 1,
the sequence has a short period of 5:
5, 3, 4, 9, 1, 5, 3, 4, ...
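Both periods are easy to confirm in a few lines of C (the function names are ours):

```c
/* One step of the linear congruential generator x_{i+1} = A x_i mod M. */
int NextX(int a, int m, int x)
{
    return (a * x) % m;
}

/* Number of steps before the sequence returns to the seed x0. */
int Period(int a, int m, int x0)
{
    int x = NextX(a, m, x0);
    int count = 1;

    while (x != x0) {
        x = NextX(a, m, x);
        count++;
    }
    return count;
}
```

With M = 11, Period(7, 11, 1) is the full period 10, while Period(5, 11, 1) is only 5.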
If M is chosen to be a large, 31-bit prime, the period should be
significantly large for most applications. Lehmer suggested the use of
the 31-bit prime M = 2^31 - 1 = 2,147,483,647. For this prime, A = 48,271
is one of the many values that gives a full-period generator. Its use has
been well studied and is recommended by experts in the field. We will
see later that with random number generators, tinkering usually means
breaking, so one is well advised to stick with this formula until told
otherwise.
This seems like a simple routine to implement. Generally, a global
variable is used to hold the current value in the sequence of x’s. This
is the rare case where a global variable is useful. This global variable
is initialized by some routine. When debugging a program that uses
random numbers, it is probably best to set x0 = 1, so that the same

* We will use random in place of pseudorandom in the rest of this section.


random sequence occurs all the time. When the program seems to work,
either the system clock can be used or the user can be asked to input a
value for the seed.
It is also common to return a random real number in the open interval
(0,1) (0 and 1 are not possible values); this can be done by dividing
by M. From this, a random number in any closed interval [a, b] can
be computed by normalizing. This yields the “obvious” routine in
Figure 10.54 which, unfortunately, works on few machines.

Figure 10.54 Random number generator that does not work


The problem with this routine is that the multiplication could
overflow; although this is not an error, it affects the result and thus
the pseudorandomness. Schrage gave a procedure in which all of the
calculations can be done on a 32-bit machine without overflow. We
compute the quotient and remainder of M/A and define these as Q and
R, respectively. In our case, Q = 44,488, R = 3,399, and R < Q. We have

    Axi mod M = A(xi mod Q) - R⌊xi/Q⌋ + Mδ(xi)

where δ(xi) is either 0 or 1.


A quick check shows that because R < Q, all the remaining terms can
be calculated without overflow (this is one of the reasons for choosing
A = 48,271). Furthermore, δ(xi) = 1 only if the remaining terms evaluate
to less than zero. Thus δ(xi) does not need to be explicitly computed
but can be determined by a simple test. This leads to the program in
Figure 10.55.

Figure 10.55 Random number generator that works on 32-bit machines
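The figure itself is not reproduced here; the following sketch of Schrage's method follows the description above (the global Seed and the helper names are our assumptions):

```c
static long Seed = 1;

#define A 48271L
#define M 2147483647L          /* 2^31 - 1 */
#define Q 44488L               /* M / A */
#define R 3399L                /* M % A */

/* Advance the sequence without intermediate overflow on a 32-bit
   machine: because R < Q, A * (Seed % Q) stays below M. The sign
   test stands in for the delta term, which is 0 or 1. */
double Random(void)
{
    long TmpSeed = A * (Seed % Q) - R * (Seed / Q);

    if (TmpSeed >= 0)
        Seed = TmpSeed;
    else
        Seed = TmpSeed + M;
    return (double)Seed / M;
}

void SetSeed(long InitVal)
{
    Seed = InitVal;
}

long CurrentSeed(void)
{
    return Seed;
}
```

Starting from seed 1, the first two seeds produced are 48,271 and 182,605,794, and every returned value lies strictly between 0 and 1.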


This program works as long as INT_MAX ≥ 2^31 - 1. One might be
tempted to assume that all machines have a random number generator at


least as good as the one in Figure 10.55 in their standard library. Sadly,
this is not true. Many libraries have generators based on the function
xi+1 = (Axi + C) mod 2^B
where B is chosen to match the number of bits in the machine’s integer,
and C is odd. These libraries also return xi, instead of a value between 0
and 1. Unfortunately, these generators always produce values of xi that
alternate between even and odd–hardly a desirable property. Indeed,
the lower k bits cycle with period 2^k (at best). Many other random
number generators have much smaller cycles than the one provided in
Figure 10.55. These are not suitable for the case where long sequences
of random numbers are needed. Finally, it may seem that we can get a
better random number generator by adding a constant to the equation.
For instance, it seems that
xi+1 = (48,271xi + 1) mod (2^31 - 1)
would somehow be even more random. This illustrates how fragile these
generators are.
[48,271(179,424,105) + 1] mod (2^31 - 1) = 179,424,105, so if the seed is
179,424,105, the generator gets stuck in a cycle of period 1.

15. (b) (i)


The analysis required to estimate the resource use of an algorithm is
generally a theoretical issue, and therefore a formal framework is
required. We begin with some mathematical definitions.
Definition: T(N) = O(f(N)) if there are positive constants c and n0 such
that T(N) ≤ cf(N) when N ≥ n0.
Definition: T(N) = Ω(g(N)) if there are positive constants c and n0 such
that T(N) ≥ cg(N) when N ≥ n0.
Definition: T(N) = Θ(h(N)) if and only if T(N) = O(h(N)) and T(N) = Ω(h(N)).
Definition: T(N) = o(p(N)) if T(N) = O(p(N)) and T(N) ≠ Θ(p(N)).
The idea of these definitions is to establish a relative order among
functions. Given two functions, there are usually points where one
function is smaller than the other function, so it does not make sense to
claim, for instance, f(N) < g(N). Thus, we compare their relative rates of
growth. When we apply this to the analysis of algorithms, we shall see
why this is the important measure.


Although 1,000N is larger than N^2 for small values of N, N^2 grows at a
faster rate, and thus N^2 will eventually be the larger function. The turning
point is N = 1,000 in this case. The first definition says that eventually
there is some point n0 past which c · f(N) is always at least as large as
T(N), so that if constant factors are ignored, f(N) is at least as big as
T(N). In our case, we have T(N) = 1,000N, f(N) = N^2, n0 = 1,000, and c =
1. We could also use n0 = 10 and c = 100. Thus, we can say that 1,000N
= O(N^2) (order N-squared). This notation is known as Big-Oh notation.
Frequently, instead of saying “order...,” one says “Big-Oh....”
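The turning point can be verified mechanically; a small C sketch of the claim (the names are ours):

```c
/* T(N) = 1,000N and f(N) = N^2 from the example above. */
long T(long n) { return 1000 * n; }
long f(long n) { return n * n; }

/* Smallest N at which f(N) catches up with T(N). */
long TurningPoint(void)
{
    long n = 1;

    while (f(n) < T(n))
        n++;
    return n;
}
```

TurningPoint() returns 1,000, and for every N ≥ 1,000 we have f(N) ≥ T(N), which exhibits the constants c = 1 and n0 = 1,000 from the definition.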
If we use the traditional inequality operators to compare growth rates,
then the first definition says that the growth rate of T(N) is less than
or equal to (≤) that of f(N). The second definition, T(N) = Ω(g(N))
(pronounced “omega”), says that the growth rate of T(N) is greater
than or equal to (≥) that of g(N). The third definition, T(N) = Θ(h(N))
(pronounced “theta”), says that the growth rate of T(N) equals (=) the
growth rate of h(N). The last definition, T(N) = o(p(N)) (pronounced
“little-oh”), says that the growth rate of T(N) is less than (<) the growth
rate of p(N). This is different from Big-Oh, because Big-Oh allows the
possibility that the growth rates are the same.
To prove that some function T(N) = O(f(N)), we usually do not apply
these definitions formally but instead use a repertoire of known results.
In general, this means that a proof (or determination that the assumption
is incorrect) is a very simple calculation and should not involve calculus,
except in extraordinary circumstances (not likely to occur in an algorithm
analysis).
When we say that T(N) = O(f(N)), we are guaranteeing that the function
T(N) grows at a rate no faster than f(N); thus f(N) is an upper bound on
T(N). Since this implies that f(N) = Ω(T(N)), we say that T(N) is a lower
bound on f(N).
As an example, N^3 grows faster than N^2, so we can say that N^2 = O(N^3)
or N^3 = Ω(N^2). f(N) = N^2 and g(N) = 2N^2 grow at the same rate, so both
f(N) = O(g(N)) and f(N) = Ω(g(N)) are true. When two functions grow
at the same rate, then the decision of whether or not to signify this with
Θ() can depend on the particular context. Intuitively, if g(N) = 2N^2, then
g(N) = O(N^4), g(N) = O(N^3), and g(N) = O(N^2) are all technically correct,
but the last option is the best answer. Writing g(N) = Θ(N^2) says not only
that g(N) = O(N^2), but also that the result is as good (tight) as possible.
The important things to know are


RULE 1:
If T1(N) = O(f(N)) and T2(N) = O(g(N)), then
(a) T1(N) + T2(N) = max(O(f(N)), O(g(N))),
(b) T1(N) * T2(N) = O(f(N) * g(N)).

Figure 2.1 Typical growth rates


RULE 2:
If T(N) is a polynomial of degree k, then T(N) = Θ(N^k).
RULE 3:
log^k N = O(N) for any constant k. This tells us that logarithms grow very
slowly.
This information is sufficient to arrange most of the common functions
by growth rate (see Figure 2.1).
Several points are in order. First, it is very bad style to include constants
or low-order terms inside a Big-Oh. Do not say T(N) = O(2N2) or T(N)
= O(N2 + N). In both cases, the correct form is T(N) = O(N2). This
means that in any analysis that will require a Big-Oh answer, all sorts of
shortcuts are possible. Lower-order terms can generally be ignored, and
constants can be thrown away. Considerably less precision is required in
these cases.
Second, we can always determine the relative growth rates of two
functions f(N) and g(N) by computing limN→∞ f(N)/g(N), using L’Hôpital’s
rule if necessary.* The limit can have four possible values:
* L’Hôpital’s rule states that if limN→∞ f(N) = ∞ and limN→∞ g(N) = ∞, then
limN→∞ f(N)/g(N) = limN→∞ f′(N)/g′(N), where f′(N) and g′(N) are the derivatives
of f(N) and g(N), respectively.


•• The limit is 0: This means that f(N) = o(g(N)).


•• The limit is c ≠ 0: This means that f(N) = Θ(g(N)).
•• The limit is ∞: This means that g(N) = o(f(N)).
•• The limit oscillates: There is no relation (this will not happen in our
context).

Using this method almost always amounts to overkill. Usually the
relation between f(N) and g(N) can be derived by simple algebra. For
instance, if f(N) = N log N and g(N) = N^1.5, then to decide which of f(N)
and g(N) grows faster, one really needs to determine which of log N and
N^0.5 grows faster. This is like determining which of log^2 N or N grows
faster. This is a simple problem, because it is already known that N
grows faster than any power of a log. Thus, g(N) grows faster than f(N).
One stylistic note: It is bad to say f(N) ≤ O(g(N)), because the inequality
is implied by the definition. It is wrong to write f(N) ≥ O(g(N)), which
does not make sense.

15. (b) (ii)


Among all the problems known to be in NP, there is a subset, known as
the NP-complete problems, which contains the hardest. An NP-complete
problem has the property that any problem in NP can be polynomially
reduced to it.
A problem P1 can be reduced to P2 as follows: Provide a mapping so
that any instance of P1 can be transformed to an instance of P2. Solve P2,
and then map the answer back to the original. As an example, numbers
are entered into a pocket calculator in decimal. The decimal numbers
are converted to binary, and all calculations are performed in binary.
Then the final answer is converted back to decimal for display. For P1
to be polynomially reducible to P2, all the work associated with the
transformations must be performed in polynomial time.
The reason that NP-complete problems are the hardest NP problems is
that a problem that is NP-complete can essentially be used as a subroutine
for any problem in NP, with only a polynomial amount of overhead.
Thus, if any NP-complete problem has a polynomial-time solution, then
every problem in NP must have a polynomial-time solution. This makes
the NP-complete problems the hardest of all NP problems.
Suppose we have an NP-complete problem P1. Suppose P2 is known to
be in NP. Suppose further that P1 polynomially reduces to P2, so that we
can solve P1 by using P2 with only a polynomial time penalty. Since P1


is NP-complete, every problem in NP polynomially reduces to P1. By
applying the closure property of polynomials, we see that every problem
in NP is polynomially reducible to P2: We reduce the problem to P1 and
in NP is polynomially reducible to P2: We reduce the problem to P1 and
then reduce P1 to P2. Thus, P2 is NP-complete.
As an example, suppose that we already know that the Hamiltonian cycle
problem is NP-complete. The traveling salesman problem is as follows.
TRAVELING SALESMAN PROBLEM:
Given a complete graph G = (V, E), with edge costs, and an integer K, is
there a simple cycle that visits all vertices and has total cost ≤ K?
The problem is different from the Hamiltonian cycle problem, because
all |V|(|V| - 1)/2 edges are present and the graph is weighted. This problem
has many important applications. For instance, printed circuit boards
need to have holes punched so that chips, resistors, and other electronic
components can be placed.
This is done mechanically. Punching the hole is a quick operation; the
time-consuming step is positioning the hole puncher. The time required
for positioning depends on the distance traveled from hole to hole. Since
we would like to punch every hole (and then return to the start for the
next board), and minimize the total amount of time spent traveling, what
we have is a traveling salesman problem.
The traveling salesman problem is NP-complete. It is easy to see that a
solution can be checked in polynomial time, so it is certainly in NP. To
show that it is NP-complete, we polynomially reduce the Hamiltonian
cycle problem to it. To do this we construct a new graph G′. G′ has the
same vertices as G. For G′, each edge (v, w) has a weight of 1 if (v,w)
∈G, and 2 otherwise. We choose K = |V|. See Figure 9.78.
It is easy to verify that G has a Hamiltonian cycle if and only if G′ has a
Traveling Salesman tour of total weight |V|.
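The construction of G′ is mechanical; here is a hedged C sketch on a small assumed example (a 4-vertex cycle, not a graph from the text), showing that a Hamiltonian cycle of G becomes a tour of weight exactly |V| while a non-Hamiltonian ordering costs more:

```c
#define V 4

/* Build the complete weighted graph G': weight 1 if (v,w) is an
   edge of G, weight 2 otherwise; the bound is K = |V|. */
void BuildGPrime(int g[V][V], int w[V][V])
{
    int i, j;
    for (i = 0; i < V; i++)
        for (j = 0; j < V; j++)
            w[i][j] = (i == j) ? 0 : (g[i][j] ? 1 : 2);
}

/* Total weight of the simple cycle visiting tour[0..V-1] in order. */
int TourCost(int w[V][V], const int tour[V])
{
    int cost = 0, i;
    for (i = 0; i < V; i++)
        cost += w[tour[i]][tour[(i + 1) % V]];
    return cost;
}
```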
There is now a long list of problems known to be NP-complete. To prove
that some new problem is NP-complete, it must be shown to be in NP,
and then an appropriate NP-complete problem must be transformed into
it. Although the transformation to a traveling salesman problem was
rather straightforward, most transformations are actually quite involved
and require some tricky constructions. Generally, several different NP-
complete problems are considered before the problem that actually
provides the reduction. As we are only interested in the general ideas,
we will not show any more transformations; the interested reader can
consult the references.


Figure 9.78 Hamiltonian cycle problem transformed to traveling salesman problem
The alert reader may be wondering how the first NP-complete problem
was actually proven to be NP-complete. Since proving that a problem
is NP-complete requires transforming it from another NP-complete
problem, there must be some NP-complete problem for which this
strategy will not work. The first problem that was proven to be NP-
complete was the satisfiability problem. The satisfiability problem takes
as input a boolean expression and asks whether the expression has an
assignment to the variables that gives a value of 1.
Satisfiability is certainly in NP, since it is easy to evaluate a boolean
expression and check whether the result is true. In 1971, Cook showed
that satisfiability was NP-complete by directly proving that all problems
that are in NP could be transformed to satisfiability. To do this, he used
the one known fact about every problem in NP: Every problem in NP
can be solved in polynomial time by a nondeterministic computer. The
formal model for a computer is known as a Turing machine. Cook
showed how the actions of this machine could be simulated by an
extremely complicated and long, but still polynomial, boolean formula.
This boolean formula would be true if and only if the program which
was being run by the Turing machine produced a “yes” answer for its
input.
Once satisfiability was shown to be NP-complete, a host of new NP-
complete problems, including some of the most classic problems, were
also shown to be NP-complete.
In addition to the satisfiability, Hamiltonian circuit, traveling salesman,
and longest-path problems, which we have already examined, some
of the more well-known NP-complete problems which we have not


discussed are bin packing, knapsack, graph coloring, and clique. The
list is quite extensive and includes problems from operating systems
(scheduling and security), database systems, operations research, logic,
and especially graph theory.



B.E./B.Tech. Degree Examination,
Nov/Dec 2012
Third Semester
Computer Science and Engineering
(Common to Information Technology)

PROGRAMMING AND DATA STRUCTURES


(Regulation 2008/2010)
Time: Three hours        Maximum: 100 marks
Answer ALL questions

PART A (10 × 2 = 20 marks)

1. Write any four applications of stack.

2. What is abstract data type? Give any two examples.

3. Define binary tree and give the binary tree node structure.

4. Draw an expression tree for the expression:
(a + b * c) + ((d * e + f) * g)

5. How does the AVL tree differ from binary search tree?

6. What are the applications of priority queue?

7. What is meant by primary clustering?

8. Define open addressing hashing.

9. What is a residual graph?

10. Define event-node graph.


Part B (5 × 16 = 80 marks)

11. (a) (i) Write an algorithm to merge two sorted linked lists into a single
sorted list (8)
(ii) Explain the operation of inserting an element at the front, middle
and at the rear in a doubly linked list. (8)
Or
(b) (i) Write and explain an algorithm to display the contents of a stack
with an example. (8)
(ii) Briefly explain the operations of queue with examples. (8)
12. (a) (i) Explain the process of finding the minimum and maximum
elements of binary search tree.  (8)
(ii) Explain the process of displaying the nodes of a binary tree at a
particular level. (8)
Or
(b) (i) Write a function to insert a node into a binary search tree and
explain with an example. (8)
(ii) Explain the operations of threaded binary tree. (8)
13. (a) (i) Explain the insert and delete operations of heap with examples.
 (8)
(ii) Explain how double rotation fixes the problem found in the
single rotation of AVL tree with an example. (8)
Or
(b) (i) Describe the operations of B-tree using 2-3 tree. (8)
(ii) Explain the operations which are performed in splay tree. (8)
14. (a) (i) Briefly describe linear probing and quadratic probing collision
resolution strategies. (8)
(ii) Discuss about the two permissible operations in the dynamic
equivalence problem. (8)
Or


(b) (i) Explain the algorithms which are associated with path
compression. (8)
(ii) List the advantages and disadvantages of various collision
resolution strategies. (8)
15. (a) (i) Explain the Kruskal’s algorithm to find out minimum cost
spanning tree with an example. (8)
(ii) Explain the Dijkstra’s algorithm for shortest path problem with
an example. (8)
Or
(b) (i) Explain the topological sort with an example. (8)
(ii) Give an account of Euler circuits in the applications of graphs.
 (8)


Solutions
Part A

1. Applications of Stack:
•• Evaluation of Postfix Expression
•• Reverse String
•• Stack Frames
•• Recursion

2. The programmer defines everything related to the data type: how the
data values are stored and the operations that can be carried out on
it, so that the custom type behaves like a built-in type and creates no
confusion while coding the program. Such custom data types are called
abstract data types.
For example, if the programmer wants to define date data type which
is not available in C/C++, s/he can create it using struct or class. Only
a declaration is not enough; it is also necessary to check whether the
date is valid or invalid. This can be achieved by checking using different
conditions. The following program explains the creation of date data
type:
Example
# include <stdio.h>
# include <conio.h>
struct date
{
int dd;
int mm;
int yy;
};
void main()
{
struct date d; /* date is abstract data type */
clrscr();
printf("Enter date(dd mm yy) :");
scanf("%d %d %d",&d.dd,&d.mm, &d.yy);
printf("Date%d-%d-%d",d.dd,d.mm,d.yy);
}


OUTPUT
Enter date(dd/mm/yy): 25 10 2011
Date 25-10-2011

Explanation:
In this program, the date data type is declared using the struct keyword.
It contains three integer variables dd, mm and yy to store the date,
month and year. The date, month and year are entered through a scanf
statement and displayed using a printf statement.

3. A binary tree is a finite set of data elements. A tree is binary if each
node has a maximum of two branches. The set is either empty or holds a
single element called the root along with two disjoint binary trees called
the left sub-tree and the right sub-tree, i.e. in a binary tree the maximum
degree of any node is two.

4. Figure 4.14 shows an example of an expression tree. The leaves of an


expression tree are operands, such as constants or variable names, and
the other nodes contain operators. This particular tree happens to be
binary, because all of the operations are binary, and although this is the
simplest case, it is possible for nodes to have more than two children. It
is also possible for a node to have only one child, as is the case with the
unary minus operator. We can evaluate an expression tree, T, by applying
the operator at the root to the values obtained by recursively evaluating
the left and right subtrees. In our example, the left subtree evaluates to
a + (b * c) and the right subtree evaluates to ((d * e) + f) * g. The entire
tree therefore represents (a + b * c) + ((d * e + f) * g)

Figure 4.14 Expression tree for (a + b * c) + ((d * e + f) * g)


5. Binary Search Tree


A binary search tree is also called as binary sorted tree. Binary search
tree is either empty or each node N of tree satisfies the following
property:
1. The key value in the left child is not more than the value of root.
2. The key value in the right child is more than or identical to the value
of root.
3. All the sub-trees, i.e. left and right sub-trees follow the two rules
mentioned above.
AVL Tree
The AVL tree is a binary search tree. It is also known as height balanced
tree. It is used to minimize the search time by keeping every node of the
tree completely balanced in terms of height. The balance factor plays an
important role in the insertion and deletion of elements in an AVL tree.
The balance factor decides whether all the nodes of a tree are completely
balanced.

6. Applications of Priority Queue:


•• Round Robin
•• Simulation

7. In linear probing, any key that hashes into a cluster requires several
attempts to resolve the collision, and the key is then added to the cluster,
making it grow further. This effect is called primary clustering.

8. “Open addressing” means that the location of an item is not determined
solely by its hash value; on a collision, alternative cells are probed until
an empty one is found. This method is also called closed hashing.

9. A residual graph, Gr, shows for each edge how much more flow can be
added. This can be calculated by subtracting the current flow from the
capacity of each edge.

10. In the dual graph the edges represent the activities, and the vertices
represent the commencement and termination of activities. For this
reason, the dual graph is called an event-node graph.


Part B

11. (a) (i)


Merge Sort

Merging is a process in which two lists are combined to form a new list,
the sorted list. Normally the individual lists are sorted first and then
merged. The program below takes a simpler, exhaustive approach.
Consider two arrays containing integer elements. The absolute values of
all the elements of both arrays are added up; the result, sum, bounds the
magnitude of every element. Starting from -sum, a candidate value is
compared with the elements of the two arrays; when there is no match,
the candidate is incremented, so matches are found in ascending order.
This process is repeated till the candidate value reaches +sum, by which
point every element has been copied, in sorted order, to the result array.
A program on this idea is discussed below.
Example: Write a program to create two arrays containing integer
elements. Sort and store the elements of both the arrays in the third list.
# include <stdio.h>
# include <conio.h>
# include <math.h>
# include <stdlib.h>
void main()
{
int m,n,p,sum=0;
int a[5],b[5],c[10];
clrscr();
printf("\n Enter elements for first list : ");
for(m=0;m<5;m++)
{
scanf("%d",&a[m]);
if(a[m]==0)
m--;
sum=sum+abs(a[m]);
}


printf("\n Enter elements for second list : ");


for(n=0;n<5;n++)
{
scanf("%d",&b[n]);
if(b[n]==0) n--;
sum=sum+abs(b[n]);
}
p=n=m=0;
m=m-sum;
while(m<sum)
{
for(n=0;n<5;n++)
{
if(m==a[n] || m==b[n])
c[p++]=m;
if(m==a[n] && m==b[n])
c[p++]=m;
}
m++;
}
puts(" Merged sorted list : ");
for(n=0;n<10;++n)
printf(" %d ",c[n]);
}
Output:
Enter elements for first list: 1 4 2 6 7
Enter elements for second list: 9 5 3 0 8
Merged sorted list :

0 1 2 3 4 5 6 7 8 9

Explanation:
In the above program, three arrays a[], b[] and c[] are declared. Using
for loops, elements in a[] and b[] are entered. The sum of all the ten
elements entered in both the arrays is stored in the variable sum. The
while and nested for loops check the corresponding elements of both
the lists:
a) If one of the corresponding elements is the same, that element is
stored in the c[] array.
b) If both the corresponding elements are same, they are stored in
successive locations in the c[].
The value of m is initially zero. The sum obtained is again subtracted
from m. This is because negative numbers have to be considered while
sorting. For example, the value of sum = 20. In real execution, the sum
may be different depending on the integers entered.
Value of m would be m = –20 (as m = m – 20 when m = 0)
Thus, in the while loop, the value of m varies from –20 to 20. All the
entered elements are covered in this range. The value of m changes from
–20 to 20, i.e. –20, –19 up to +20 in ascending order. Thus, the same
order is applied while saving element in c[].

11. (a) (ii)


Doubly Linked List
The singly linked list and circular linked list contain only one pointer
field. Every node holds an address of next node. Thus, the singly linked
list can traverse only in one direction, i.e. forward. This limitation can be
overcome by doubly linked list. Each node of the doubly linked list has
two pointer fields and holds the address of predecessor and successor
elements. These pointers enable bi-directional traversing, i.e. traversing
the list in backward and forward direction. In several applications, it
is very essential to traverse the list in backward direction. The pointer
pointing to the predecessor node is called left link and pointer pointing
to successor is called right link. A list having such type of node is called
doubly linked list. The pointer field of the first and last node holds
NULL value, i.e. the beginning and end of the list can be identified by
NULL value. The structure of the node is as shown in Fig. 6.30.
Previous Link | Data | Next Link

Figure 6.30 Structure of Node


The structure of node would be as follows:


struct node
{
int number;
struct node *llink;
struct node *rlink;
};

The above structure can be represented by using Fig. 6.31.



Figure 6.31 Doubly Linked List

Insertion and Deletion with Doubly Linked List


We are aware of insertion process in which an element can be inserted
at beginning, at end and at the specified position. Figure 6.32 shows the
insertion in the doubly linked list.
Head → 8 ↔ 3 ↔ 7   (new node: 5)

Figure 6.32 Inserting a Node at the Beginning

We know that the head node of the doubly linked list contains NULL
value. When a new node is to be inserted at the beginning, the address of
the head node is assigned to the new node. The previous pointer of the
node is assigned a NULL value. The arrow ↔ indicates that the node has
both previous and next pointer.


When a node is inserted at the end, the next pointer of the new node is
assigned a NULL value and the previous pointer of the node is assigned
the address of last node. Figure 6.33 describes the insertion at the end.
3 ↔ 9 ↔ 4

Figure 6.33 Inserting a Node at the End

In the deletion operation as shown in the Fig. 6.34 when a node is


deleted, the memory allocated to that node is released and the previous
and next nodes of that node are linked.
Head → 3 ↔ 9 ↔ 4

Figure 6.34 Deleting a Node from Beginning

When a node is to be deleted from the beginning of the list, the head
pointer is made to point to the second node, because after deletion of the
first node the second node becomes the first. The symbol X indicates the
link that will be destroyed. This is shown in Fig. 6.35.
Head → 3 ↔ 9 ↔ 4

Figure 6.35 Deleting a Node from the Beginning

11. (b) (i)


When an item is added to a stack, it is pushed onto the stack. Given a
stack st and an item I, performing the operation push(st, I) adds the item
I to the top of the stack st. The push operation is applicable to any stack.
Algorithm for push operation
Variables
Size → Total no of elements


tos → Top of the stack.


val → Information which you want to insert in stack.
Stack[] → Array of stack.
Step 1 [Check that the stack is Full]
If tos = size-1 then
(print message)
Stack is full.
Return.
Step 2 [else]
[Increment tos by 1]
tos ← tos + 1
Step 3 [Input the element to stack]
Stack[tos] ← val
Step 4 [Stop]
Pop operation:
The pop operation removes the topmost item. After removal of the
topmost item, the new value of the pointer top becomes the previous
value of top, that is top = top - 1, and the freed position becomes
available as free space.
Algorithm:
Step 1 [Check whether the stack is empty]
if tos = -1 then
(print message)
“Stack is Empty”
return
Step 2 [Else]
[Read the top element and decrement tos by 1]
val ← Stack[tos]
tos ← tos - 1
return val
Step 3 [Stop]


11. (b) (ii)


Algorithm for addition and deletion operations on a circular queue:
procedure ADDQ(item, Q, n, front, rear)
rear ← (rear + 1) mod n
if front = rear then call QUEUE-FULL
Q(rear) ← item
end ADDQ
procedure DELETEQ(item, Q, n, front, rear)
if front = rear then call QUEUE-EMPTY
front ← (front + 1) mod n
item ← Q(front)
end DELETEQ

12. (a) (i)


Priority Queues
Many algorithms process items according to a particular order. For
example, suppose you have to schedule a list of jobs given the deadline
by which each job must be performed or else its importance relative
to the other jobs. Scheduling jobs requires sorting them by time or
importance, and then performing them in this sorted order.  
Priority queues provide extra flexibility over sorting, which is required
because jobs often enter the system at arbitrary intervals. It is much
more cost-effective to insert a new job into a priority queue than to re-
sort everything. Also, the need to perform certain jobs may vanish before
they are executed, meaning that they must be removed from the queue.
The basic priority queue supports three primary operations:
•• Insert(Q,x): Given an item x with key k, insert it into the priority
queue Q.
•• Find-Minimum(Q) or Find-Maximum(Q): Return a pointer to the
item whose key value is smaller (larger) than any other key in the
priority queue Q.
•• Delete-Minimum(Q) or Delete-Maximum(Q): Remove the item
from the priority queue Q whose key is minimum (maximum).


Figure: The maximum and minimum element in a binary search tree


All three of these priority queue operations can be implemented in
O(log n) time by representing the heap with a binary search tree.
Implementing the find-minimum operation requires knowing where the
minimum element in the tree is. By definition, the smallest key must
reside in the left subtree of the root, since all keys in the left subtree
have values less than that of the root. Therefore, as shown in the figure,
the minimum element must be the leftmost descendant of the root.
Similarly, the maximum element must be the rightmost descendant of
the root.
Find-Minimum(x)
while left[x] ≠ NIL
do x = left[x]
return x
Find-Maximum(x)
while right[x] ≠ NIL
do x = right[x]
return x
Repeatedly traversing left (or right) pointers until we hit a leaf takes time
proportional to the height of the tree, or O(lg n) if the tree is balanced.
The insert operation can be implemented exactly as binary tree insertion.
Delete-Min can be implemented by finding the minimum element and
then using standard binary tree deletion. It follows that each of the
operations can be performed in O(lgn) time.
Priority queues are very useful data structures. Indeed, they are the hero
of the war story described in Section . A complete set of priority
queue implementations is presented in Section .


12. (a) (ii)


The item which is to be searched is compared with the root node. If it
is less than the root node, the search continues in the left sub-tree;
otherwise it continues in the right sub-tree. The process is repeated till
the item is found. A program based on this idea is given below.
Example: Write a program to search an element from the binary
tree.
# include <stdio.h>
# include <conio.h>
# include <stdlib.h>
struct tree
{
long data;
struct tree *left;
struct tree *right;
};
int sn;
struct tree *bt=NULL;
struct tree *insert(struct tree*bt,long no);
void search(struct tree *bt, long sn);
void main()
{
long no;
clrscr();
puts("Enter the nodes of tree in preorder: and 0 to quit");
scanf("%ld",&no);
while(no!=0)
{
bt= insert(bt,no);
scanf("%ld",&no);
}
printf("\nEnter the number to search:-");
scanf("%d",&sn);
search(bt,sn);
}
struct tree*insert(struct tree*bt,long no)
{
if(bt==NULL)
{
bt=(struct tree*) malloc(sizeof(struct tree));
bt->left=bt->right=NULL;
bt->data=no;


}
else
{
if(no<bt->data)
bt->left=insert(bt->left,no);
else
if(no>bt->data)
bt->right=insert(bt->right,no);
else
if(no==bt->data)
{
puts("Duplicates nodes: Program exited");
exit(0);
}
}
return(bt);
}
void search(struct tree*bt, long fn)
{
if(bt==NULL)
puts("The number does not exist");
else
if(fn==bt->data)
printf("The number %ld is present in tree",fn);
else
if(fn<bt->data)
search(bt->left,fn);
else
search(bt->right, fn);
}
OUTPUT
Enter the nodes of tree in preorder: and 0 to quit
3 5 11 17 34 0
Enter the number to search:-17
The number 17 is present in tree
Enter the nodes of tree in preorder: and 0 to quit
3 5 11 17 34 0
Enter the number to search:-4
The number does not exist

Explanation:
This program contains struct tree which is used to store the binary
tree. The insert() function inserts the nodes into the binary tree. The
search() function searches for the number in the tree. The fn variable
stores the number which the user wants to find. The search function
first compares fn with the root node. If the value of fn is less than the
root node then the function searches the number in the left sub-tree, else
it finds the number in the right sub-tree.

12. (b) (i)


Insertion operation in Binary Search Tree
SearchTree Insert(ElementType X, SearchTree T)
{
if (T = = NULL)
{
T = malloc (size of (struct TreeNode));
if (T = = NULL)
FatalError (“Out of Space”);
else
{
T → Element = X;
T → Left = T → Right = NULL;
}
}
else
if (X < T → Element)
T → Left = Insert (X, T → Left);
else
if (X > T → Element)
T → Right = Insert (X, T → Right);
Return T;
}
e.g.,: 6, 2, 8, 1, 4, 3, 5


To insert X into tree T, proceed down the tree. If X is found, do nothing.


Otherwise insert X at the last spot on the path traversed.
6

2 8

1 4

To insert 5, we traverse the tree as though a Find were occurring. At the
node with key 4, we go right and insert 5 there, as that is the correct spot.
6

2 8

1 4

3 5

Deletion operation in a Binary search Tree


SearchTree Delete (ElementType X, SearchTree T)
{
Position TmpCell;
if (T = = Null)
Error (“Element not found”);
else
if (X<T→Element)
T→Left = Delete (X,T→Left);
else
if (X>T→Element)
T→Right = Delete (X,T→Right);
else
if (T→Left && T→Right)


{
TmpCell = FindMin(T→Right);
T→Element = TmpCell→Element;
T→Right = Delete (T→Element, T→Right);
}
else
{
TmpCell = T;
if (T→Left = = Null)
T = T→Right;
else
if (T→Right = = Null)
T = T→Left;
Free (TmpCell);
}
return T;
}

12. (b) (ii)


While studying the linked representation of a binary tree, it is observed
that the number of null pointers is more than the number of non-null
pointers; the left and right fields of the leaf nodes account for most of
these null pointer fields. These null pointer fields can be used to keep
other information that helps the operations on a binary tree: a null
pointer field is made to store the address of another node in the tree, and
such a pointer is called a thread. A threaded binary tree is one in which
these pointers run from the null pointer fields to other nodes in the
binary tree. Consider the following tree:
In Figure 1, the binary tree has 7 null pointers, shown with dotted lines.
There are 12 pointer fields in total, out of which 5 are actual node
pointers, i.e. non-null pointers (solid lines). For any binary tree having
n nodes there will be (n + 1) null pointers out of 2n total pointers. All
the null pointers can be replaced with an appropriate pointer


value known as thread. The binary tree can be threaded according to


appropriate traversal method. The null pointer can be replaced as follows:
Threaded binary tree can be traversed by any one of the three traversals,
i.e. preorder, postorder and inorder. Further, in inorder threading there
may be one-way inorder threading or two–way inorder threading. In one
way inorder threading the right child of the node would point to the
next node in the sequence of the inorder traversal. Such a threading tree
is called right in threaded binary tree. Also, the left child of the node
would point to the previous node in the sequence of inorder traversal.
This type of tree is called as left in threaded binary tree. In case both the
children of the nodes point to other nodes then such a tree is called as
fully threaded binary tree.

Figure 1 A Binary Tree with Null Pointers


Figure 1 describes the working of right in threaded binary tree in which
one can see that the right child of the node points to the node in the
sequence of the inorder traversal method.

Figure 9.51 Right in Threaded Binary Tree


The inorder traversal shown in the Fig. 9.51 will be as M-K-N-J-O-L.


Two dangling pointers are shown to point to a header node, as shown
below:
•• rchild of M is made to point to K
•• rchild of N is made to point to J
•• rchild of O is made to point to L.

Similarly, the working of the left in threaded binary tree is illustrated in
Fig. 9.52. In this case the left child of a node points to the previous node
in the sequence of inorder traversal.
As shown in Fig. 9.52, thread of N points to K. Here, K is the predecessor
of N in inorder traversal.
Hence, the pointer points to K. In this type of tree the pointers pointing
to other nodes are as follows:
•• l child of N is made to point to K
•• l child of O is made to point to J.

Figure 9.52 Left in Threaded Binary Tree


Figure 9.53 illustrates the operation of fully threaded binary tree. Right
and left children are used for pointing to the nodes in inorder traversal
method.
•• rchild of M is made to point to K
•• l child of N is made to point to K
•• rchild of N is made to point to J
•• l child of O is made to point to J
•• rchild of O is made to point to L.


Figure 9.53 Fully Threaded Binary Tree


Fully threaded binary tree with header is described in Fig. 9.54.
•• rchild of M is made to point to K
•• lchild of M is made to point to Header
•• lchild of N is made to point to K
•• rchild of N is made to point to J
•• lchild of O is made to point to J
•• rchild of O is made to point to L
•• rchild of L is made to point to Header.

Figure 9.54 Fully Threaded Tree with Header


The working of the fully threaded binary tree is illustrated in Fig. 9.54.
In this case the left child of node points to the previous node in the
sequence of inorder traversal and right child of the node points to the
successor node in the inorder traversal of the node. In the previous two
methods left and right pointers of the first and last node in the inorder
list are NULL. But in this method the left pointer of the first node points
to the header node and the right pointer of the last node points to the
header node. The header node’s right pointer points to itself, and the left
pointer points to the root node of the tree. The use of the header is to
store the starting address of the tree. In the fully threaded binary tree
each and every pointer points to some other node. In this tree we do not
find any NULL pointers.
In the Fig. 9.54 the first node in the inorder is M and its left pointer
points to the left pointer of the header node. Similarly, the last node
in the inorder is L and its right pointer points to the left pointer of the
header.
In memory representation of threaded binary tree, it is very important to
consider the difference between thread and normal pointer. The threaded
binary tree node is represented in Fig. 9.55.

Figure 9.55 Representation of the Node


Each node of any binary tree stores the three fields. The left field stores
the left thread value and the right field stores the right thread value. The
middle field contains the actual value of the node, i.e. data.

13. (a) (i)


The insertion routine is conceptually simple. To insert X into tree T,
proceed down the tree as you would with a Find. If X is found, do
nothing (or “update” something). Otherwise, insert X at the last spot
on the path traversed. Figure 4.21 shows what happens. To insert 5, we
traverse the tree as though a Find were occurring. At the node with key
4, we need to go right, but there is no subtree, so 5 is not in the tree, and
this is the correct spot.
Duplicates can be handled by keeping an extra field in the node record
indicating the frequency of occurrence. This adds some extra space to
the entire tree, but is better than putting duplicates in the tree (which


tends to make the tree very deep). Of course this strategy does not work
if the key is only part of a larger structure. If that is the case, then we
can keep all of the structures that have the same key in an auxiliary data
structure, such as a list or another search tree.

Figure 4.21 Binary search trees before and after inserting 5

Figure 4.22 Insertion into a binary search tree


Figure 4.22 shows the code for the insertion routine. Since T points to
the root of the tree, and the root changes on the first insertion, Insert is
written as a function that returns a pointer to the root of the new tree.
Lines 8 and 10 recursively insert and attach X into the appropriate
subtree.


13. (a) (ii)


Refer Qn No 12 (a) Nov/Dec 2013

13. (b) (i)


A B-tree of order 4 is more popularly known as a 2-3-4 tree, and a B-tree
of order 3 is known as a 2-3 tree. We will describe the operation of
B-trees by using the special case of 2-3 trees. Our starting point is the
2-3 tree that follows.

We have drawn interior nodes (nonleaves) in ellipses, which contain


the two pieces of data for each node. A dash line as a second piece
of information in an interior node indicates that the node has only two
children. Leaves are drawn in boxes, which contain the keys. The keys in
the leaves are ordered. To perform a Find, we start at the root and branch
in one of (at most) three directions, depending on the relation of the key
we are looking for to the two (possibly one) values stored at the node.
To perform an Insert on a previously unseen key, X, we follow the path
as though we were performing a Find. When we get to a leaf node, we
have found the correct place to put X. Thus, to insert a node with key 18,
we can just add it to a leaf without causing any violations of the 2-3 tree
properties. The result is shown in the following figure.

Unfortunately, since a leaf can hold only two or three keys, this might
not always be possible. If we now try to insert 1 into the tree, we find


that the node where it belongs is already full. Placing our new key into
this node would give it a fourth element, which is not allowed. This
can be solved by making two nodes of two keys each and adjusting the
information in the parent.

Unfortunately, this idea does not always work, as can be seen by an


attempt to insert 19 into the current tree. If we make two nodes of two
keys each, we obtain the following tree.

This tree has an internal node with four children, but we only allow
three per node. The solution is simple. We merely split this node into
two nodes with two children. Of course, this node might be one of three
children itself, and thus splitting it would create a problem for its parent
(which would now have four children), but we can keep on splitting
nodes on the way up to the root until we either get to the root or find a
node with only two children. In our case, we can get by with splitting
only the first internal node we see, obtaining the following tree.

Programming and Data Structures (Nov/Dec 2012) 2.109

If we now insert an element with key 28, we create a leaf with four
keys, which is split into two leaves of two keys each:

13. (b) (ii)


The splaying strategy is similar to the rotation idea above, except that
we are a little more selective about how rotations are performed. We
will still rotate bottom up along the access path. Let X be a (nonroot)
node on the access path at which we are rotating. If the parent of X is
the root of the tree, we merely rotate X and the root. This is the last
rotation along the access path. Otherwise, X has both a parent (P) and a
grandparent (G), and there are two cases, plus symmetries, to consider.
The first case is the zig-zag case (see Fig. 4.44). Here X is a right child
and P is a left child (or vice versa). If this is the case, we perform
a double rotation, exactly like an avl double rotation. Otherwise, we
have a zig-zig case: X and P are either both left children or both right
children. In that case, we transform the tree on the left of Figure 4.45
to the tree on the right.

Figure 4.44 Zig-zag


Figure 4.45 Zig-zig


As an example, consider the tree from the last example, with a Find on
k1:

The first splay step is at k1, and is clearly a zig-zag, so we perform


a standard avl double rotation using k1, k2, and k3. The resulting tree
follows.

The next splay step at k1 is a zig-zig, so we do the zig-zig rotation with k1,
k4, and k5, obtaining the final tree.

Although it is hard to see from small examples, splaying not only moves


the accessed node to the root, but also has the effect of roughly halving
the depth of most nodes on the access path (some shallow nodes are
pushed down at most two levels).


To see the difference that splaying makes over simple rotation, consider
again the effect of inserting keys 1, 2, 3, ..., N into an initially empty
tree. This takes a total of O(N), as before, and yields the same tree as
simple rotations. Figure 4.46 shows the result of splaying at the node
with key 1. The difference is that after an access of the node with key
1, which takes N - 1 units, the access on the node with key 2 will only
take about N/2 units instead of N - 2 units; there are no nodes quite as
deep as before.
An access on the node with key 2 will bring nodes to within N/4 of the
root, and this is repeated until the depth becomes roughly log N (an
example with N = 7 is too small to see the effect well). Figures 4.47 to
4.55 show the result of accessing keys 1 through 9 in a 32-node tree that
originally contains only left children. Thus we do not get the same bad
behavior from splay trees that is prevalent in the simple rotation strategy.
(Actually, this turns out to be a very good case. A rather complicated
proof shows that for this example, the N accesses take a total of O(N)
time.)
These figures highlight the fundamental and crucial property of splay
trees. When access paths are long, thus leading to a longer-than-normal
search time, the rotations tend to be good for future operations. When
accesses are cheap, the rotations are not as good and can be bad.
The extreme case is the initial tree formed by the insertions. All the
insertions were constant-time operations leading to a bad initial tree. At
that point in time, we had a very bad tree, but we were running ahead
of schedule and had the compensation of less total running time. Then
a couple of really horrible accesses left a nearly balanced tree, but the
cost was that we had to give back some of the time that had been saved.
The main theorem, which we will prove in Chapter 11, is that we never
fall behind a pace of O(log N) per operation: We are always on schedule,
even though there are occasionally bad operations.

Figure 4.46 Result of splaying at node 1


Figure 4.47 Result of splaying at node 1 a tree of all left children

Figure 4.48 Result of splaying previous tree at node 2


We can perform deletion by accessing the node to be deleted. This puts
the node at the root. If it is deleted, we get two subtrees TL and TR (left
and right). If we find the largest element in TL (which is easy), then this
element is rotated to the root of TL, and TL will now have a root with no
right child. We can finish the deletion by making TR the right child.

Figure 4.49 Result of splaying previous tree at node 3


Figure 4.50 Result of splaying previous tree at node 4

Figure 4.51 Result of splaying previous tree at node 5

Figure 4.52 Result of splaying previous tree at node 6

Figure 4.53 Result of splaying previous tree at node 7

Figure 4.54 Result of splaying previous tree at node 8


Figure 4.55 Result of splaying previous tree at node 9


The analysis of splay trees is difficult, because it must take into account
the ever-changing structure of the tree. On the other hand, splay trees
are much simpler to program than avl trees, since there are fewer cases
to consider and no balance information to maintain. Some empirical
evidence suggests that this translates into faster code in practice,
although the case for this is far from complete.

14. (a) (i)


Refer Qn no 13 (a) Nov/Dec 2013

14. (a) (ii)


Refer Qn no 13 (b) Nov/Dec 2013

14. (b) (i)


Path compression is performed during a Find operation. Suppose the
operation is Find(X). Then the effect of path compression is that every
node on the path from X to the root has its parent changed to the root.
Example:
The following figure shows the effect of path compression after Find(15).
(Figure: root 1 with children 2, 3, 5, 9, 13, and 15; beneath them nodes
4, 6, 7, 10, 11, 14, and 16, and beneath those, 8 and 12. Every node on
the path from 15 to the root now points directly to the root.)


The algorithm for path compression is:
SetType Find(ElementType X, DisjSet S)
{
    if (S[X] <= 0)
        return X;
    else
        return S[X] = Find(S[X], S);
}

14. (b) (ii)


Refer Qn no 13 (a) Nov/Dec 2013

15. (a) (i)


Refer Qn no 14 (a) Nov/Dec 2013

15. (a) (ii)


Refer Qn no 14 (b) Nov/Dec 2013

15. (b) (ii)


Consider the three figures in Figure 9.68. A popular puzzle is to
reconstruct these figures using a pen, drawing each line exactly once.
The pen may not be lifted from the paper while the drawing is being
performed. As an extra challenge, make the pen finish at the same point
at which it started. This puzzle has a surprisingly simple solution. Stop
reading if you would like to try to solve it.

Figure 9.68 Three drawings


The first figure can be drawn only if the starting point is the lower left-
or right-hand corner, and it is not possible to finish at the starting point.
The second figure is easily drawn with the finishing point the same as
the starting point, but the third figure cannot be drawn at all within the
parameters of the puzzle.


We can convert this problem to a graph theory problem by assigning a


vertex to each intersection. Then the edges can be assigned in the natural
manner, as in Figure 9.69.

Figure 9.69 Conversion of puzzle to graph


After this conversion is performed, we must find a path in the graph that
visits every edge exactly once. If we are to solve the “extra challenge,”
then we must find a cycle that visits every edge exactly once. This graph
problem was solved in 1736 by Euler and marked the beginning of
graph theory. The problem is thus commonly referred to as an Euler
path (sometimes Euler tour) or Euler circuit problem, depending on the
specific problem statement. The Euler tour and Euler circuit problems,
though slightly different, have the same basic solution. Thus, we will
consider the Euler circuit problem in this section.
The first observation that can be made is that an Euler circuit, which
must end on its starting vertex, is possible only if the graph is connected
and each vertex has an even degree (number of edges). This is because,
on the Euler circuit, a vertex is entered and then left. If any vertex v has
odd degree, then eventually we will reach the point where only one edge
into v is unvisited, and taking it will strand us at v. If exactly two vertices
have odd degree, an Euler tour, which must visit every edge but need not
return to its starting vertex, is still possible if we start at one of the odd-
degree vertices and finish at the other. If more than two vertices have
odd degree, then an Euler tour is not possible.
The observations of the preceding paragraph provide us with a necessary
condition for the existence of an Euler circuit. It does not, however, tell
us that all connected graphs that satisfy this property must have an Euler
circuit, nor does it give us guidance on how to find one. It turns out that
the necessary condition is also sufficient. That is, any connected graph,
all of whose vertices have even degree, must have an Euler circuit.
Furthermore, a circuit can be found in linear time.
We can assume that we know that an Euler circuit exists, since we can
test the necessary and sufficient condition in linear time. Then the basic
algorithm is to perform a depth-first search. There are a surprisingly


large number of “obvious” solutions that do not work. Some of these are
presented in the exercises.
The main problem is that we might visit a portion of the graph and return
to the starting point prematurely. If all the edges coming out of the start
vertex have been used up, then part of the graph is untraversed. The
easiest way to fix this is to find the first vertex on this path that has an
untraversed edge, and perform another depth-first search. This will give
another circuit, which can be spliced into the original. This is continued
until all edges have been traversed.
As an example, consider the graph in Figure 9.70. It is easily seen that
this graph has an Euler circuit. Suppose we start at vertex 5, and traverse
the circuit 5, 4, 10, 5. Then we are stuck, and most of the graph is still
untraversed. The situation is shown in Figure 9.71.

Figure 9.70 Graph for Euler circuit problem

Figure 9.71 Graph remaining after 5, 4, 10, 5


We then continue from vertex 4, which still has unexplored edges. A
depth-first search might come up with the path 4, 1, 3, 7, 4, 11, 10, 7, 9,
3, 4. If we splice this path into the previous path of 5, 4, 10, 5, then we
get a new path of 5, 4, 1, 3, 7, 4, 11, 10, 7, 9, 3, 4, 10, 5.
The graph that remains after this is shown in Figure 9.72. Notice that in
this graph all the vertices must have even degree, so we are guaranteed
to find a cycle to add. The remaining graph might not be connected, but
this is not important. The next vertex on the path that has untraversed
edges is vertex 3. A possible circuit would then be 3, 2, 8, 9, 6, 3. When
spliced in, this gives the path 5, 4, 1, 3, 2, 8, 9, 6, 3, 7, 4, 11, 10, 7, 9, 3, 4, 10, 5.


Figure 9.72 Graph after the path 5, 4, 1, 3, 7, 4, 11, 10, 7, 9, 3, 4, 10, 5
The graph that remains is in Figure 9.73. On this path, the next vertex
with an untraversed edge is 9, and the algorithm finds the circuit 9, 12,
10, 9. When this is added to the current path, a circuit of 5, 4, 1, 3, 2, 8,
9, 12, 10, 9, 6, 3, 7, 4, 11, 10, 7, 9, 3, 4, 10, 5 is obtained. As all the edges
are traversed, the algorithm terminates with an Euler circuit.
To make this algorithm efficient, we must use appropriate data
structures. We will sketch some of the ideas, leaving the implementation
as an exercise. To make splicing simple, the path should be maintained
as a linked list. To avoid repetitious scanning of adjacency lists, we must
maintain, for each adjacency list, a pointer to the last edge scanned. When
a path is spliced in, the search for a new vertex from which to perform
the next depth-first search must begin at the start of the splice point. This
guarantees that the total work performed on the vertex search phase is
O(|E|) during the entire life of the algorithm. With the appropriate data
structures, the running time of the algorithm is O(|E| + | V |).
A very similar problem is to find a simple cycle, in an undirected graph,
that visits every vertex. This is known as the Hamiltonian cycle problem.
Although it seems almost identical to the Euler circuit problem, no
efficient algorithm for it is known.



B.E./B.Tech. Degree Examination,
May/June 2012
Third Semester
Computer Science and Engineering
(Common to Information Technology)

PROGRAMMING AND DATA STRUCTURES


(Regulation 2008/2010)
Time: Three hours                    Maximum: 100 marks
Answer ALL questions

PART A (10 × 2 = 20 marks)

1. What are the operations can be done with set ADT?

2. Give any three applications of linked list.

3. Draw the expression tree for ((b + c)*a) + ((d + e*f) + g).

4. What are the advantages of threaded binary tree?

5. What are the differences between binary search tree and AVL tree?

6. What is the purpose of splay tree?

7. What is meant by open addressing?

8. What is the purpose of dynamic hashing?

9. Define critical path.

10. What is weakly connected graph?


Part B (5 × 16 = 80 marks)

11. (a) (i) Explain the operations of queue with C function. (8)
(ii) Explain the array implementation of stacks. (8)
Or
(b) Explain the cursor implementation of linked list. (16)
12. (a) Explain the traversal of binary tree with examples. (16)
Or
(b) Describe the operations of binary search tree with functions. (16)
13. (a) Briefly explain the single rotation and double rotation of AVL tree
with examples. (16)
Or
(b) Explain the binary heap operations with examples. (16)
14. (a) For the given input (4371, 1323, 6173, 4199, 4344, 9679, 1989) and
a hash function h(X) = X mod 10, show the resulting:
(i) Separate chaining hash table.
(ii) Open addressing hash table using linear probing.
(iii) Open addressing hash table using quadratic probing.
(iv) Open addressing hash table with second hash function.
h2(X) = 7 – (X mod 7). (16)
Or
(b) Explain the smart union algorithm with example. (16)
15. (a) (i) Explain Prim's algorithm with an example. (8)
(ii) Explain topological sort with an example. (8)
Or
(b) (i) Explain Kruskal's algorithm with an example. (8)
(ii) Explain Dijkstra's algorithm with an example. (8)

Programming and Data Structures (May/June 2012) 2.121

Solutions
Part A

1. In programming, situations occur when the built-in data types are not
enough to handle complex data structures. It is the programmer's
responsibility to create such a special kind of data type. The programmer
needs to define everything related to the data type: how the data
values are stored, the possible operations that can be carried out on
the custom data type, and the fact that it must behave like a built-in
type and not create any confusion while coding the program. Such custom
data types are called abstract data types.

2. Applications of Linked List:


•• Polynomial Manipulation
•• Linked Dictionary
•• Addition of Long Positive Integers

3. Refer Qn no 4 from Nov/Dec 2013

4. Threaded binary tree can be traversed by any one of the three traversals,
i.e. preorder, postorder and inorder. Further, in inorder threading there
may be one-way inorder threading or two–way inorder threading. In one
way inorder threading the right child of the node would point to the
next node in the sequence of the inorder traversal. Such a threaded tree
is called a right in-threaded binary tree. Also, the left child of the node
would point to the previous node in the sequence of inorder traversal.
This type of tree is called as left in threaded binary tree. In case both the
children of the nodes point to other nodes then such a tree is called as
fully threaded binary tree.

5. Binary Search Tree


A binary search tree is also called as binary sorted tree. Binary search
tree is either empty or each node N of tree satisfies the following
property:
1.  The key value in the left child is not more than the value of root.
2. The key value in the right child is more than or identical to the value
of root.


3. All the sub-trees, i.e. left and right sub-trees follow the two rules
mentioned above.
AVL TREE
The AVL tree is a binary search tree. It is also known as height balanced
tree. It is used to minimize the search time by keeping every node of the
tree completely balanced in terms of height. The balance factor plays an
important role in the insertion and deletion of elements in an AVL tree.
The balance factor decides whether all the nodes of a tree are completely
balanced.

6. A splay tree is an efficient implementation of a balanced binary search


tree that takes advantage of locality in the keys used in incoming lookup
requests. Splay trees can provide good performance in this situation.

7. In “open addressing”, the location of an item is not necessarily determined
by its hash value alone: on a collision, alternative cells are tried in
sequence until an empty cell is found. This method is also called closed
hashing.

8. The purpose of dynamic hashing is the Minimal space overhead where


no buckets need be reserved for future use. Bucket address table only
contains one pointer for each hash value of current prefix length.

9. The critical path is an algorithm which calculates the early start and
early finishing time for an individual activity in a forward pass through
the network.

10. When a directed graph replaces its direct edges with the undirected
edges, it produces a connected graph is said to be weakly connected
graph.

Part B

11. (a) (i)


Two operations can be carried out on queue.
Insertion of Element
Table 5.1 Algorithm for Insertion in Queue


A program based on algorithm, which is shown in Table 5.1, is described


below.
# include <stdio.h>
# include <conio.h>
# include <process.h>
# define S 5
void main()
{
int queue[S];
int r=0,n;
clrscr();
while(1)
{
if(r>=S)
{
printf("\n Queue overflow");
break;
}
else
printf("Enter a number: ");
scanf("%d",&n);
queue[r]=n;
r++;
}
printf("\n Queue elements are: ");


for(n=0;n<S;n++)
printf(" %d ",queue[n]);
}
OUTPUT
Enter a number: 3
Enter a number: 4
Enter a number: 5
Enter a number: 6
Enter a number: 7
Queue overflow

Queue elements are: 3 4 5 6 7

Explanation:
This is a simple example of a queue that contains a maximum of 5 elements.
All the elements are entered and stored in the queue. The queue is
implemented by declaring queue[S]. The while loop and scanf() statement
read elements through the keyboard. The rear is incremented to get the
successive position in the queue. The for loop displays the elements.
Deletion of Element
The following program explains insertion and deletion operations.
# include<stdio.h>
# include <conio.h>
void main()
{
int queue[7]={11,12,13,14,15,16,17};
int i,r=6,f=0,n;
clrscr();
printf("\nThe Elements of queue are as follows:-\n");
for(i=0;i<7;i++)
printf("%2d ",queue[i]);
printf("\nInitial values rear=%d front=%d",r,f);
printf("\n\nHow many elements u want to delete: ");
scanf("%d",&n);
while(f<n)
{


queue[f]=0;
f++;
printf("\nrear=%d front=%d",r,f);
}
printf("\n Queue elements are: ");
for(n=0;n<7;n++)
printf(" %d ",queue[n]);
}
OUTPUT
The Elements of queue are as follows:-
11 12 13 14 15 16 17
Initial values rear=6 front=0
How many elements u want to delete: 3
rear=6 front=1
rear=6 front=2
rear=6 front=3

Queue elements are: 0 0 0 14 15 16 17

Explanation:
This program is the second part of the first program. We have started
exactly from where we ended the last program. The output of the last
program is starting of this program. The array is initialized with entered
values and rear and front are initialized to six and zero, respectively.
Static Implementation
Static implementation can be achieved using arrays. Though it is a
very simple method, it has a few limitations. Once the size of an array is
declared, it cannot be modified during program execution. It is
also inefficient in its utilization of memory. When an array is declared,
memory equal to the array size is allocated. The vacant space of the
stack (array) also occupies memory space. In both cases, if we store fewer
elements than declared, memory is wasted, and if we want to store more
elements than declared, the array cannot be expanded. It is suitable only
when we know exactly the number of elements to be stored.
Example: Write a program to explain working of stack using array
implementation.
# include <stdio.h>


# include <conio.h>
# include <process.h>
void main()
{
int j,stack[5]={0};
int p=0;
clrscr();
printf("Enter Elements, put zero to exit: \n");
while(1)
{
scanf("%d",&stack[p]);
if(stack[p]==0)
{
printf("\n By choice terminated: ");
exit(0);
}
p++;
if(p>4)
{
printf("Stack is full \n");
break;
}
else
printf("Stack is empty\n");
}
printf("\n Elements of stack are: ");
for(j=0;j<5;j++)
printf(" %d ",stack[j]);
}
OUTPUT
Enter Elements, put zero to exit:
4
Stack is empty
2


Stack is empty
6
Stack is empty
8
Stack is empty
3
Stack is full

Elements of stack are: 4 2 6 8 3

Explanation:
In this program, an array stack[5] is declared. Using a while loop, elements
are entered through the keyboard and placed in the array. The value of
variable p is initially zero. The variable p increases in every iteration
and acts as the top of the stack. The if statement checks the value of p.
If the value of p is greater than four, the stack is full and no more
elements can be added. When the value of p is less than four, more elements
can be inserted in the stack. The for loop and printf statements display
the array elements. In a stack, random insertion or deletion of an element
is not possible. (Though in an array it is possible, keep in mind that we
are treating the array as a stack; hence, we have to follow the restrictions
that exist in the stack implementation.) If we want to delete any particular
element, we have to delete every element present before that element.
Example 4.2 explains deletion of elements.

11. (b)
Refer Qn no 11. (b) May/June 2013

12. (a)
Three parameters are needed for formation of binary tree. They are
node, left and right sub-trees. Traversing is one of the most important
operations done on binary tree and frequently this operation is carried
on data structures. Traversal means passing through every node of the
tree one by one. Every node is traversed only once. Assume, root is
indicated by O, left sub-tree as L and right sub-tree as R. The following
traversal combinations are possible:
1. ORL - ROOT - RIGHT-LEFT
2. OLR - ROOT - LEFT-RIGHT
3. LOR - LEFT - ROOT- RIGHT


4. LRO - LEFT - RIGHT- ROOT


5. ROL - RIGHT - ROOT - LEFT
6. RLO - RIGHT - LEFT-ROOT
Out of six methods only three are standard and are discussed in this
chapter. In traversing always right sub-tree is traversed after left sub-
tree. Hence, the OLR is preorder, LOR is inorder and LRO is postorder.
Figure 9.22 shows a model tree (binary tree)

Figure 9.22 A Model Tree


The inorder representation of the above tree is P-N-Q-M-R-O-S-V.
Traversing is a common operation on binary tree. The binary tree can
be used to represent an arithmetic expression. Here, divide and conquer
technique is used to convert an expression into a binary tree. The
procedure to implement it is as follows.
The expression for which the following tree has been drawn is (X*Y)+Z.
Fig. 9.23 represents the expression. Using the following three methods,
the traversing operation can be performed. They are:

Figure 9.23 An Arithmetic Expression in Binary Tree Form


1. Preorder traversal
2. Inorder traversal


3. Postorder traversal.
All the above three types of traversing methods are explained below.
Inorder Traversal
The functioning of inorder traversal of a non-empty binary tree is as
follows:
1. Firstly, traverse the left sub-tree in inorder.
2. Next, visit the root node.
3. At last, traverse the right sub-tree in inorder.
In the inorder traversal firstly the left sub-tree is traversed recursively
in inorder. Then the root node is traversed. After visiting the root node,
the right sub-tree is traversed recursively in inorder. Fig. 9.22 illustrates
the binary tree with inorder traversal. The inorder traversal for the tree is
P-N-Q-M-R-O-S-V. It can be illustrated as per Fig. 9.24.

Figure 9.24 Inorder Traversal


The left part constitutes P, N, and Q as the left sub-tree of root and R, O,
S, and V are the right sub-tree.
Figure 9.25 depicts another example of inorder traversal. The inorder
traversal of Fig. 9.25 is C-B-D-A-F-E-G.

Figure 9.25 Inorder Traversal


Example: Write a program for inserting the elements into the tree and
traverse the tree by the inorder.
# include <stdio.h>
# include <stdlib.h>
# include <conio.h>
struct tree
{
long data;
struct tree *left;
struct tree *right;
};
struct tree *btree=NULL;
struct tree *insert(struct tree*btree,long digit);
void inorder(struct tree*btree);
void main()
{
long digit;
clrscr();
puts("Enter integers: and 0 to quit");
scanf("%ld",&digit);
while(digit!=0)
{
btree= insert(btree,digit);
scanf("%ld",&digit);
}
puts("Inorder traversing of btree:\n");
inorder(btree);
}
struct tree*insert(struct tree*btree,long digit)
{
if(btree==NULL)
{
btree=(struct tree*) malloc(sizeof(struct tree));
btree->left=btree->right=NULL;
btree->data=digit;


}
else
{
if(digit<btree->data)
btree->left=insert(btree->left,digit);
else
{
if(digit>btree->data)
btree->right=insert(btree->right,digit);
else
if(digit==btree->data)
{
puts("Duplicates nodes: Program exited");
exit(0);
}
}
}
return(btree);
}
void inorder(struct tree*btree)
{
if(btree!=NULL)
{
inorder(btree->left);
printf("%4ld",btree->data);
inorder(btree->right);
}
}
OUTPUT
Enter integers: and 0 to quit
6 1 2 3 7 8 9 0
Inorder traversing of btree:
1 2 3 6 7 8 9

Explanation:


This program is used to evaluate the inorder of the given tree. The
given binary tree is stored in *btree. The elements are inserted by using
the insert(). The inorder traversing, i.e. left, root and right is done by
inorder(). Fig. 9.26 illustrates the binary tree.
The inorder of binary tree is 1, 2, 3, 6, 7, 8, and 9 as shown in Fig. 9.26.

Figure 9.26 Binary Tree for Inorder


Preorder Traversal
The node is visited before the sub-trees. The following is the procedure
for preorder traversal of nonempty binary tree.
1. Firstly, visit the root node (N).
2. Then, traverse the left sub-tree in preorder (L).
3. At last, traverse the right sub-tree in preorder (R).
The preorder is recursive in operation. In this type, first root node is
visited and later its left and right sub-trees. Consider the Fig. 9.27 for
preorder traversing.

Figure 9.27 Tree for Preorder Traversal


The preorder traversing for Fig. 9.27 is M, N, P, Q, O, R, S, and V. This
can also be shown in Fig. 9.28.


In this traversing the root comes first and the left sub-tree and right sub-
tree at last.

Figure 9.28 Preorder Traversal


In the preorder the left sub-tree appears as N, P, and Q and right sub-tree
appears as O, R, S, and V.
Example: Write a program for inserting the elements in the tree and
display them in preorder.
# include <stdio.h>
# include <stdlib.h>
# include <conio.h>
struct tree
{
long data;
struct tree *left;
struct tree *right;
};
struct tree *btree=NULL;
struct tree *insert(struct tree*btree,long digit);
void preorder(struct tree*btree);
void main()
{
long digit;
clrscr();
puts("Enter integers: and 0 to quit");
scanf("%ld",&digit);
while(digit!=0)
{
btree= insert(btree,digit);
scanf("%ld",&digit);
}


puts("Preorder traversing btree:\n");


preorder(btree);
}
struct tree*insert(struct tree*btree,long digit)
{
if(btree==NULL)
{
btree=(struct tree*) malloc(sizeof(struct tree));
btree->left=btree->right=NULL;
btree->data=digit;
}
else
{
if(digit<btree->data)
btree->left=insert(btree->left,digit);
else
if(digit>btree->data)
btree->right=insert(btree->right,digit);
else
if(digit==btree->data)
{
puts("Duplicates nodes: Program exited");
exit(0);
}
}
return(btree);
}
void preorder(struct tree*btree)
{
if(btree!=NULL)
{
printf("%4ld",btree->data);
preorder(btree->left);
preorder(btree->right);


}
}
OUTPUT
Enter integers: and 0 to quit
5 2 1 7 0
Preorder traversing btree:

5 2 1 7

Explanation:
The program stores the binary tree in *btree structure variable of the
structure tree. By invoking the insert() the elements are inserted. The
preorder traversing is done by using preorder(). Fig. 9.29 shows the
*btree , which is inserted in this program.

Figure 9.29 Binary Tree


The preorder of the binary tree is 5, 2, 1, 7.
Postorder Traversal
In the postorder traversal the traversing is done firstly left and right
sub-trees before visiting the root. The postorder traversal of non-empty
binary tree is to be implemented as follows:
1. Firstly, traverse the left sub-tree in postorder style.
2. Then, traverse the right sub tree in postorder.
3. At last, visit the root node (N).
In this type, the left and right sub-trees are processed recursively. The
left sub-tree is traversed first in postorder. After this, the right sub-tree is
traversed in post order. At last, the data of the root node is shown.
The postorder traversing for Fig. 9.30 is P, Q, N, R, V, S, O, and M. This
can also be shown as in Fig. 9.31. In this traversing the left sub-tree is
traversed first, then right sub-tree and at last root.


In the postorder the left sub-tree is P, Q and N and the right sub-tree is
R, V, S and O.

Figure 9.30 Tree for Post Order

Figure 9.31 Postorder Traverse


Example: Program for inserting the nodes in tree and traversing these
nodes by using the postorder method.
# include <stdio.h>
# include <conio.h>
# include <stdlib.h>
struct tree
{
long data;
struct tree *left;
struct tree *right;
};
struct tree *btree=NULL;
struct tree *insert(struct tree*btree,long digit);
void postorder(struct tree*btree);
void main()
{
long digit;




clrscr();
puts("Enter integers: and 0 to quit");
scanf("%ld",&digit);
while(digit!=0)
{
btree= insert(btree,digit);
scanf("%ld",&digit);
}
puts("Postorder traversing btree:\n");
postorder(btree);
}
struct tree*insert(struct tree*btree,long digit)
{
if(btree==NULL)
{
btree=(struct tree*) malloc(sizeof(struct tree));
btree->left=btree->right=NULL;
btree->data=digit;
}
else
{
if(digit<btree->data)
btree->left=insert(btree->left,digit);
else
if(digit>btree->data)
btree->right=insert(btree->right,digit);
else
if(digit==btree->data)
{




puts("Duplicates nodes: Program exited");


exit(0);
}
}
return(btree);
}
void postorder(struct tree*btree)
{
if(btree!=NULL)
{
postorder(btree->left);
postorder(btree->right);
printf("%4ld",btree->data);
}
}
OUTPUT
Enter integers: and 0 to quit
5 3 4 1 9 6 7 0
Postorder traversing btree:
1 4 3 7 6 9 5
Explanation:
In this program, the binary tree is stored in *btree, the structure
variable of the structure tree. insert() is invoked to insert the elements
into the *btree, and postorder() traverses the binary tree in postorder
manner. Fig. 9.32 shows the btree built by the program.
The postorder traversing of the tree in Fig. 9.32 is 1, 4, 3, 7, 6, 9 and 5.




Figure 9.32 Binary tree of Example 9.4

12. (b)
Refer Qn no 12. (b) Nov/Dec 2013

13. (a)
Refer Qn no 12. (a) Nov/Dec 2013

13. (b)
It is easy (both conceptually and practically) to perform the two required
operations. All the work involves ensuring that the heap order property
is maintained.
Insert
To insert an element X into the heap, we create a hole in the next
available location, since otherwise the tree will not be complete. If X can
be placed in the hole without violating heap order, then we do so and are
done. Otherwise we slide the element that is in the hole’s parent node
into the hole, thus bubbling the hole up toward the root. We continue
this process until X can be placed in the hole. Figure 6.6 shows that to
insert 14, we create a hole in the next available heap location. Inserting
14 in the hole would violate the heap order property, so 31 is slid down
into the hole. This strategy is continued in Figure 6.7 until the correct
location for 14 is found.




Figure 6.6 Attempt to insert 14: creating the hole, and bubbling the
hole up

Figure 6.7 The remaining two steps to insert 14 in previous heap


This general strategy is known as a percolate up; the new element is
percolated up the heap until the correct location is found. Insertion is
easily implemented with the code shown in Figure 6.8.

Figure 6.8 Procedure to insert into a binary heap


We could have implemented the percolation in the Insert routine by
performing repeated swaps until the correct order was established, but a




swap requires three assignment statements. If an element is percolated


up d levels, the number of assignments performed by the swaps would
be 3d. Our method uses d + 1 assignments.
If the element to be inserted is the new minimum, it will be pushed all
the way to the top. At some point, i will be 1 and we will want to break
out of the while loop. We could do this with an explicit test, but we
have chosen to put a very small value in position 0 in order to make the
while loop terminate. This value must be guaranteed to be smaller than
(or equal to) any element in the heap; it is known as a sentinel. This
idea is similar to the use of header nodes in linked lists. By adding a
dummy piece of information, we avoid a test that is executed once per
loop iteration, thus saving some time.
The time to do the insertion could be as much as O(log N), if the element
to be inserted is the new minimum and is percolated all the way to the
root. On average, the percolation terminates early; it has been shown that
2.607 comparisons are required on average to perform an insert, so the
average Insert moves an element up 1.607 levels.
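Figure 6.8 is not reproduced here; a minimal C sketch of the percolate-up insertion, assuming an array-based heap with the sentinel kept in Elements[0] as described above (the structure and field names are illustrative assumptions, not taken from the figure):

```c
#include <limits.h>

#define MinData INT_MIN        /* sentinel value stored at Elements[0] */

typedef struct {
    int Size;
    int Elements[100];         /* Elements[1] is the root */
} Heap;

/* Percolate up: open a hole at the next free position, then slide each
   parent down into the hole until X can be placed without violating heap
   order. The sentinel makes the loop stop at the root without an extra
   test; percolating up d levels costs d + 1 assignments. */
void Insert(int X, Heap *H)
{
    int i;
    for (i = ++H->Size; H->Elements[i / 2] > X; i /= 2)
        H->Elements[i] = H->Elements[i / 2];
    H->Elements[i] = X;
}
```

The sketch omits the capacity check a full implementation would need before incrementing Size.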
DeleteMin
DeleteMins are handled in a similar manner as insertions. Finding the
minimum is easy; the hard part is removing it. When the minimum is
removed, a hole is created at the root. Since the heap now becomes
one smaller, it follows that the last element X in the heap must move
somewhere in the heap. If X can be placed in the hole, then we are done.
This is unlikely, so we slide the smaller of the hole’s children into the
hole, thus pushing the hole down one level. We repeat this step until X
can be placed in the hole. Thus, our action is to place X in its correct spot
along a path from the root containing minimum children.
In Figure 6.9 the left figure shows a heap prior to the DeleteMin. After
13 is removed, we must now try to place 31 in the heap. The value 31
cannot be placed in the hole, because this would violate heap order.
Thus, we place the smaller child (14) in the hole, sliding the hole down
one level (see Fig. 6.10). We repeat this again, placing 19 into the hole
and creating a new hole one level deeper. We then place 26 in the hole
and create a new hole on the bottom level. Finally, we are able to place
31 in the hole (Fig. 6.11). This general strategy is known as a percolate
down. We use the same technique as in the Insert routine to avoid the use
of swaps in this routine.




Figure 6.9 Creation of the hole at the root

Figure 6.10 Next two steps in DeleteMin

Figure 6.11 Last two steps in DeleteMin


A frequent implementation error in heaps occurs when there are an even
number of elements in the heap, and the one node that has only one
child is encountered. You must make sure not to assume that there are
always two children, so this usually involves an extra test. In the code
depicted in Figure 6.12, we’ve done this test at line 8. One extremely
tricky solution is always to ensure that your algorithm thinks every node
has two children. Do this by placing a sentinel, of value higher than any
in the heap, at the spot after the heap ends, at the start of each percolate
down when the heap size is even. You should think very carefully before
attempting this, and you must put in a prominent comment if you do use
this technique. Although this eliminates the need to test for the presence
of a right child, you cannot eliminate the requirement that you test when




you reach the bottom, because this would require a sentinel for every
leaf.

Figure 6.12 Function to perform DeleteMin in a binary heap
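Figure 6.12 is likewise not reproduced; a C sketch of DeleteMin over an assumed array-based heap (the structure is repeated here so the fragment is self-contained). The `Child != H->Size` comparison is the extra test, discussed in the text, for a node that has only one child:

```c
typedef struct {
    int Size;
    int Elements[100];         /* Elements[1] is the root; index 0 unused */
} Heap;

/* Percolate down: remove the root, then repeatedly move the smaller
   child up into the hole until the last element can be placed. */
int DeleteMin(Heap *H)
{
    int i, Child;
    int MinElement = H->Elements[1];
    int LastElement = H->Elements[H->Size--];

    for (i = 1; i * 2 <= H->Size; i = Child)
    {
        Child = i * 2;
        /* when Child == H->Size the node has no right child to compare */
        if (Child != H->Size && H->Elements[Child + 1] < H->Elements[Child])
            Child++;                               /* pick smaller child */
        if (LastElement > H->Elements[Child])
            H->Elements[i] = H->Elements[Child];   /* slide hole down */
        else
            break;
    }
    H->Elements[i] = LastElement;
    return MinElement;
}
```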


The worst-case running time for this operation is O(log N). On average,
the element that is placed at the root is percolated almost to the bottom
of the heap (which is the level it came from), so the average running time
is O(log N).

14. (a)
Refer Qn no 13. (a) May/June 2013
14. (b)
Refer Qn no 13. (b) (ii) May/June 2013




15. (a) (i)


One way to compute a minimum spanning tree is to grow the tree in
successive stages. In each stage, one node is picked as the root, and we
add an edge, and thus an associated vertex, to the tree.
At any point in the algorithm, we can see that we have a set of vertices
that have already been included in the tree; the rest of the vertices have
not. The algorithm then finds, at each stage, a new vertex to add to the
tree by choosing the edge (u, v) such that the cost of (u, v) is the smallest
among all edges where u is in the tree and v is not. Figure 9.49 shows
how this algorithm would build the minimum spanning tree, starting
from v1. Initially, v1 is in the tree as a root with no edges. Each step adds
one edge and one vertex to the tree.

Figure 9.49 Prim’s algorithm after each stage


We can see that Prim’s algorithm is essentially identical to Dijkstra’s
algorithm for shortest paths. As before, for each vertex we keep values
dv and pv and an indication of whether it is known or unknown. dv is the
weight of the shortest arc connecting v to a known vertex, and pv, as
before, is the last vertex to cause a change in dv. The rest of the algorithm
is exactly the same, with the exception that since the definition of dv is
different, so is the update rule. For this problem, the update rule is even
simpler than before: After a vertex v is selected, for each unknown w
adjacent to v, dw = min(dw, cw,v).
The initial configuration of the table is shown in Figure 9.50. v1 is
selected, and v1, v3, and v4 are updated. The table resulting from this
is shown in Figure 9.51. The next vertex selected is v4. Every vertex is




adjacent to v4. v1 is not examined, because it is known. v2 is unchanged,
because it has dv = 2 and the edge cost from v4 to v2 is 3; all the rest are
updated. Figure 9.52 shows the resulting table. The next vertex chosen
is v2 (arbitrarily breaking a tie). This does not affect any distances. Then
v3 is chosen, which affects the distance in v6, producing Figure 9.53.
Figure 9.54 results from the selection of v7, which forces v6 and v5 to be
adjusted. v6 and then v5 are selected, completing the algorithm.

Figure 9.50 Initial configuration of table used in Prim’s algorithm

Figure 9.51 The table after v1 is declared known

Figure 9.52 The table after v4 is declared known




The final table is shown in Figure 9.55. The edges in the spanning tree
can be read from the table: (v2, v1), (v3, v4), (v4, v1), (v5, v7), (v6,v7), (v7,
v4). The total cost is 16.

Figure 9.53 The table after v2 and then v3 are declared known

Figure 9.54 The table after v7 is declared known

Figure 9.55 The table after v6 and v5 are selected (Prim's algorithm
terminates)




The entire implementation of this algorithm is virtually identical to that


of Dijkstra’s algorithm, and everything that was said about the analysis
of Dijkstra’s algorithm applies here. Be aware that Prim’s algorithm runs
on undirected graphs, so when coding it, remember to put every edge in
two adjacency lists. The running time is O(|V|2) without heaps, which is
optimal for dense graphs, and O(|E| log |V|) using binary heaps, which
is good for sparse graphs.
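The stages described above can be sketched as a compact C routine (an adjacency-matrix version without heaps; the graph size NV, the vertex numbering from 0, and the convention that a cost of 0 means "no edge" are assumptions made for the example, and the graph is assumed connected):

```c
#include <limits.h>

#define NV 7   /* number of vertices (assumed for the example) */

/* One stage per iteration: pick the cheapest unknown vertex, declare it
   known, then apply the update rule dw = min(dw, c(w,v)) to each
   unknown neighbour w. Returns the total cost of the spanning tree;
   path[w] records the tree edge chosen for w. */
int prim(int cost[NV][NV], int dist[NV], int path[NV])
{
    int known[NV] = {0}, total = 0;
    for (int i = 0; i < NV; i++) { dist[i] = INT_MAX; path[i] = -1; }
    dist[0] = 0;                            /* grow the tree from vertex 0 */

    for (int stage = 0; stage < NV; stage++)
    {
        int v = -1;
        for (int i = 0; i < NV; i++)        /* cheapest unknown vertex */
            if (!known[i] && (v == -1 || dist[i] < dist[v]))
                v = i;
        known[v] = 1;
        total += dist[v];
        for (int w = 0; w < NV; w++)        /* the update rule */
            if (!known[w] && cost[v][w] && cost[v][w] < dist[w])
            {
                dist[w] = cost[v][w];
                path[w] = v;
            }
    }
    return total;
}
```

The double scan over vertices is exactly the O(|V|2) behaviour mentioned above; a heap-based version replaces the inner selection loop.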

15. (a) (ii)


A topological sort is an ordering of vertices in a directed acyclic graph,
such that if there is a path from vi to vj, then vj appears after vi in the
ordering. The graph in Figure 9.3 represents the course prerequisite
structure at a state university in Miami. A directed edge (v,w) indicates
that course v must be completed before course w may be attempted. A
topological ordering of these courses is any course sequence that does
not violate the prerequisite requirement.
It is clear that a topological ordering is not possible if the graph has
a cycle, since for two vertices v and w on the cycle, v precedes w and
w precedes v. Furthermore, the ordering is not necessarily unique; any
legal ordering will do. In the graph in Figure 9.4, v1, v2, v5, v4, v3, v7, v6
and v1, v2, v5, v4, v7, v3, v6 are both topological orderings.

Figure 9.3 An acyclic graph representing course prerequisite structure




Figure 9.4 An acyclic graph


A simple algorithm to find a topological ordering is first to find any
vertex with no incoming edges. We can then print this vertex, and
remove it, along with its edges, from the graph. Then we apply this same
strategy to the rest of the graph.
To formalize this, we define the indegree of a vertex v as the number
of edges (u, v). We compute the indegrees of all vertices in the graph.
Assuming that the Indegree array is initialized and that the graph is read
into an adjacency list, we can then apply the algorithm in Figure 9.5 to
generate a topological ordering.
The function FindNewVertexOfIndegreeZero scans the Indegree array
looking for a vertex with indegree 0 that has not already been assigned
a topological number. It returns NotAVertex if no such vertex exists; this
indicates that the graph has a cycle.

Figure 9.5 Simple topological sort pseudocode
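As Figure 9.5 is not reproduced, here is a C sketch of this simple O(|V|2) algorithm (the adjacency-matrix representation, the fixed array bound of 8, and the helper names are assumptions for illustration):

```c
#define NotAVertex -1

/* Scan the Indegree array for an unassigned vertex of indegree 0. */
static int FindNewVertexOfIndegreeZero(int Indegree[], int Assigned[],
                                       int NumVertex)
{
    for (int v = 0; v < NumVertex; v++)
        if (Indegree[v] == 0 && !Assigned[v])
            return v;
    return NotAVertex;          /* none left: the graph has a cycle */
}

/* Simple O(|V|^2) topological sort over an adjacency matrix Adj.
   Returns 1 on success, 0 if a cycle is detected. */
int TopSort(int NumVertex, int Adj[][8], int Indegree[], int TopNum[])
{
    int Assigned[8] = {0};
    for (int counter = 0; counter < NumVertex; counter++)
    {
        int v = FindNewVertexOfIndegreeZero(Indegree, Assigned, NumVertex);
        if (v == NotAVertex)
            return 0;
        Assigned[v] = 1;
        TopNum[v] = counter;
        for (int w = 0; w < NumVertex; w++)
            if (Adj[v][w])
                Indegree[w]--;  /* remove v's outgoing edges */
    }
    return 1;
}
```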




Because FindNewVertexOfIndegreeZero is a simple sequential scan of
the Indegree array, each call to it takes O(|V|) time. Since there are |V|
such calls, the running time of the algorithm is O(|V|2).
By paying more careful attention to the data structures, it is possible
to do better. The cause of the poor running time is the sequential scan
through the Indegree array. If the graph is sparse, we would expect that
only a few vertices have their indegrees updated during each iteration.
However, in the search for a vertex of indegree 0, we look at (potentially)
all the vertices, even though only a few have changed.
We can remove this inefficiency by keeping all the (unassigned) vertices
of indegree 0 in a special box. The FindNewVertexOfIndegreeZero
function then returns (and removes) any vertex in the box. When we
decrement the indegrees of the adjacent vertices, we check each vertex
and place it in the box if its indegree falls to 0.
To implement the box, we can use either a stack or a queue. First, the
indegree is computed for every vertex. Then all vertices of indegree 0
are placed on an initially empty queue. While the queue is not empty,
a vertex v is removed, and all edges adjacent to v have their indegrees
decremented. A vertex is put on the queue as soon as its indegree falls
to 0. The topological ordering then is the order in which the vertices are
dequeued. Figure 9.6 shows the status after each phase.
A pseudocode implementation of this algorithm is given in Figure
9.7. As before, we will assume that the graph is already read into an
adjacency list and that the indegrees are computed and placed in an
array. A convenient way of doing this in practice would be to place the
indegree of each vertex in the header cell. We also assume an array
TopNum, in which to place the topological numbering.

Figure 9.6 Result of applying topological sort to the graph in Figure 9.4




Figure 9.7 Pseudocode to perform topological sort


The time to perform this algorithm is O(|E| + |V|) if adjacency lists are
used. This is apparent when one realizes that the body of the for loop is
executed at most once per edge. The queue operations are done at most
once per vertex, and the initialization steps also take time proportional
to the size of the graph.
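The queue-based algorithm of Figure 9.7 can be sketched in C as follows (the adjacency-matrix representation, the fixed bound MAXV, and the function name are assumptions; the pseudocode in the book works over adjacency lists):

```c
#define MAXV 16

/* Queue-based topological sort: enqueue every vertex of indegree 0,
   then repeatedly dequeue a vertex and decrement its neighbours'
   indegrees, enqueueing any vertex whose indegree drops to 0.
   Returns 1 on success, 0 if a cycle prevents a full ordering. */
int TopSortQueue(int NumVertex, int Adj[MAXV][MAXV],
                 int Indegree[], int TopNum[])
{
    int queue[MAXV], front = 0, rear = 0, counter = 0;

    for (int v = 0; v < NumVertex; v++)
        if (Indegree[v] == 0)
            queue[rear++] = v;

    while (front < rear)
    {
        int v = queue[front++];          /* dequeue */
        TopNum[v] = counter++;
        for (int w = 0; w < NumVertex; w++)
            if (Adj[v][w] && --Indegree[w] == 0)
                queue[rear++] = w;       /* enqueue as soon as it hits 0 */
    }
    return counter == NumVertex;
}
```

Each edge is examined once and each vertex is enqueued and dequeued once, giving the O(|E| + |V|) bound stated above.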

15. (b) (i)


Refer Qn no 14. (4) Nov/Dec 2013



B.E./B.Tech. Degree Examination, NOV/DEC 2011

Third Semester
Computer Science and Engineering

DATA STRUCTURES
(Regulation 2010)

Time: Three hours                                    Maximum: 100 marks

Answer ALL Questions


Part A (10 × 2 = 20 marks)
1. Mention the advantages in the array implementation of lists.

2. Why is circular queue better than standard linear queue?

3. Draw an expression tree for the given infix expression: (a/(b*c/d + e/f*g)).

4. For the given binary search tree, if we remove the root and replace it with
something from the left sub-tree, what will be the value of the new root?
Justify your answer.

        14
       /  \
      2    22
     / \   / \
    1   5 20  30
       /  /     \
      4  17     40

5. How do we calculate the balance factor for each node in an AVL tree?

6. Draw the minheap for the following numbers 12, 42, 25, 63, 9.

7. Consider the given 4-digit numbers {4371, 1323, 6173, 4199, 4344, 9679,
1989} and a hash function h(X) = X (mod 10). Find the hash address of
each number using separate chaining.

8. Define an equivalence relation.

9. What is a minimum spanning tree?

10. In the graph shown, find whether the graph contains an Eulerian circuit.

Part B (5 × 16 = 80 marks)
11. (a) (i) Write the algorithm for converting Infix to Postfix Expression.
(ii) Transform the given expression to postfix (using stacks):

((a + b) + c*(d + e) + f)*(g + h).


Or
(b) I magine a college group that has booked some railway tickets for a
small picnic out of the town. These railway tickets are not in order so
the teacher writes down the seat number of only one ticket on each
ticket, each of them being different. Now, when the students need
to get down, they simply have to look in the ticket to find where
a particular student is. This will continue on till all of them have
come together. Then, they can finally get off the train. Choose the
appropriate data structure for performing the following operations:
(i)  Cancellation of tickets.
(ii)  Reservation of tickets.

12. (a) Explain the three standard ways of traversing a binary tree T with a
recursive algorithm. (16)
Or
(b) Write an algorithm for inserting and deleting a node in a binary
search tree. (16)
13. (a) What are AVL trees? Describe the different rotations defined for AVL
trees. Insert the following elements step by step in sequence into an
empty AVL tree: 15, 18, 20, 21, 28, 23, 30, 26. (16)
Or
Data Structures (Nov/Dec 2011) 2.5

(b) Show the result of inserting 15, 17, 6, 19, 11, 10, 13, 20, 8, 14, 12
one at a time, into an initially empty binary min heap. Also show the
result of performing three delete min operations in the final binary
min heap obtained. (18)
14. (a) Write about the different types of hashing techniques in detail. (18)
Or
(b) Explain about disjoint sets and its operation in detail. (18)
15. (a) (i) Traverse the graph using depth first algorithm starting from ‘A’.
(ii) Explain the Breadth first search technique in detail with an
example.
Or
(b) (i) Write about Prim’s algorithm.
(ii) Find the minimum cost spanning tree for the given Graph G
using Kruskal algorithm
Solutions
Part A
1.
(i) PrintList and Find can be carried out in linear time.
(ii) FindKth takes constant time.

2. It overcomes the problem of unutilized space in standard linear queues
when they are implemented as arrays.

3. For (a/(b*c/d + e/f*g)), the root of the expression tree is '/', with left
child a and right child '+'. The left sub-tree of '+' is '/' with children
'*' (over b and c) and d; the right sub-tree of '+' is '*' with children
'/' (over e and f) and g.

4. The new root must come from the left sub-tree, so it is the largest value
there, i.e. 5. Using the largest value of the left sub-tree preserves the
binary-search-tree property: every remaining key in the left sub-tree is
smaller than 5, and every key in the right sub-tree is larger.

        5
       / \
      2   22
     / \  / \
    1   4 20 30
         /     \
        17     40

5. Balance factor, BF(T) of a node T in a binary tree is defined as hl-hr


where hl and hr are heights of left and right subtrees of T.
6.
      9
     / \
   12   25
   / \
 63   42

7. 1.  Separate chaining hash table

Hash function h(x) = x mod 10

Insert 4371, h(4371) = 4371 mod 10 = 1

h(1323) = 3, h(6173) = 3, h(4199) = 9,

h(4344) = 4, h(9679) = 9, h(1989) = 9

0
1 4371
2
3 1323 6173
4 4344
5
6
7
8
9 4199 9679 1989

2. Linear Probing:

Linear function being used is F(i) = i


Empty After After After After After After After
table 4371 1323 6173 4199 4344 9679 1989
0 9679 9679
1 4371 4371 4371 4371 4371 4371 4371
2 1989
3 1323 1323 1323 1323 1323 1323
4 6173 6173 6173 6173 6173
5 4344 4344 4344
6
7
8
9 4199 4199 4199 4199

8. An equivalence relation is a relation R that satisfies three properties:

(i) Reflexive, aRa, for all a ∈ S


(ii) Symmetric, aRb, if and only if bRa
(iii) Transitive, aRb and bRc implies that aRc.

9. Minimum spanning tree of an undirected graph G is a tree formed from


graph edges that connects all the vertices of G at lowest total cost.

10. An Euler circuit exists if it is possible to travel over every edge of a
graph exactly once and return to the starting vertex.

Part B
11. (a) (i)  Refer Q. No. 12(a)(ii) Nov/Dec 2009.

(ii) Taking the expression to be ((a + b) + c*(d + e) + f)*(g + h)
(the "c*d(d + e)" in the printed question appears to be a misprint),
the conversion proceeds as follows:

Symbols read    Stack          Output
( ( a           ( (            a
+               ( ( +          a
b )             (              a b +
+               ( +            a b +
c               ( +            a b + c
*               ( + *          a b + c
( d             ( + * (        a b + c d
+               ( + * ( +      a b + c d
e )             ( + *          a b + c d e +
+               ( +            a b + c d e + * +
f )             (empty)        a b + c d e + * + f +
*               *              a b + c d e + * + f +
( g             * (            a b + c d e + * + f + g
+               * ( +          a b + c d e + * + f + g
h )             *              a b + c d e + * + f + g h +
(end)           (empty)        a b + c d e + * + f + g h + *

Postfix expression: a b + c d e + * + f + g h + *
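The conversion can be carried out by a short C routine (a minimal sketch for single-letter operands and the operators + - * / with parentheses; the names and the fixed stack bound are illustrative):

```c
/* Precedence of an operator; '(' on the stack gets the lowest value. */
static int prec(char op)
{
    return (op == '+' || op == '-') ? 1
         : (op == '*' || op == '/') ? 2 : 0;
}

/* Convert an infix string of single-letter operands to postfix. */
void infix_to_postfix(const char *in, char *out)
{
    char stack[64];
    int top = -1, k = 0;
    for (; *in; in++)
    {
        char c = *in;
        if (c >= 'a' && c <= 'z')
            out[k++] = c;                    /* operands go straight out */
        else if (c == '(')
            stack[++top] = c;
        else if (c == ')')
        {
            while (top >= 0 && stack[top] != '(')
                out[k++] = stack[top--];     /* pop until '(' */
            top--;                           /* discard '(' */
        }
        else                                 /* operator */
        {
            while (top >= 0 && prec(stack[top]) >= prec(c))
                out[k++] = stack[top--];
            stack[++top] = c;
        }
    }
    while (top >= 0)                         /* flush remaining operators */
        out[k++] = stack[top--];
    out[k] = '\0';
}
```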

11 (b)
#include <stdio.h>
#include <stdlib.h>

typedef struct node
{
  int data;                /* seat number written on the ticket */
  struct node *next;
} node;

node *reserve(node *head, int x);
node *cancel(node *head);

int main()
{
  int op, x;
  node *head = NULL;
  do
  {
    printf("\n1)reserve 2)cancel 3)exit");
    printf("\nEnter your choice: ");
    scanf("%d", &op);
    switch (op)
    {
    case 1:
      printf("\nEnter seat number: ");
      scanf("%d", &x);
      head = reserve(head, x);
      break;
    case 2:
      head = cancel(head);
      break;
    }
  } while (op != 3);
  return 0;
}

/* Reservation: insert a new ticket node at the front of the list */
node *reserve(node *head, int x)
{
  node *p = (node *)malloc(sizeof(node));
  p->data = x;
  p->next = head;
  head = p;
  return head;
}

/* Cancellation: delete the ticket node at the front of the list */
node *cancel(node *head)
{
  node *p;
  if (head == NULL)
  {
    printf("not reserved");
    return head;
  }
  p = head;
  head = head->next;
  free(p);
  return head;
}

12. (a) Binary tree traversal:

Traversing a tree is the process of visiting every node of the tree exactly
once. The different types of tree traversals are

•• Preorder traversal
•• Inorder traversal
•• Postorder traversal

Preorder traversal:
•• Visit the root node first.
•• Next, traverse the left sub-tree in preorder.
•• At last, traverse the right sub-tree in preorder.

Example: consider the tree with root A, whose left child is B and right
child is C; B has a right child D with children E and F, and C has
children G and H.

Preorder on the tree = A + (preorder on B's sub-tree) + (preorder on C's sub-tree)
⇒ A + (B + (D + E + F)) + (C + G + H)
⇒ A B D E F C G H
Inorder traversal
•• First, traverse the left sub-tree in inorder.
•• Next, visit the root node.
•• At last, traverse the right sub-tree in inorder.

For the same tree:

Inorder on the tree = (inorder on B's sub-tree) + A + (inorder on C's sub-tree)
⇒ (B + (E + D + F)) + A + (G + C + H)
⇒ B E D F A G C H

recursive algorithms:
void preorder (node *T)
{
  if(T != NULL)
  {
  printf("\n%d",T→data);
  preorder (T→left);
  preorder (T→right);
  }
}
void inorder (node *T)
{
  if(T != NULL)
  {
   inorder(T→left);
   printf("\n%d",T→data);
   inorder (T→right);
  }
}

12. (b) Algorithm for inserting and deleting a node in a BST.

Insertion operation in Binary Search Tree

SearchTree
Insert(ElementType X, SearchTree T)
{
  if (T == NULL)
   {
    T = malloc (sizeof (struct TreeNode));
    if (T == NULL)
     FatalError ("Out of Space");
    else
     {
      T → Element = X;
      T → Left = T → Right = NULL;
     }
   }
  else
   if (X < T → Element)
    T → Left = Insert (X, T → Left);
   else
    if (X > T → Element)
     T → Right = Insert (X, T → Right);
  return T;
}

e.g.: 6, 2, 8, 1, 4, 3, 5

To insert X into tree T, proceed down the tree. If X is found, do nothing.
Otherwise insert X at the last spot on the path traversed. After inserting
6, 2, 8, 1, 4 and 3:

      6
     / \
    2   8
   / \
  1   4
     /
    3

To insert 5, we traverse the tree as if searching for it. At the node with
key 4, we go right and insert it at the correct spot:

      6
     / \
    2   8
   / \
  1   4
     / \
    3   5

Deletion operation in a Binary Search Tree

SearchTree Delete (ElementType X, SearchTree T)

{
  Position TmpCell;
  if (T == NULL)
   Error ("Element not found");
  else
   if (X < T→Element)
    T→Left = Delete (X, T→Left);
  else
   if (X > T→Element)
    T→Right = Delete (X, T→Right);
  else
   if (T→Left && T→Right)   /* two children */
  {
   TmpCell = FindMin(T→Right);
   T→Element = TmpCell→Element;
   T→Right = Delete (T→Element, T→Right);
  }
else                         /* one or zero children */
  {
   TmpCell = T;
   if (T→Left == NULL)
    T = T→Right;
else
  if (T→Right == NULL)
   T = T→Left;
  Free (TmpCell);
  }
  return T;
}

13. (a) AVL tree with different rotations:

An AVL tree is a binary search tree except that for every node in the tree,
the heights of the left and right sub-trees can differ by at most 1. The
height of an empty tree is defined to be -1.
The different types of rotation are

(i) Single rotation


(ii) Double rotation
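The answer lists the rotation types without showing them; a C sketch of the single rotation with the left child, in the usual AVL formulation (the field and function names are assumed for illustration):

```c
typedef struct AvlNode
{
    int Element;
    struct AvlNode *Left, *Right;
    int Height;
} AvlNode;

static int Height(AvlNode *P) { return P ? P->Height : -1; }
static int Max(int a, int b)  { return a > b ? a : b; }

/* Single rotation with left child: fixes a left-left imbalance at K2
   by making its left child K1 the new sub-tree root. */
AvlNode *SingleRotateWithLeft(AvlNode *K2)
{
    AvlNode *K1 = K2->Left;
    K2->Left = K1->Right;      /* K1's right sub-tree moves under K2 */
    K1->Right = K2;

    K2->Height = Max(Height(K2->Left), Height(K2->Right)) + 1;
    K1->Height = Max(Height(K1->Left), K2->Height) + 1;
    return K1;                 /* new root of this sub-tree */
}
```

The double rotation is built from two single rotations applied in opposite directions.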

Binary heaps:

→ can also be referred to as heaps
→ have two properties:

a) structure property
b) heap order property

Constructing a min heap tree from 5, 2, 6, 7, 1, 3, 8, 9, 4 (level-order
contents after each insertion):

Step 1: 5
Step 2: 2 5
Step 3: 2 5 6
Step 4: 2 5 6 7
Step 5: 1 2 6 7 5
Step 6: 1 2 3 7 5 6
Step 7: 1 2 3 7 5 6 8
Step 8: 1 2 3 7 5 6 8 9
Step 9: 1 2 3 4 5 6 8 9 7

The final heap is

        1
       /  \
      2    3
     / \   / \
    4   5 6   8
   / \
  9   7

Insert 15, 18, 20, 21, 28, 23, 30, 26:

Insert 15, 18: 15 becomes the root, with 18 as its right child.
Insert 20: the root 15 becomes unbalanced; a single rotation makes 18
the root with children 15 and 20.
Insert 21: 21 becomes the right child of 20.
Insert 28: node 20 becomes unbalanced; a single rotation makes 21 the
right child of 18, with children 20 and 28.
Insert 23: 23 becomes the left child of 28 and the root 18 becomes
unbalanced; a rotation makes 21 the root, with 18 (children 15, 20) on
the left and 28 (left child 23) on the right.
Insert 30: 30 becomes the right child of 28.
Insert 26: 26 becomes the right child of 23; no rotation is needed.
The final AVL tree is

        21
       /  \
     18    28
    /  \   / \
  15   20 23  30
            \
             26

13. (b) Insert 15, 17, 6, 19, 11, 10, 13, 20, 8, 14, 12 into a binary heap

Algorithm: Insertion
void insert (int heap[], int x)
{
  int n;
  n = heap[0];          /* heap[0] holds the current size */
  heap[n + 1] = x;
  heap[0] = n + 1;
  adjust(heap, n + 1);  /* percolate the new element up */
}
Delete min:
int delmin(int heap[])
{
  int x, n;
  n = heap[0];
  x = heap[1];
  heap[1] = heap[n];
  heap[0] = n - 1;
  del(heap, 1);         /* percolate the root element down */
  return (x);
}

Insertion:
1.  Insert
15

2.  Insert
15

17

3.  Insert
15

17 6

Heap property not satisfied


4. 
6

15 17

19

5.
6

15 17

19 11

Heap property not satisfied


6.
6

11 15

17 19

7.
6

11 15

17 19 10

Heap property not satisfied

8.
6

10 11

15 17 19

9.
6

10 11

15 17 19 13

Heap property not satisfied

10.
6

10 11

13 15 17 19

11.
6

10 11

13 15 17 19

20 8

Heap property not satisfied.


12.
6

8 10

11 13 15 17

19 20

13.
6

8 10

11 13 15 17

19 20 14

Heap property not satisfied.


14.
6

8 10

11 13 14 15

17 19 20

15.
6

8 10

11 12 13 14

15 17 19 20

14. (a) Open addressing hashing

Open Addressing:

It is also called closed hashing; collisions are resolved without linked
lists, by trying alternative cells until an empty cell is found. Three common

Collision resolution strategies are:

•• Linear Probing
•• Quadratic Probing
•• Double Hashing

Linear Probing:

In Linear Probing, F is a linear function of I, typically F(i) = i. The


following fig shows the result of inserting key {89, 18, 49, 58, 69} into
hash table using the same hash function and collision resolution strategy
F(i) = i.

Empty After 89 After 18 After 49 After 58 After 69


Table
0 49 49 49
1 58 58
2 69
3
4
5
6
7
8 18 18 18 18
9 89 89 89 89 89

The first collision occurs when 49 is inserted, and it is put in the next
available spot, spot 0. The key 58 collides with 18, 89 and then 49, and
so it is put in the next available free spot, spot 1.

Disadvantage: Even if the table is relatively empty, blocks of occupied
cells start forming. This is known as primary clustering.

Quadratic Probing:

It eliminates the primary clustering problem of linear probing. The


choice is F(i) = i2. The following fig shows the example of quadratic
probing.

Empty After 89 After 18 After 49 After 58 After 69


Table
0 49 49 49
1
2 58 58
3 69
4
5
6
7
8 18 18 18 18
9 89 89 89 89 89

When 49 collides with 89, the next cell (i2 = 12 = 1 away) is attempted,
and since it is empty 49 is placed there, in cell 0. Next, 58 collides at
spot 8. The cell one away is tried, but another collision occurs. A vacant
cell is found i2 = 22 = 4 cells away; thus 58 is placed in cell 2.

Disadvantage:

•• There is no guarantee of finding an empty cell once the table gets
more than half full.

The position-finding routine used by insertion with quadratic probing is:

Position
Find (ElementType Key, HashTable H)
{
  Position CurrentPos;
  int CollisionNum;
  CollisionNum = 0;
  CurrentPos = Hash(Key, H → TableSize);
  while (H → TheCells[CurrentPos].Info != Empty &&
    H → TheCells[CurrentPos].Element != Key)
   {
    CurrentPos += 2 * ++CollisionNum - 1;
    if (CurrentPos >= H → TableSize)
     CurrentPos -= H → TableSize;
   }
  return CurrentPos;
}

Double Hashing:

A popular choice is F(i) = i * hash2(X), where hash2 is a second hash
function, for example hash2(X) = R - (X mod R) with R a prime smaller
than the table size (here R = 7).

Empty After 89 After 18 After 49 After 58 After 69


Table
0 69
1
2
3 58 58
4
5
6 49 49 49
7
8 18 18 18 18
9 89 89 89 89 89
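The probe sequence behind this table can be reproduced by a small C helper (assuming hash2(X) = R - (X mod R) with R = 7, which is consistent with the placements of 49 and 69 above):

```c
#define TABLESIZE 10
#define R 7   /* prime smaller than the table size (an assumption) */

/* Location examined on the i-th probe of key x under double hashing,
   with F(i) = i * hash2(x) and hash2(x) = R - (x mod R). */
int probe(int x, int i)
{
    int hash2 = R - (x % R);
    return (x % TABLESIZE + i * hash2) % TABLESIZE;
}
```

For example, 49 first hashes to cell 9 and, on the next probe, lands in cell 6; 69 resolves its collision at 9 by moving to cell 0.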

14. (b) Operations on the disjoint set ADT:

A tree data structure is used to implement each set. The name of a set is
given by the node at the root of its tree. The array implementation of the
forest stores P[i], the parent of the ith element.

Element: 1 2 3 4
P[i]:    0 0 0 0

Disjoint set initialization routine


void initialize (DisjSet S)
{
  int i;
  for (i = NumSets; i > 0; i--)
   S[i] = 0;
}

Find:

SetType Find (ElementType X, DisjSet S)

{
  if (S[X] <= 0)
   return X;
  else
   return Find (S[X], S);
}

Union:
void SetUnion (DisjSet S, SetType R1, SetType R2)
{
  S[R2] = R1;
}
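The routines above can be combined into a self-contained C sketch (the 1-based array and the NumSets bound are assumptions added for illustration):

```c
#define NumSets 8

/* S[i] <= 0 marks a root; otherwise S[i] is the parent of i. */
typedef int DisjSet[NumSets + 1];

void Initialize(DisjSet S)
{
    for (int i = NumSets; i > 0; i--)
        S[i] = 0;                 /* every element starts as its own set */
}

int Find(int X, DisjSet S)
{
    if (S[X] <= 0)
        return X;                 /* X is the root, i.e. the set name */
    return Find(S[X], S);         /* follow parent pointers upward */
}

/* Union: make root R1 the parent of root R2 */
void SetUnion(DisjSet S, int R1, int R2)
{
    S[R2] = R1;
}
```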

15. (a) (i) DFS:

Depth First Search is like the preorder traversal of a tree. The traversal
can start from any vertex Vi; Vi is visited, and then all vertices adjacent
to Vi are traversed recursively.

Algorithm:
n ← number of nodes
i.  Initialize visited[] to false (0):
  for (i = 0; i < n; i++)
   visited[i] = 0;
ii. void DFS (vertex i)  /* DFS starting from i */
{
  visited[i] = 1;
  for each w adjacent to i
   if (!visited[w])
    DFS (w);
}
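A concrete C version of the recursive DFS (the adjacency-matrix representation and the graph size N are assumptions made for the example):

```c
#define N 5   /* number of vertices (assumed) */

static int visited[N];

/* Recursive DFS, like a preorder traversal of a tree: visit v, then
   recurse into each unvisited neighbour. The visiting order is
   appended to out via *count. */
void dfs(int adj[N][N], int v, int out[], int *count)
{
    visited[v] = 1;
    out[(*count)++] = v;
    for (int w = 0; w < N; w++)
        if (adj[v][w] && !visited[w])
            dfs(adj, w, out, count);
}
```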

15. (a) (ii) BFS:

Breadth First Search starts from the given vertex V0. V0 is marked as
visited. All vertices adjacent to V0 are visited next.

Algorithm:
void BFS(int v)
{
  q: a queue type variable;
  initialize q;
  visited[v] = 1;
  add the vertex v to queue q;
  while (q is not empty)
  {
   v ← delete an element from the queue;
   for all vertices w adjacent to v
   {
    if (!visited[w])
    {
     visited[w] = 1;
     add the vertex w to queue q;
    }
   }
  }
}
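A concrete C version of the same BFS, using an array as the queue (the adjacency-matrix representation and the size N are assumptions made for the example):

```c
#define N 5   /* number of vertices (assumed) */

/* BFS from start over an adjacency matrix; records the visiting order
   in out and returns the number of vertices reached. */
int bfs(int adj[N][N], int start, int out[N])
{
    int visited[N] = {0}, queue[N], front = 0, rear = 0, count = 0;

    visited[start] = 1;
    queue[rear++] = start;
    while (front < rear)
    {
        int v = queue[front++];           /* delete from queue */
        out[count++] = v;
        for (int w = 0; w < N; w++)       /* all vertices adjacent to v */
            if (adj[v][w] && !visited[w])
            {
                visited[w] = 1;           /* mark before enqueueing */
                queue[rear++] = w;
            }
    }
    return count;
}
```

Marking a vertex visited at the moment it is enqueued (rather than dequeued) prevents the same vertex from entering the queue twice.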

15. (b) (i)  Prim’s algorithm

Prim’s algorithm is used to compute a minimum spanning tree by making
the tree grow in successive stages. In each stage, one node is picked
as the root, and we add an edge, and thus an associated vertex, to the tree.

At any point in the algorithm, we can see a set of vertices that have
already been included in the tree; the rest of the vertices have not. The
algorithm then finds, at each stage, a new vertex to add to the tree by
choosing the edge (u, v) such that the cost of (u, v) is the smallest among
all edges where u is in the tree and v is not.

Routine for PRIM’S algorithm:

void Prim(Graph G)
{
  MSTTree T;
  Vertex u, v;
  Set of vertices V;
  Set of tree vertices U;
  T = NULL;
  U = {1};
  while (U ≠ V)
  {
   let (u, v) be the lowest cost edge such that u is in U and v is in V - U;
   T = T ∪ {(u, v)};
   U = U ∪ {v};
  }
}
void Prim(Table T)
{
  Vertex v, w;
  for (i = 0; i < NumVertex; i++)
  {
   T[i].Known = False;
   T[i].Dist = Infinity;
   T[i].Path = 0;
  }
  for (; ;)
  {
   let v be the unknown vertex with the smallest distance;
   T[v].Dist = 0;
   T[v].Known = True;
   for each w adjacent to v
    if (!T[w].Known)
    {
     T[w].Dist = Min(T[w].Dist, Cvw);
     T[w].Path = v;
    }
  }
}

Prim’s algorithm works on undirected graphs, and so we have to put every
edge in two adjacency lists. The running time is O(|V|2) without heaps,
which is optimal for dense graphs, and O(|E| log |V|) using binary heaps.

15. (b) (ii)  Topological ordering of the given graph:

Adjacency Matrix

Vertex A B C D E F G
A 0 0 1 0 0 0 0
B 0 0 0 1 0 1 0
C 0 0 0 0 0 0 0
D 0 0 1 0 0 0 0
E 0 0 1 0 0 0 0
F 0 0 0 0 0 0 0
G 1 0 0 0 0 1 0

Indegree[A] = 1
Indegree[B] = 0
Indegree[C] = 3
Indegree[D] = 1
Indegree[E] = 0
Indegree[F] = 2
Indegree[G] = 0

Vertex Indegree before dequeue


1 2 3 4 5 6 7
A 1 1 1 0 0 0 0
B 0 0 0 0 0 0 0
C 3 3 2 2 1 0 0
D 1 0 0 0 0 0 0
E 0 0 0 0 0 0 0
F 2 1 1 0 0 0 0
G 0 0 0 0 0 0 0

Enqueue B,E,G E,G,D G,D D,A,F A,F F,C


Dequeue B E G D A F C

Topological order of the given graph is B, E, G, D, A, F, C


B.E./B.Tech. Degree Examination, NOV/DEC 2010

Third Semester
Computer Science and Engineering
(Common to Information Technology)

DATA STRUCTURES
(Regulation 2008)

Time: Three hours      Maximum: 100 marks

Answer ALL Questions

Part A (10 × 2 = 20 marks)


1. Define ADT.

2. Clearly distinguish between linked lists and arrays. Mention their


relative advantages and disadvantages.

3. What is meant by depth and height of a tree?

4. Discuss the application of trees.

5. What are the important factors to be considered in designing the hash


function?

6. What is a disjoint set? Define the ADT for a disjoint set.

7. What is Euler circuit?

8. What are the two ways of representing a graph? Give examples.

9. Define NP-complete problems.

10. What is meant by backtracking?



Part B (5 × 16 = 80 marks)
11. (a) (i) Derive an ADT to perform insertion and deletion in a singly
linked list. (8)
(ii) Explain cursor implementation of linked lists. Write the essential
operations. (8)
Or
(b) (i) Write an ADT to implement stack of size N using an array. The
elements in the stack are to be integers. The operations to be
supported are PUSH, POP and DISPLAY. Take into account the
exceptions of stack overflow and stack underflow. (8)
(ii) A circular queue has a size of 5 and has 3 elements 10, 20 and
40 where F = 2 and R = 4. After inserting 50 and 60, what is the
value of F and R. Trying to insert 30 at this stage what happens?
Delete 2 elements from the queue and insert 70, 80 & 90. Show
the sequence of steps with necessary diagrams with the value of
F & R. (8)
12. (a) (i) Write an ADT to construct an AVL tree. (8)
(ii) Suppose the following sequences list nodes of a binary tree T in
preorder and inorder, respectively :
Preorder : A, B, D, C, E, G, F, H, J
Inorder : D, B, A, E, G, C, H, F, J
Draw the diagram of the tree. (8)
Or
(b) (i) Write an ADT for performing insert and delete operations in a
Binary Search Tree. (8)
(ii) Describe in detail the binary heaps. Construct a min heap tree
for the following :
5, 2, 6,7, 1, 3, 8, 9, 4 (8)
13. (a) (i) Formulate an ADT to implement separate chaining hashing
scheme. (8)
(ii) Show the result of inserting the keys 2, 3, 5, 7, 11, 13, 15, 6,
4 into an initially empty extendible hashing data structure with
M = 3. (8)

Or
(b) (i) Formulate an ADT to perform the Union and Find operations
of disjoint sets. (8)
(ii) Describe about Union-by-rank and Find with path compression
with code. (8)
14. (a) (i) Write routines to find shortest path using Dijkstra’s algorithm.
 (8)
(ii) Find all articulation points in the below graph. Show the depth
first spanning tree and the values of DFN and Low for each
vertex. (8)
Or
(b) (i) Write the pseudo code to find a minimum spanning tree using
Kruskal’s algorithm. (8)
(ii) Find the topological ordering of the below graph. (8)
15. (a) (i) Discuss the running time of Divide-and-Conquer merge sort
algorithm. (8)
(ii) Explain the concept of greedy algorithm with Huffman codes
example. (8)
Or
(b) Explain the Dynamic Programming with an example. (16)
Solutions
Part A
1. An ADT is a set of operations. Abstract data types are mathematical
abstractions; objects such as lists, sets and graphs, along with their
operations, can be viewed as abstract data types.

2. Array:

A collection of similar data items under a common data type.

Advantage:

•• Stored in consecutive memory locations.

Disadvantage:

•• Insertion and deletion are expensive


•• Wastage of memory space

Linkedlist:

Consist of series of nodes. Each node contains the element and a pointer
to its successor node.

Advantage:

•• Insertion and deletion are easier.


•• Finding the predecessor and successor of a node is easier

Disadvantage:
•• More memory space is required.

3. The depth of a node n is the length of the unique path from the root to n.
The height of a node n is the length of the longest path from n to a leaf.

4.
i) Huffman coding.
ii) Scheduling problem.
iii) Approximate Bin Packing.

5.
•• A hash function has to distribute hash indexes over the range of
array.
•• When an element is inserted, it hashes to the same value as an
already inserted element, then the collision occurs. So hash should
effectively handle collision.

6. A disjoint set is a data structure that keeps track of a set of elements


partitioned into a number of disjoint subsets. Disjoint set ADT is a
collection of distinguishable objects.

7. An Euler circuit exists if it is possible to travel over every edge of a
graph exactly once and return to the starting vertex.

8.
•• adjacency matrix representation i.e., to represent a graph by using a
two-dimensional array.
•• adjacency list representation i.e., for each vertex, we keep a list of
all adjacent vertices.

9. An NP-complete problem has the property that any problem in NP can


be polynomially reduced to it.

e.g., Travelling Salesman Problem.

10. Backtracking is used to find solutions to a computational problem by
incrementally building candidate solutions and abandoning a candidate
(backtracking) as soon as it is determined that it cannot be extended to a
valid solution.

e.g., Eight queens puzzle.

Part B
11. (a) (i)  The insertion routine for a singly linked list is as follows:

void Insert (ElementType X, List L, Position P)
{
  Position TmpCell;
  TmpCell = malloc(sizeof(struct Node));
  if (TmpCell == NULL)
    FatalError("Out of Space");
  TmpCell→Element = X;
  TmpCell→Next = P→Next;
  P→Next = TmpCell;
}

The deletion routine for a singly linked list is as follows:

void Delete (ElementType X, List L)
{
  Position P, TmpCell;
  P = FindPrevious(X, L);
  if (!IsLast(P, L))
  {
    TmpCell = P→Next;
    P→Next = TmpCell→Next;
    free(TmpCell);
  }
}

11. (a) (ii) Cursor implementation of linked List:

If the linked lists are required and pointers are not available, then
cursor implementation to linked lists is used. The declaration for cursor
implementation of linked list is as follows:

typedef int PtrToNode;
typedef PtrToNode List;
typedef PtrToNode Position;
void InitializeCursorSpace (void);
List MakeEmpty (List L);
int IsEmpty (const List L);
int IsLast (const Position P, const List L);
Position Find (ElementType X, const List L);
void Delete (ElementType X, List L);
Position FindPrevious (ElementType X, const List L);
void Insert (ElementType X, List L, Position P);
void DeleteList (List L);
Position Header (const List L);
Position First (const List L);
ElementType Retrieve (const Position P);
struct Node
{
  ElementType Element;
  Position Next;
};
struct Node CursorSpace [SpaceSize];

11. (b) (i) Array implementation of stack

The stack declaration for array implementation is as follows:

struct StackRecord;
typedef struct StackRecord *Stack;
int IsEmpty (Stack S);
int IsFull (Stack S);
Stack CreateStack (int MaxElements);
void DisposeStack (Stack S);
void MakeEmpty (Stack S);
void Push (ElementType X, Stack S);
ElementType Top (Stack S);
void Pop (Stack S);
ElementType TopAndPop (Stack S);
struct StackRecord
{
  int Capacity;
  int TopOfStack;
  ElementType *Array;
};

The routine to push onto a stack is as follows:

void Push (ElementType X, Stack S)
{
  if (IsFull(S))
    Error("Full Stack");
  else
    S→Array[++S→TopOfStack] = X;
}

The routine to pop from a stack is

void Pop (Stack S)
{
  if (IsEmpty(S))
    Error("Empty Stack");
  else
    S→TopOfStack--;
}

The routine to display top element of stack is

ElementType Top (Stack S)
{
  if (!IsEmpty(S))
    return S→Array[S→TopOfStack];
  Error("Empty Stack");
  return 0;
}

11. (b) (ii)  Circular queue of size S = 5, holding the three elements 10, 20, 40 with F = 2 and R = 4.

Initial state: Q[2] = 10, Q[3] = 20, Q[4] = 40; F indexes the front element (10) and R indexes the rear element (40).

Step 1 → Insert 50
R = (R + 1) % S = 5 % 5 = 0, so Q[0] = 50.

Step 2 → Insert 60
R = (R + 1) % S = (0 + 1) % 5 = 1, so Q[1] = 60.
Now F = 2, R = 1, and the queue holds all five elements 10, 20, 40, 50, 60.

Step 3 → Insert 30
R would become (1 + 1) % 5 = 2, i.e., R = F.
The queue overflows, so 30 cannot be inserted.

Step 4 → Delete the first element X
X = Q[F] = Q[2] = 10
F = (F + 1) % S = (2 + 1) % 5 = 3

Step 5 → Delete the next element
X = Q[F] = Q[3] = 20
F = (F + 1) % S = (3 + 1) % 5 = 4

Step 6 → Insert 70
R = (R + 1) % S = (1 + 1) % 5 = 2, so 70 is inserted at Q[2].

Step 7 → Insert 80
R = (R + 1) % S = (2 + 1) % 5 = 3, so 80 is inserted at Q[3].

Step 8 → Insert 90
R would become (3 + 1) % 5 = 4, i.e., R = F.
The circular queue overflows again, so 90 cannot be inserted. The final contents are 40, 50, 60, 70, 80 with F = 4 and R = 3.

12. (a) (i)  ADT to construct an AVL Tree

An AVL tree is a binary search tree with a balance condition: the heights
of the left and right subtrees of every node can differ by at most 1.

The routine to construct an AVL tree is as follows:

AvlTree Insert (ElementType X, AvlTree T)
{
  if (T == NULL)
  {
    T = malloc(sizeof(struct AvlNode));
    if (T == NULL)
      FatalError("Out of Space");
    else
    {
      T→Element = X;
      T→Height = 0;
      T→Left = T→Right = NULL;
    }
  }
  else if (X < T→Element)
  {
    T→Left = Insert(X, T→Left);
    if (Height(T→Left) - Height(T→Right) == 2)
    {
      if (X < T→Left→Element)
        T = SingleRotateWithLeft(T);
      else
        T = DoubleRotateWithLeft(T);
    }
  }
  else if (X > T→Element)
  {
    T→Right = Insert(X, T→Right);
    if (Height(T→Right) - Height(T→Left) == 2)
    {
      if (X > T→Right→Element)
        T = SingleRotateWithRight(T);
      else
        T = DoubleRotateWithRight(T);
    }
  }
  T→Height = Max(Height(T→Left), Height(T→Right)) + 1;
  return T;
}

12. (a) (ii)

Preorder: A, B, D, C, E, G, F, H, J
Inorder: D, B, A, E, G, C, H, F, J

The binary tree with the given traversals is:

          A
        /   \
       B     C
      /     / \
     D     E   F
            \ / \
            G H  J

12. (b) (i)  Insertion operation in Binary Search Tree

Algorithm to insert an element in a binary search tree:

SearchTree Insert (ElementType X, SearchTree T)
{
  if (T == NULL)
  {
    T = malloc(sizeof(struct TreeNode));
    if (T == NULL)
      FatalError("Out of Space");
    else
    {
      T → Element = X;
      T → Left = T → Right = NULL;
    }
  }
  else if (X < T → Element)
    T → Left = Insert(X, T → Left);
  else if (X > T → Element)
    T → Right = Insert(X, T → Right);
  return T;
}

e.g.: 6, 2, 8, 1, 4, 3, 5

To insert X into tree T, proceed down the tree. If X is found, do nothing;
otherwise insert X at the last spot on the path traversed.

After inserting 6, 2, 8, 1 and 4:

      6
     / \
    2   8
   / \
  1   4

To insert 5, we traverse the tree as though searching for it. At the node
with key 4, we go right and insert 5 there, as that is its correct spot;
3 is similarly placed as the left child of 4:

      6
     / \
    2   8
   / \
  1   4
     / \
    3   5

Deletion operation in a Binary search Tree

SearchTree Delete (ElementType X, SearchTree T)
{
  Position TmpCell;
  if (T == NULL)
    Error("Element not found");
  else if (X < T→Element)
    T→Left = Delete(X, T→Left);
  else if (X > T→Element)
    T→Right = Delete(X, T→Right);
  else if (T→Left && T→Right)     /* two children */
  {
    TmpCell = FindMin(T→Right);
    T→Element = TmpCell→Element;
    T→Right = Delete(T→Element, T→Right);
  }
  else                            /* zero or one child */
  {
    TmpCell = T;
    if (T→Left == NULL)
      T = T→Right;
    else if (T→Right == NULL)
      T = T→Left;
    free(TmpCell);
  }
  return T;
}

12. (b) (ii)  Binary heaps:

→ are also referred to simply as heaps

→ have two properties:

a) the structure property
b) the heap-order property

Constructing a min heap from 5, 2, 6, 7, 1, 3, 8, 9, 4 (array contents after each insertion, percolating up as needed):

Step 1: insert 5 → 5
Step 2: insert 2 → 2 5
Step 3: insert 6 → 2 5 6
Step 4: insert 7 → 2 5 6 7
Step 5: insert 1 → 1 2 6 7 5
Step 6: insert 3 → 1 2 3 7 5 6
Step 7: insert 8 → 1 2 3 7 5 6 8
Step 8: insert 9 → 1 2 3 7 5 6 8 9
Step 9: insert 4 → 1 2 3 4 5 6 8 9 7

The final min heap as a tree:

        1
       / \
      2   3
     / \ / \
    4  5 6  8
   / \
  9   7

13. (a) (i)  ADT for separate chaining hashing scheme:

HashTable InitializeTable (int TableSize)
{
  HashTable H;
  int i;
  if (TableSize < MinTableSize)
  {
    Error("Table size too small");
    return NULL;
  }
  H = malloc(sizeof(struct HashTbl));
  if (H == NULL)
    Error("Out of Space");
  H→TableSize = NextPrime(TableSize);
  H→TheLists = malloc(sizeof(List) * H→TableSize);
  if (H→TheLists == NULL)
    Error("Out of Space");
  for (i = 0; i < H→TableSize; i++)
  {
    H→TheLists[i] = malloc(sizeof(struct ListNode));
    if (H→TheLists[i] == NULL)
      Error("Out of Space");
    else
      H→TheLists[i]→Next = NULL;
  }
  return H;
}

13. (a) (ii)  Consider the keys as 4-bit values.

1. Insert 2 (0010): the directory uses 2 bits (00, 01, 10, 11); 0010 goes into the 00 leaf.

2. Insert 3 (0011): 0011 joins 0010 in the 00 leaf.

3. Insert 5 (0101): 0101 goes into the 01 leaf.

4. Insert 7 (0111): 0111 joins 0101 in the 01 leaf.

Inserting 11 (1011), 13 (1101), 15 (1111), 6 (0110) and 4 (0100) makes the 01 leaf overflow, so it splits and the directory doubles to 3 bits:

000/001   010    011    100/101   110/111
0010      0100   0110   1011      1101
0011      0101   0111             1111

13. (b) (i)  Union and find operation of a disjoint set

Routine for Union operation

void SetUnion (DisjSet S, SetType Root1, SetType Root2)
{
  S[Root2] = Root1;
}

Routine for Find operation

SetType Find (ElementType X, DisjSet S)
{
  if (S[X] <= 0)
    return X;
  else
    return Find(S[X], S);
}

13. (b) (ii)  Union-by-Rank:

Used to keep track of the height (rank), instead of the size, of each tree,
and to perform unions by making the shallower tree a subtree of the deeper
tree.

void SetUnion (DisjSet S, SetType Root1, SetType Root2)
{
  if (S[Root2] < S[Root1])      /* Root2 is deeper */
    S[Root1] = Root2;
  else
  {
    if (S[Root1] == S[Root2])   /* same height: the new root grows */
      S[Root1]--;
    S[Root2] = Root1;
  }
}

Path Compression:

Path compression is performed during a Find operation: every node on the
search path has its parent pointer changed to point directly at the root.

SetType Find (ElementType X, DisjSet S)
{
  if (S[X] <= 0)
    return X;
  else
    return S[X] = Find(S[X], S);
}

14. (a) (i)  Routine to find the shortest path using Dijkstra's algorithm

void Dijkstra (Table T)
{
  Vertex V, W;
  for (;;)
  {
    V = smallest unknown distance vertex;
    if (V == NotAVertex)
      break;
    T[V].Known = True;
    for each W adjacent to V
      if (!T[W].Known)
        if (T[V].Dist + Cvw < T[W].Dist)
        {
          Decrease(T[W].Dist to T[V].Dist + Cvw);
          T[W].Path = V;
        }
  }
}

14. (a) (ii)  A connected undirected graph is biconnected if there are no


vertices whose removal disconnects the rest of the graph. If the graph is
not biconnected, the vertices whose removal would disconnect the graph
are known as articulation points.

The given graph is not biconnected: C and D are articulation points.
The removal of C would disconnect G, and the removal of D would
disconnect E and F from the rest of the graph.

Depth-first search provides a linear-time algorithm to find all articulation
points in a connected graph. First, starting at any vertex, we perform a
depth-first search and number the nodes as they are visited. For each
vertex v, we call this preorder number Num(v). Then, for every vertex v
in the depth-first search spanning tree, we compute the lowest-numbered
vertex, called Low(v), that is reachable from v by taking zero or more tree
edges and possibly a back edge.

The depth-first spanning tree for the given graph, with Num/Low values for
each vertex, is:

A: 1/1   B: 2/1   C: 3/1   D: 4/1   E: 5/4   F: 6/4   G: 7/7

14. (b) (i)  Kruskal's algorithm:

void Kruskal (Graph G)
{
  int EdgesAccepted;
  DisjSet S;
  PriorityQueue H;
  Vertex U, V;
  SetType USet, VSet;
  Edge E;
  Initialize(S);
  ReadGraphIntoHeapArray(G, H);
  BuildHeap(H);
  EdgesAccepted = 0;
  while (EdgesAccepted < NumVertex - 1)
  {
    E = DeleteMin(H);     /* E = (U, V) */
    USet = Find(U, S);
    VSet = Find(V, S);
    if (USet != VSet)
    {
      EdgesAccepted++;
      SetUnion(S, USet, VSet);
    }
  }
}

Minimum spanning tree for the given graph

Edge Weight Action


(V1, V4) 1 Accepted
(V6, V7) 1 Accepted
(V1, V2) 2 Accepted
(V3, V4) 2 Accepted
(V2, V4) 3 Rejected
(V1, V3) 4 Rejected
(V4, V7) 4 Accepted
(V3, V6) 5 Rejected
(V5, V7) 6 Accepted

V1 V2 V1 V2 V1 V2
1 1

V3 V4 V5 V3 V4 V5 V3 V4 V5

1
V6 V7 V6 V7 V6 V7

2 2 2
V1 V2 V1 V2 V1 V2

1 1 1
V3 V4 V5 V3 V4 V5 V3 V4 V5
2 2 4

V6 V7 V6 V7 V6 V7
1 1 1

2
V1 V2

1
V3 V4 V5
2
4 6
V6 V7
1

14. (b) (ii)  Topological ordering of the given graph:

Adjacency Matrix

Vertex A B C D E F G
A 0 0 1 0 0 0 0
B 0 0 0 1 0 1 0
C 0 0 0 0 0 0 0
D 0 0 1 0 0 0 0
E 0 0 1 0 0 0 0
F 0 0 0 0 0 0 0
G 1 0 0 0 0 1 0

Indegree[A] = 1
Indegree[B] = 0
Indegree[C] = 3

Indegree[D] = 1
Indegree[E] = 0
Indegree[F] = 2
Indegree[G] = 0

Vertex Indegree before dequeue


1 2 3 4 5 6 7
A 1 1 1 0 0 0 0
B 0 0 0 0 0 0 0
C 3 3 2 2 1 0 0
D 1 0 0 0 0 0 0
E 0 0 0 0 0 0 0
F 2 1 1 0 0 0 0
G 0 0 0 0 0 0 0

Enqueue B,E,G E,G,D G,D D,A,F A,F F,C


Dequeue B E G D A F C

Topological order of the given graph is B, E, G, D, A, F, C

15. (a) (i) Analysis of Merge Sort:

In merge sort, assume N is a power of 2, so that we always split into even
halves. For N = 1, the time to merge sort is constant, which we denote by 1.
Otherwise, the time to merge sort N numbers equals the time to do two
recursive merge sorts of size N/2, plus the time to merge.

i.e., T(1) = 1

T(N) = 2T(N/2) + N

This is a standard recurrence relation which can be solved in many ways.
The first idea is to divide the recurrence through by N:

T(N)/N = T(N/2)/(N/2) + 1

This equation is valid for any N that is a power of 2, so we also have

T(N/2)/(N/2) = T(N/4)/(N/4) + 1

T(N/4)/(N/4) = T(N/8)/(N/8) + 1

...

T(2)/2 = T(1)/1 + 1

Now add up all the equations; everything telescopes, and the final result is

T(N)/N = T(1)/1 + log N

Multiplying through by N,

T(N) = N log N + N = O(N log N)

Therefore merge sort's running time is O(N log N).

15. (a) (ii) Application of Greedy Algorithm


(i) Scheduling problem
(ii) Huffman coding

Scheduling problem:

As an example, consider four jobs with their running time.


Job Time
J1 15
J2 8
J3 3
J4 10

One possible schedule is

j1 j2 j3 j4

0 15 23 26 36

Based on the shortest job first, we can schedule as:

j3 j2 j4 j1

0 3 11 21 36

We can show that shortest-job-first always yields an optimal schedule. Let
the jobs in the schedule be ji1, ji2, …, jiN. The first job finishes in time
ti1, the second job finishes after ti1 + ti2, and the third job finishes
after ti1 + ti2 + ti3.

The total cost C of the schedule is

C = Σ (k = 1 to N) (N − k + 1) · tik        (1)

  = (N + 1) Σ (k = 1 to N) tik − Σ (k = 1 to N) k · tik        (2)

Equation (2) shows that the first sum is independent of the job ordering,
and only the second sum affects the total cost. This result indicates why
operating system schedulers generally give precedence to shorter jobs.

Huffman codes:

It is also known as file compression. Consider a file that contains the
characters a, e, i, s, t, plus blank spaces and newlines. Suppose that the
file has ten a's, 15 e's, 12 i's, 3 s's, 4 t's, 13 blanks and 1 newline.
With a standard fixed-length code, this file requires 174 bits to represent:

Character Code Frequency Total bits

a 000 10 30
e 001 15 45
i 010 12 36
s 011 3 9
t 100 4 12
space 101 13 39
newline 110 1 3
174

So if the file is large, the fixed-length code may require a large number
of bits. To provide a better code and reduce the total number of bits, the
binary code can be represented by a binary tree whose leaves hold the
characters a, e, i, s, t, sp, nl. A character's code is then read off the
path from the root (0 for left, 1 for right); a character reached by going
left, right, right, for example, is encoded as 011. The main question is
how the tree is constructed; Huffman's algorithm does this.

Initial stage of Huffman's algorithm — each character is a single-node tree
weighted by its frequency:

a:10   e:15   i:12   s:3   t:4   sp:13   nl:1

After the first merge, the two lowest-weight trees, s (3) and nl (1), are
combined into T1, of weight 4.

After the second merge, T1 (4) and t (4) are combined into T2, of weight 8.

The merging continues in the same way: T2 and a form T3 (weight 18), i and
sp form T4 (weight 25), and e and T3 form T5 (weight 33). The final merge
combines T5 and T4 into T6, of weight 58, the root of the Huffman tree.

15. (b) Dynamic Programming:

It is a technique for solving problems with overlapping sub problems.


The smaller subproblems are solved once and the results are stored in
the table from which the solution to the original problem is obtained.
Some examples are

•• All pairs shortest path


•• Optimal binary search tree
•• Ordering of matrix multiplication

Ordering of Matrix Multiplication:

Suppose we have four matrices, A, B, C and D of dimensions A = 50 × 10,


B = 10 × 40, C = 40 × 30 and D = 30 × 5. Matrix Multiplication is
associative which means the matrix product ABCD can be parenthesized
and thus evaluated in any order. The five different ways to order the
multiplications is

* (A((BC)D))
* (A(B(CD)))
* ((AB)(CD))
* (((AB)C)D)
* ((A(BC))D)

The calculations show that the best ordering uses roughly one-ninth the
number of multiplications as the worst ordering. We define T(N) to be the
number of ways to order the multiplications of N matrices. Then
T(1) = T(2) = 1, T(3) = 2 and T(4) = 5, and in general

T(N) = Σ (i = 1 to N−1) T(i) · T(N − i)

The routine to find the optimal ordering of matrix multiplications is as
follows:

void OptMatrix(const long C[], int N, TwoDimArray M, TwoDimArray LastChange)
{
  int i, k, Left, Right;
  long ThisM;
  for (Left = 1; Left <= N; Left++)
    M[Left][Left] = 0;
  for (k = 1; k < N; k++)             /* k is Right - Left */
    for (Left = 1; Left <= N - k; Left++)
    {
      Right = Left + k;
      M[Left][Right] = Infinity;
      for (i = Left; i < Right; i++)
      {
        ThisM = M[Left][i] + M[i + 1][Right]
                + C[Left - 1] * C[i] * C[Right];
        if (ThisM < M[Left][Right])
        {
          M[Left][Right] = ThisM;
          LastChange[Left][Right] = i;
        }
      }
    }
}
B.E./B.Tech. Degree Examination, NOV/DEC 2009

Third Semester
Computer Science and Engineering
(Common to Information Technology)

DATA STRUCTURES
(Regulation 2008)

Time: Three hours      Maximum: 100 marks

Answer ALL Questions

Part A (10 × 2 = 20 marks)


1. What is meant by abstract data type (ADT)?

2. What are the postfix and prefix forms of the expression A + B * (C - D)/(P - R)?

3. What is a Binary tree?

4. Define expression tree

5. What are the applications of hash table?

6. What is an equivalence relation?

7. Define indegree and out degree of a graph.

8. What is a minimum spanning tree?

9. Compare backtracking and branch-and-bound.

10. List the various decision problems which are NP-Complete.

Part B (5 × 16 = 80 marks)
11. (a) (i) Write the insertion and deletion procedures for cursor based
linked lists. (8)

(ii) Write the algorithm for the deletion and reverse operations on
doubly linked list. (8)
Or
11. (b) (i) Write an algorithm for Push and Pop operations on Stack using
Linked List. (8)
(ii) Explain the addition and deletion operations performed on a
circular queue with necessary algorithms. (8)
12. (a) (i) Write the algorithm for pre-order and post-order traversals of a
binary tree. (8)
(ii) Explain the algorithm to convert a postfix expression into an
expression tree with an example. (8)
Or
12. (b) (i) Write an algorithm to insert an item into a binary search tree and
trace the algorithm with the items 6, 2, 8, 1, 4, 3, 5. (8)
(ii) Describe the algorithms used to perform single and double
rotation on AVL tree. (8)
13. (a) Discuss the common collision resolution strategies used in closed
hashing system. (16)
Or
13. (b) (i) What is union-by-height? Write the algorithm to implement it.
 (8)
(ii) Explain the path compression with an example. (8)
14. (a) (i) What is topological sort? Write an algorithm to perform
topological sort. (8)
(ii) Write the Dijkstra’s algorithm to find the shortest path. (8)
Or
14. (b) Write the Kruskal’s algorithm and construct a minimum spanning
tree for the following weighted graph. (16)

[Figure: weighted graph on V1–V7 with edges (V1,V2)=2, (V1,V3)=4,
(V1,V4)=1, (V2,V4)=3, (V2,V5)=10, (V3,V4)=2, (V3,V6)=5, (V4,V5)=7,
(V4,V6)=8, (V4,V7)=4, (V5,V7)=6, (V6,V7)=1.]

15. (a) (i) Formulate an algorithm to multiply n-digit integers using divide
and conquer approach. (8)
(ii) Briefly discuss the applications of greedy algorithm. (8)
Or
(b) Find the optimal tour in the following traveling salesperson problem
using dynamic programming: (16)
[Figure: a four-city travelling salesperson instance with weighted directed
edges between cities 1, 2, 3 and 4.]
Solutions
Part A
1. An ADT is a set of operations. Abstract data types are mathematical
abstractions; objects such as lists, sets and graphs, along with their
operations, can be viewed as abstract data types.

2. A + B * (C - D)/(P - R)
Postfix form: A B C D - * P R - / +
Prefix form: + A / * B - C D - P R

3. A binary tree is a tree in which no node can have more than two children

E.g.:
      A
     / \
    B   C
   / \
  D   E

4. An expression tree is a tree with leaves as operands such as constants or


variable names and other nodes contain operators.

Eg.:
+

a b

5. Compilers (symbol tables), graph theory problems, online spelling checkers, etc.

6. An equivalence relation is a relation R that satisfies three properties:

(i) Reflexive, aRa, for all a ∈ S


(ii) Symmetric, aRb, if and only if bRa
(iii) Transitive, aRb and bRc implies that aRc

7. Indegree: In a directed graph, for any node v, the number of edges which
have v as their terminal node is called indegree of node v.

Out degree: In a directed graph, for any node v, the number of edges
which have v as their initial node is called out degree of node v.

8. Minimum spanning tree of an undirected graph G is a tree formed from


graph edges that connects all the vertices of G at lowest total cost.

9.
Back Tracking Branch and Bound
i) State space tree is constructed State space tree is constructed
using depth first search using best-first search
ii) No bounds are associated Bounds are associated with each
with the nodes in state-space and every node in state-space
tree. tree.

10.
•• Node cover
•• Planar Node cover
•• Undirected Hamiltonian cycle
•• Planar Undirected Hamiltonian cycle
•• Minimum edge deletion bipartite subgraph
•• Minimum node deletion bipartite subgraph

Part B
11. (a) (i) Insertion procedures for cursor based linked list

/* Insert (after legal position P) */
/* Header implementation assumed */
/* Parameter L is unused in this implementation */
void Insert (ElementType X, List L, Position P)
{
  Position TmpCell;
  TmpCell = CursorAlloc();
  if (TmpCell == 0)
    FatalError("Out of space!!!");
  CursorSpace[TmpCell].Element = X;
  CursorSpace[TmpCell].Next = CursorSpace[P].Next;
  CursorSpace[P].Next = TmpCell;
}

Deletion procedure for cursor based linked list:

void Delete (ElementType X, List L)
{
  Position P, TmpCell;
  P = FindPrevious(X, L);
  if (!IsLast(P, L))
  {
    TmpCell = CursorSpace[P].Next;
    CursorSpace[P].Next = CursorSpace[TmpCell].Next;
    CursorFree(TmpCell);
  }
}

11. (a) (ii)  Algorithm for deleting an element from the doubly linked list

Function DELETE (TEMPHEAD, KEY)

[This function deletes an element from the doubly linked list and returns
the address of the first element. TEMPHEAD is a pointer to the first
element of the list, and KEY specifies the info field of the node to be
deleted.]

1. [Check for the empty list]
   If TEMPHEAD = NULL
   Then write ("Empty list")
   Return

2. [Deletion of the first node]
   If INFO (TEMPHEAD) = KEY
   Then TEMP = TEMPHEAD
   TEMPHEAD = RPTR (TEMPHEAD)
   LPTR (TEMPHEAD) = NULL
   Free (TEMP)
   Return (TEMPHEAD)

3. [Save the address of the first node]
   SAVE = TEMPHEAD

4. [Search for the desired node]
   Repeat thru step 5 while RPTR (SAVE) != NULL

5. [Check the information field]
   If INFO (RPTR (SAVE)) = KEY
   Then TEMP = RPTR (SAVE)
   RPTR (SAVE) = RPTR (TEMP)
   LPTR (RPTR (SAVE)) = SAVE
   Free (TEMP)
   Else SAVE = RPTR (SAVE)

6. [Finished]
   Return (TEMPHEAD)

11. (b) (i)  push operation:

When an item is added to a stack, it is pushed onto the stack. Given a
stack st and an item I, performing the operation push(st, I) adds the item
I to the top of stack ‘st’. The push operation is applicable to any stack.

Algorithm for push operation

Variables

Size → Total no of elements

tos → Top of the stack.

val → Information which you want to insert in stack.

Stack[] → Array of stack.

Step 1 [Check that the stack is Full]


If tos = size-1 then
(print message)
Stack is full.
Return.

Step 2 [else]
[Increment tos by 1]
tos ← tos + 1

Step 3 [Input the element to stack]


Stack[tos] ← val

Step 4 [Stop]

pop operation:

The pop operation removes the topmost item. After the removal, the new
value of the pointer top becomes the previous value of top, that is
top = top - 1, and the vacated position becomes free space.

Algorithm:

Step 1 [Check that stack is empty]


if tos = -1 then
(print message)
“Stack is Empty”
return

Step 2 [Else]
[Decrement tos by 1]
tos ← tos - 1
return.

Step 3 [Stop]

11. (b) (ii)  Algorithm for addition and deletion operations on a circular
queue.

procedure ADDQ (item, Q, n, front, rear)
  rear ← (rear + 1) mod n
  if front = rear then call QUEUE-FULL
  Q(rear) ← item
end ADDQ

procedure DELETEQ (item, Q, n, front, rear)
  if front = rear then call QUEUE-EMPTY
  front ← (front + 1) mod n
  item ← Q(front)
end DELETEQ

12. (a) (i)  Algorithms for pre-order and post-order traversals of a binary
tree.

Preorder traversal

void Preorder(struct node *ptr)
{
  if (ptr == NULL)
    return;              /* empty (sub)tree */
  visit ptr → info;
  Preorder(ptr → lchild);
  Preorder(ptr → rchild);
}

Postorder traversal

void Postorder(struct node *ptr)
{
  if (ptr == NULL)
    return;
  Postorder(ptr → lchild);
  Postorder(ptr → rchild);
  visit ptr → info;
}

12. (a) (ii)  Algorithm to search for an element in a binary search tree.

Position Find (ElementType X, SearchTree T)
{
  if (T == NULL)
    return NULL;
  if (X < T → Element)
    return Find(X, T → Left);
  else if (X > T → Element)
    return Find(X, T → Right);
  else
    return T;
}

12. (b) (i)  Binary search tree insertion

Refer Q. No. 12 (b) (i) Nov/Dec 2010.



Algorithms to perform single and double rotation on an AVL tree.

Single Rotation

static Position SingleRotateWithLeft (Position K2)
{
  Position K1;
  K1 = K2 → Left;
  K2 → Left = K1 → Right;
  K1 → Right = K2;
  K2 → Height = Max(Height(K2 → Left), Height(K2 → Right)) + 1;
  K1 → Height = Max(Height(K1 → Left), K2 → Height) + 1;
  return K1;
}

Double Rotation

static Position DoubleRotateWithLeft (Position K3)
{
  K3 → Left = SingleRotateWithRight(K3 → Left);
  return SingleRotateWithLeft(K3);
}

12. (b) (ii)  Refer Q. No. 12(a)(i) Nov/Dec 2010.

13. (a)  Refer Q. No. 14(a) Nov/Dec 2010.

13. (b) (i)  Smart Union Algorithms:

Union-by-height

Union-by-height keeps track of the height, instead of the size, of each
tree, and performs unions by making the shallower tree a subtree of the
deeper tree.

Algorithm:
void SetUnion (DisjSet S, SetType Root1, SetType Root2)
{
  if (S[Root2] < S[Root1])      /* Root2 is deeper */
    S[Root1] = Root2;
  else
  {
    if (S[Root1] == S[Root2])   /* same height: update the height */
      S[Root1]--;
    S[Root2] = Root1;
  }
}
(ii)  Union-by-size

The height of the resultant tree representing S1 ∪ S2, the union of two
sets S1 and S2, can be reduced by making the smaller tree a subtree of the
larger tree.

void Union (int S[], int x, int y)
{
  int root1, root2;
  root1 = Find(S, x);
  root2 = Find(S, y);
  if (S[root1] < S[root2])      /* root1's tree is larger */
    S[root2] = root1;
  else
    S[root1] = root2;
}

13. (b) (ii)  Path Compression

Path compression is performed during a Find operation. Suppose the operation is Find(X). The effect of path compression is that every node on the path from X to the root has its parent changed to point directly to the root.

Example:

After Find(15), every node on the path from 15 to the root has its parent changed to the root. (The before/after tree diagrams, spanning nodes 2 through 16, are omitted here.)

The algorithm for path compression is:

SetType Find (ElementType X, DisjSet S)
{
  if (S[X] <= 0)
    return X;
  else
    return S[X] = Find (S[X], S);
}
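The disjoint-set routines above (union-by-height and Find with path compression) combine into one runnable sketch; the array representation, with elements numbered 1..n and non-positive entries marking roots (magnitude = height), follows the usual textbook convention:

```c
#include <assert.h>

#define MAXN 16
static int S[MAXN];

static void MakeSets(int n) {
    for (int i = 1; i <= n; i++)
        S[i] = 0;                   /* every element starts as its own root */
}

static int Find(int X) {
    if (S[X] <= 0)
        return X;                   /* X is a root */
    return S[X] = Find(S[X]);       /* path compression: reparent X to the root */
}

static void SetUnion(int Root1, int Root2) {
    if (S[Root2] < S[Root1])        /* Root2's tree is deeper */
        S[Root1] = Root2;
    else {
        if (S[Root1] == S[Root2])
            S[Root1]--;             /* equal heights: the new root grows by one */
        S[Root2] = Root1;
    }
}
```

After uniting {1,2}, {3,4} and then the two resulting sets, Find(2) and Find(4) return the same root, while an untouched element such as 5 stays in its own set.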

14. (a) (i)  Topological sorting

Topological sort is an ordering of vertices in a directed acyclic graph such that if there is a path from vi to vj, then vj appears after vi in the ordering.

Algorithm:
void TopSort (Graph G)
{
  Queue Q;
  int Counter = 0;
  Vertex V, W;
  Q = CreateQueue(NumVertex);
  MakeEmpty(Q);
  for each vertex V
   if (Indegree[V] == 0)
    Enqueue(V, Q);
  while (!IsEmpty(Q))
  {
   V = Dequeue(Q);
   TopNum[V] = ++Counter;
   for each W adjacent to V
    if (--Indegree[W] == 0)
     Enqueue(W, Q);
  }
  if (Counter != NumVertex)
   Error("Graph has a cycle");
  DisposeQueue(Q);
}
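The same queue-based algorithm can be sketched as compilable C over an adjacency matrix; the graph representation and helper layout here are our own choices:

```c
#include <assert.h>

#define NV 4
static int adj[NV][NV];             /* adj[v][w] = 1 iff there is an edge v -> w */

/* fills TopNum[v] with the topological number of v; returns 0 on a cycle */
static int TopSort(int TopNum[]) {
    int indegree[NV] = {0}, queue[NV], head = 0, tail = 0, counter = 0;
    for (int v = 0; v < NV; v++)
        for (int w = 0; w < NV; w++)
            indegree[w] += adj[v][w];
    for (int v = 0; v < NV; v++)
        if (indegree[v] == 0)
            queue[tail++] = v;      /* enqueue every source vertex */
    while (head < tail) {
        int v = queue[head++];      /* dequeue */
        TopNum[v] = ++counter;      /* assign the next topological number */
        for (int w = 0; w < NV; w++)
            if (adj[v][w] && --indegree[w] == 0)
                queue[tail++] = w;
    }
    return counter == NV;           /* all vertices numbered => acyclic */
}
```

For the diamond 0→1, 0→2, 1→3, 2→3 the sort succeeds and numbers 0 before 1 and 2, which both come before 3.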

14. (a) (ii)  Refer Q. No. 14 (a) (i) Nov/Dec 2009.

14. (b)  Refer Q. No. 14 (b) (i) Nov/Dec 2009.

15. (a) (i)  Algorithm to multiply n-digit integers using the divide and conquer approach.

function mul (X, Y, n: integer): integer;
{X and Y are signed integers ≤ 2^n; n is a power of 2}
  var s: integer;
    m1, m2, m3: integer;
    A, B, C, D: integer;
  begin
    s := sign(X) * sign(Y);
    X := abs(X);
    Y := abs(Y);
    if n = 1 then
      if (X = 1) and (Y = 1) then
        return (s)
      else
        return (0)
    else begin
      A := left n/2 bits of X;
      B := right n/2 bits of X;
      C := left n/2 bits of Y;
      D := right n/2 bits of Y;
      m1 := mul(A, C, n/2);
      m2 := mul(A − B, D − C, n/2);
      m3 := mul(B, D, n/2);
      return (s * (m1 * 2^n + (m1 + m2 + m3) * 2^(n/2) + m3))
    end
  end;
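A compilable sketch of the same scheme, splitting on decimal digits rather than bits (the function name and the splitting base are our choices). The key identity is m1 + m2 + m3 = AD + BC, so only three recursive multiplications are needed:

```c
#include <assert.h>

long mul(long x, long y) {
    if (x < 0) return -mul(-x, y);      /* factor the sign out, as in the */
    if (y < 0) return -mul(x, -y);      /* pseudocode's sign(X)*sign(Y)   */
    if (x < 10 || y < 10)
        return x * y;                   /* base case: a single-digit operand */
    /* n = digit count of the larger operand, p = 10^(n/2) */
    long t = x > y ? x : y;
    int n = 0;
    while (t > 0) { t /= 10; n++; }
    long p = 1;
    for (int i = 0; i < n / 2; i++) p *= 10;
    long a = x / p, b = x % p;          /* x = a*p + b */
    long c = y / p, d = y % p;          /* y = c*p + d */
    long m1 = mul(a, c);
    long m2 = mul(a - b, d - c);        /* may be negative; handled above */
    long m3 = mul(b, d);
    /* x*y = m1*p^2 + (m1 + m2 + m3)*p + m3 */
    return m1 * p * p + (m1 + m2 + m3) * p + m3;
}
```

For example, mul(1234, 5678) recursively computes 12·56, (12−34)(78−56) and 34·78 and recombines them into 7006652.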

15. (a) (ii)  Applications of Greedy Algorithm


(i) Scheduling problem
(ii) Huffman coding

Scheduling problem:

As an example, consider four jobs with their running time.


Job Time
J1 15
J2 8
J3 3
J4 10

One possible schedule is


j1 j2 j3 j4

0 15 23 26 36

Based on the shortest job first, we can schedule as:


j3 j2 j4 j1

0 3 11 21 36

We can show that shortest-job-first yields an optimal schedule. Let the jobs in the schedule be ji1, ji2, …, jiN. The first job finishes in time ti1, the second finishes after ti1 + ti2, and the third finishes after ti1 + ti2 + ti3.

The total cost C of the schedule is

    C = Σ_{k=1}^{N} (N − k + 1) · t_{ik}                           (1)

      = (N + 1) Σ_{k=1}^{N} t_{ik} − Σ_{k=1}^{N} k · t_{ik}        (2)

Eqn (2) shows that the first sum is independent of the job ordering, while the second sum is what the ordering affects: since it is subtracted, the cost is minimized by making it as large as possible, which is done by pairing the large multipliers k with the large running times, i.e. by scheduling shorter jobs first. This result explains why operating-system schedulers generally give precedence to shorter jobs.
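Eqn (1) can be evaluated directly; this small sketch (the helper name is ours) reproduces the two totals implied by the timelines above, 100 for the first schedule and 71 for shortest-job-first:

```c
#include <assert.h>

/* total cost C = sum over k of (N - k + 1) * t_ik, where t[] holds the
   running times in schedule order (0-indexed, so job k is counted n-k times) */
long schedule_cost(const int t[], int n) {
    long c = 0;
    for (int k = 0; k < n; k++)
        c += (long)(n - k) * t[k];
    return c;
}
```

With the running times in first-come order {15, 8, 3, 10} the cost is 4·15 + 3·8 + 2·3 + 1·10 = 100, while the shortest-job-first order {3, 8, 10, 15} gives 12 + 24 + 20 + 15 = 71.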

Huffman codes:

It is also known as file compression. Consider a file that contains the characters a, e, i, s, t, plus blank spaces and newlines. Suppose the file has ten a's, 15 e's, 12 i's, 3 s's, 4 t's, 13 blanks and 1 newline. With a fixed 3-bit code, the file requires 174 bits to represent:

Character Code Frequency Total bits

a 000 10 30
e 001 15 45
i 010 12 36
s 011 3 9
t 100 4 12
space 101 13 39
newline 110 1 3
174

So if the file is large, encoding it this way may require a large number of bits. To provide a better code and reduce the total number of bits, the code can be represented by a binary tree whose leaves are the characters a, e, i, s, t, sp and nl.

The tree has data only in the leaves; a character's code is the path from the root to its leaf (for example, s is encoded as 011). The main question is how such a tree is constructed, and the algorithm that does so is Huffman's algorithm.

Initial stage of Huffman's algorithm: the forest consists of seven single-node trees with weights a = 10, e = 15, i = 12, s = 3, t = 4, sp = 13, nl = 1.

After the first merge: the two lowest-weight trees, s (3) and nl (1), are combined into a new tree T1 of weight 4.

After the second merge: T1 (4) and t (4) are combined into T2 of weight 8; the remaining roots are a (10), e (15), i (12), sp (13) and T2 (8).

Huffman's algorithm after the final merge: the forest has been reduced to a single tree T6 of total weight 58 (= 10 + 15 + 12 + 3 + 4 + 13 + 1), whose leaves are the seven characters.
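The total number of bits in the resulting code equals the sum of the weights created at the merges, since each merge pushes its leaves one level deeper. A small O(n²) sketch (the helper name is ours) reproduces this for the file above:

```c
#include <assert.h>

/* repeatedly merge the two smallest weights; the running sum of the
   merged weights is the total encoded length of the Huffman code */
long huffman_cost(long w[], int n) {
    long cost = 0;
    while (n > 1) {
        /* find the indices a, b of the two smallest weights */
        int a = (w[0] <= w[1]) ? 0 : 1;
        int b = 1 - a;
        for (int i = 2; i < n; i++) {
            if (w[i] < w[a])      { b = a; a = i; }
            else if (w[i] < w[b]) { b = i; }
        }
        long merged = w[a] + w[b];
        cost += merged;
        w[a] = merged;          /* slot a now holds the merged tree */
        w[b] = w[n - 1];        /* remove slot b by swapping in the last */
        n--;
    }
    return cost;
}
```

For the weights {10, 15, 12, 3, 4, 13, 1}, the merge weights are 4, 8, 18, 25, 33 and 58, so the optimal code uses 146 bits, an improvement over the 174 bits of the fixed 3-bit code.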

15. (b) The adjacency matrix for the given travelling salesman problem is

         1    2    3    4
    1 [  0   10   15   20 ]
    2 [  5    0    9   10 ]
    3 [  6   13    0   12 ]
    4 [  8    8    9    0 ]
To calculate the optimal tour length, start from the empty set Ø:
g(2, Ø) = C21 = 5,
g(3, Ø) = C31 = 6,
g(4, Ø) = C41 = 8

From the eqn g(i, S) = min{Cij + g(j, S − {j})}, j ∈ S, we can obtain:

g(2, {3}) = C23 + g(3, Ø) = 9 + 6 = 15
∴ g(2, {4}) = 18, g(3, {2}) = 18
g(3, {4}) = 20, g(4, {2}) = 13
g(4, {3}) = 15

Next, we compute g(i, S) with |S| = 2, i ≠ 1, 1 ∉ S and i ∉ S:


g(2, {3, 4}) = min{C23 + g(3, {4}), C24 + g(4, {3})} = 25
g(3, {2, 4}) = min{C32 + g(2, {4}), C34 + g(4, {2})} = 25
g(4, {2, 3}) = min{C42 + g(2, {3}), C43 + g(3, {2})} = 23

Finally, from the eqn

g(1, V − {1}) = min{C1k + g(k, V − {1, k})}, 2 ≤ k ≤ n

we obtain g(1, {2, 3, 4}) = min{C12 + g(2, {3, 4}), C13 + g(3, {2, 4}), C14 + g(4, {2, 3})}
= min{10 + 25, 15 + 25, 20 + 23}
= min{35, 40, 43}
= 35

The optimal tour of the graph has length 35. A tour of this length can be constructed by recording, for each g(i, S), the value of j that minimizes the right-hand side of the recurrence. The tour starts at 1 and goes to 2, since C12 + g(2, {3, 4}) gave the minimum above. The remaining tour is obtained from g(2, {3, 4}), whose minimizing vertex is 4, so the next edge is (2, 4). The remainder comes from g(4, {3}), giving vertex 3.
The optimal tour is therefore 1, 2, 4, 3, 1.
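The recurrence can be run as a compilable Held-Karp sketch on the given cost matrix (cities renumbered 0-3; the array layout and helper name are ours). Subsets of the cities {2, 3, 4} are encoded as bitmasks:

```c
#include <assert.h>
#include <limits.h>

#define N 4
static const int C[N][N] = {
    {0, 10, 15, 20},
    {5,  0,  9, 10},
    {6, 13,  0, 12},
    {8,  8,  9,  0},
};

static int held_karp(void) {
    enum { FULL = 1 << (N - 1) };       /* 8 subsets of cities 1..3 (0-indexed) */
    static int g[FULL][N];
    /* g[S][i] = shortest path from i through every city in S, ending at 0 */
    for (int i = 1; i < N; i++)
        g[0][i] = C[i][0];              /* g(i, {}) = cost of returning home */
    for (int S = 1; S < FULL; S++)
        for (int i = 1; i < N; i++) {
            if (S & (1 << (i - 1)))
                continue;               /* i itself must not be in S */
            int best = INT_MAX;
            for (int j = 1; j < N; j++)
                if (S & (1 << (j - 1))) {
                    int v = C[i][j] + g[S ^ (1 << (j - 1))][j];
                    if (v < best) best = v;
                }
            g[S][i] = best;
        }
    /* g(1, V - {1}) = min over k of C[0][k] + g(k, V - {1, k}) */
    int best = INT_MAX;
    for (int j = 1; j < N; j++) {
        int v = C[0][j] + g[(FULL - 1) ^ (1 << (j - 1))][j];
        if (v < best) best = v;
    }
    return best;
}
```

Running this on the matrix above returns the tour length 35 computed by hand.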
Unsolved model paper 1

Answer ALL Questions

Part A (10 × 2 = 20 marks)
1. List out the areas in which data structures are applied extensively.

2. Convert the expression ((A + B)*C-(D-E)^(F + G)) to equivalent Prefix and Postfix notations.

3. How many different trees are possible with 10 nodes?

4. What is an almost complete binary tree?

5. In an AVL tree, at what condition the balancing is to be done?

6. What is the bucket size, when the overlapping and collision occur at
same time?

7. Define graph.

8. What is a minimum spanning tree?

9. Define NP hard and NP complete.

10. What is meant by dynamic programming?

Part B (5 × 16 = 80 marks)
11. (a) (i) What is a linked list? Explain with suitable program segments
any four operations of a linked list. (12)
(ii) Explain with a pseudo code how a linear queue could be
converted into a circular queue. (4)
Or
(b) (i) What is a stack ADT? Write in detail about any three applications
of stack. (11)
(ii) With a pseudo code explain how a node can inserted at a user
specified position of a doubly linked list. (5)

12. (a) (i) Discuss the various representations of a binary tree in memory
with suitable example. (8)
(ii) What are the basic operations that can be performed on a binary
tree? Explain each of them in detail with suitable example. (8)
Or
(b) (i) Give an algorithm to convert a general tree to binary tree. (8)
(ii) With an example, explain the algorithms of in order and post
order traversals on a binary search tree. (8)
13. (a) What is an AVL tree? Explain the rotations of an AVL tree. (16)
Or
(b) (i) Explain the binary heap in detail. (8)
(ii) What is hashing? Explain any two methods to overcome collision
problem of hashing. (8)
14. (a) (i) Explain Dijkstra’s algorithm and solve the single source shortest
path problem with an example. (12)
(ii) Illustrate with an example, the linked list representation of
graph. (4)
Or
(b) (i) Write the procedures to perform the BFS and DFS search of a
graph. (8)
(ii) Explain Prim’s algorithm to construct a minimum spanning tree
from an undirected graph. (8)
15. (a) (i) With an example, explain how will you measure the efficiency
of an algorithm. (8)
(ii) Analyze the linear search algorithm with an example. (8)
Or
(b) Explain how the travelling salesman problem can be solved using greedy algorithm. (16)
Unsolved model paper 2

Answer ALL Questions

Part A (10 × 2 = 20 marks)


1. What do you mean by Top Down Design?

2. Write about program verification.

3. Define ADT and give an example.

4. List few applications of stack.

5. Convert the following infix expression into prefix and postfix notations
a * b − c − d + e * f − g h * i.

6. Explain hashing function.

7. Write the time complexities of quick sorting method.

8. Differentiate insertion and shell sort.

9. Define NP hard and NP complete problems.

10. Explain topological sorting on graphs.

Part B (5 × 16 = 80 marks)
11. (a) (i) With an example, explain how will you measure the efficiency
of an algorithm. (8)
(ii) Analyze the linear search algorithm with an example. (8)
Or
(b) Explain the various aspects of problem solving in detail. Also discuss pros and cons of each. (16)
12. (a) (i) Write suitable routines to perform insertion and deletion
operations in a linked queue. (12)
(ii) Write a suitable C routine to remove and return the top element
of the stack using Array implementation. (4)

Or
(b) Write suitable ADT operations to perform insertion and deletion in a doubly linked list. (16)
13. (a) (i) Explain the various hashing techniques with suitable examples.
 (10)
(ii) When will collisions arise? Discuss. (6)
Or
(b) Write suitable ADT's to perform the following operations in an AVL Tree.
(i) Insert a node. (8)
(ii) Delete a node. (8)
14. (a) Write ADT operations for Heap Sort. Also simulate the following numbers using Heap Sort. What is the time complexity? (16)

35  45  25  11  6  85  17  38  102  178


Or
(b) (i) Explain Merge sort with an example. (8)
(ii) Explain External sorting. (8)
15. (a) Write suitable ADT operation for shortest path problem. Show the simulation of shortest path with an example graph. (16)
Or
(b) (i) How do you construct a minimum cost spanning tree with Prim’s
algorithm? (8)
(ii) Explain depth first search on a graph with necessary data
structures. (8)
Unsolved model paper 3

Answer ALL Questions

Part A (10 × 2 = 20 marks)


1. What is an algorithm?

2. What is program verification?

3. Convert the infix expression to postfix: (a + b ∧ c ∧ d ) * (e + f d ) .

4. List few applications of queues.

5. Construct an expression tree for the expression: ( a + b * c) +


((d * e + f ) * g ).

6. What is hashing?

7. What is the time complexity of Insertion sort algorithm?

8. Differentiate between internal sorting and external sorting.

9. What are articulation points?

10. Define NP-complete problem.

Part B (5 × 16 = 80 marks)
11. (a) Discuss in detail about the various aspects of problem solving. (16)
Or
(b) State and explain the algorithm to convert a decimal integer to its octal
equivalent. Trace the algorithm with an example. (16)
12. (a) Given a singly linked list L, formulate separate routines/algorithms to
(i)  insert an element X after a position P in the list. (8)
(ii) delete the first occurrence of an element Y from the list. Trace
the routine/algorithm with an example. (8)

Or
(b) (i) Formulate a routine in C/C + + to implement a stack using a
linked list and to pop an element from the stack. (8)
(ii) Write a routine to implement a queue using arrays and to
enqueue an element into it. (8)
13. (a) (i) Given an Unix file system as an input, formulate a routine to list
a directory. (8)
(ii) Write a routine/algorithm to insert an element into a binary
search tree. (8)
Or
(b) (i) Discuss in detail about the working of different hashing
functions. (10)
(ii) Write a function to perform deletion of an element from a binary
heap. (6)
14. (a) State and explain the algorithm to perform Heap sort with an example. (16)
Or
(b) State and explain the algorithm to perform Merge sort with an example. (16)
15. (a) State the pseudo code for Dijkstra's algorithm. Trace the algorithm with an example. (16)
Or
(b) (i) State the Kruskal’s algorithm to compute the MST of a graph.
(ii) Write short notes on Biconnectivity. (16)
