LOVELY PROFESSIONAL UNIVERSITY

MODEL HOME WORK: #4

Course No: CSE205        Course Title: DATA STRUCTURES
School:                  Department:
Name of the faculty member: Makul Mahajan
Class: B.Tech            Term: 210112
Section: F1901, F1902    Batch: 2009
Max. Marks: 7            DOA: 6/4/11    DOS: 18/4/11

Part-A

1. Discuss the significance of various shortest path algorithms and
make a comparison of the complexities of those algorithms.

Ans:-
Searching: searching refers to the operation of finding the location of a
given item in a collection of items.

• Sequential search: this is the most natural searching method. The
most intuitive way to search for a given item is to compare the item
with each element of the data, one by one. Its complexity is of
order O (n). It is used when the collection of items is small.
• Binary search: suppose the data is present in an array which is
sorted in ascending or descending order. Then there is an extremely
efficient method called binary search which can be used to find the
location of a given item of information in the data. The complexity
of binary search is of order O (log2 n). Binary search requires two
conditions:
i. The list must be sorted.
ii. One must have direct access to the middle element of any
sub-list.
• Binary search tree: a binary tree can be called a binary search
tree if each node has the property that its value is greater than
every value in its left sub-tree and less than every value in its
right sub-tree. The complexity of finding an element in a binary
search tree is O (log n).

Sorting: sorting refers to arranging data elements in some given order.

• Insertion Sort: in this sort we read the given elements from 1 to
n, inserting each element into its proper position. This sorting
algorithm is frequently used when n is small. The insertion sort
algorithm scans A from A[1] to A[N], inserting each element A[K]
into its proper position in the previously sorted sub-array A[1],
A[2], . . . , A[K-1]. Its complexity is of order O (n^2).
• Selection Sort: in this sort we find the smallest element in the
list and put it in the first position; then we find the second
smallest element and put it in the second position, and so on. It is
a very simple algorithm to code and to explain, but a little slow.
Note that it works by finding the smallest value and swapping it
with the first unsorted element. The complexity of this one is truly
O (n^2), where the best case and the worst case are the same,
because even if the list is sorted, the same number of selections
must still be performed.
• Merge Sort: combining two sorted lists is called merging. For
example, if A is a sorted list with r elements and B is a sorted
list with s elements, the operation that combines the elements of A
and B into a single sorted list C with n = r + s elements is called
merging. Its complexity is O (n log n). The disadvantage of merge
sort is that it requires two arrays of the same size and type for
the merge phase.
• Radix Sort: radix sort is the method that many people intuitively
use when alphabetizing a large list of names: the list of names is
first sorted according to the first letter of each name, and so on.
Radix sort is one of the linear sorting algorithms for integers. It
works by sorting the input numbers on each digit, for each of the
digits in the numbers. However, the process adopted by this sort is
somewhat counterintuitive, in the sense that the numbers are sorted
on the least significant digit first, followed by the second least
significant digit, and so on up to the most significant digit. The
drawback of radix sort is that one may need d*n memory locations.
This comes from the fact that all the items may be "sent to the same
pocket" during a given pass. This drawback can be minimized by using
linked lists rather than arrays to store the items during a given
pass; however, one will still require 2*n memory locations. Its
complexity is O (n).
• Quick Sort: this is the most widely used internal sorting
algorithm. It is based on the divide-and-conquer strategy, i.e.
divide the problem into sub-problems until easily solved
sub-problems are reached. The quick sort algorithm uses
O (n log2 n) comparisons on average.
• Heap Sort: a heap is a complete binary tree in which each node
satisfies the condition that its key is greater than or equal to the
keys of its children. Thus the root node holds the largest key. Heap
sort exploits just about the only good use of heaps, namely finding
the maximum element of a max heap (or the minimum of a min heap). It
is in every way as good as straight selection sort, but faster. Its
complexity is O (n log n).
• Bubble sort: in this sorting algorithm, multiple swaps take place
in one iteration, and smaller elements move, or "bubble", up to the
top of the list. We compare adjacent members of the list to be
sorted, and if the item on top is greater than the item immediately
below it, they are swapped. The bubble sort algorithm uses O (n^2)
comparisons on average.

2. Make a comparison of the various searching techniques and discuss
the significance of the Hashing Functions.

Ans:-
Linear search: this is the method of finding a value in a list by
checking every element of it, one at a time, until the desired item
is found or the list ends.
Binary search: this locates the position of an item in a sorted
list. Binary search works by comparing an input value to the middle
element of the array. The comparison determines whether the element
equals the input, is less than the input, or is greater. When the
element being compared equals the input, the search stops and
typically returns the position of the element. If the element is not
equal to the input, a comparison is made to determine whether the
input is less than or greater than the element. Depending on which
it is, the algorithm then starts over, but searches only the top or
the bottom half of the array. If the input is not located within the
array, the algorithm will usually output a unique value indicating
this. Binary search algorithms typically halve the number of items
to check with each successive iteration, thus locating the given
item (or determining its absence) in logarithmic time.
Hashing: the cost of this method does not depend on the number of
elements. It saves each item in a key-indexed table, so that with a
good hash function a lookup takes expected constant time.

Significance of Hashing functions: the general idea of using the key
to determine the address of a record is an excellent one, but it
must be modified so that a great deal of space is not wasted. This
modification takes the form of a function H from the set K of keys
into the set L of memory addresses. Such a function

H: K -> L

is called a hashing function. The two principal criteria for
selecting H are:
• H should be very easy and quick to compute.
• H should, as far as possible, uniformly distribute the hash
addresses throughout the set L so that there is a minimum
number of collisions.

3. Give a comparison between the different ways in which Graphs
can be implemented.

Ans:-

4. Discuss the application of the algorithms Breadth First Search and
Depth First Search; also make a comparative study of both of them.

Ans:-
Breadth-first search can be used to solve many problems in graph
theory, for example:

• Finding all nodes within one connected component
• Copying a collection (Cheney's algorithm)
• Finding the shortest path between two nodes u and v
• Testing a graph for bipartiteness
Algorithms that use depth-first search as a building block include:

• Finding connected components.
• Topological sorting.
• Finding 2-(edge or vertex)-connected components.
• Finding strongly connected components.
• Planarity testing.
• Solving puzzles with only one solution, such as mazes.
• Maze generation (may use a randomized depth-first search).
• Finding bi-connectivity in graphs.

Depth First Search:
• Depth-first search (DFS) is an algorithm for traversing or
searching a tree, tree structure, or graph. One starts at the root
and explores as far as possible along each branch before
backtracking.
• DFS uses less memory, but could take a really long time given a
really deep tree.
• DFS searches a tree by going through the left-hand side of the
tree all the way to the leaves.

Breadth First Search:
• In graph theory, breadth-first search (BFS) is a graph search
algorithm that begins at the root node and explores all the
neighboring nodes. Then, for each of those nearest nodes, it
explores their unexplored neighbor nodes, and so on, until it finds
the goal.
• BFS uses a lot of memory, but will find the first best answer.
• BFS searches a tree by searching every node, one level at a time.

Part-B

5. If you have been given a task to do some project and, to do the
basic operations like searching and sorting, there are many
algorithms: which algorithm will you select in which circumstances,
and why? Discuss in detail.
Ans:- When the task of searching is assigned to me, I will first
look at the data to be searched. If the data is in tabular form then
I will use hashing, and if the data is given in a linear format then
I will use linear or binary search, depending upon the data. If the
number of items in the list is small then I will use linear search,
and if the number of items is very large then I will use binary
search. Further, if the data in the list is already sorted, then I
will also prefer binary search.

Suppose I have been assigned a task of sorting. I will first look at
the data: if it is in nearly sorted order then I will use insertion
sort or bubble sort. Quick sort is a complicated but effective
sorting algorithm; I would avoid it only where its implementation
complexity is not justified. Selection sort is very simple to
implement and to explain, but a little slow; note that it works by
finding the smallest value and swapping it with the first unsorted
element. Heap sort exploits just about the only good use of heaps,
namely finding the maximum element of a max heap (or the minimum of
a min heap); it is in every way as good as straight selection sort,
but faster. Radix sort is extremely powerful for integers: it will
sort the largest list, with the biggest numbers, with a guaranteed
O (n) time complexity, and it is not very complex to understand or
implement.
6. Discuss in detail how the selection of the data structure impacts
the quality of any searching or sorting algorithm.

Ans:-

7. Discuss the application areas where the Hashing Functions can be
applied, with the help of an example.

Ans:- Applications of hash functions are abundant in programming
practice.

• Digital Signatures: digital signatures were the first application
of cryptographically secure hash functions. Hash functions serve a
dual role in practical signature schemes: they expand the domain of
messages that can be signed by a scheme, and they are an essential
element of the scheme's security.
• MAC: a message-authentication code (MAC) is a keyed hash
function satisfying certain cryptographic properties. Not
surprisingly, a MAC can be based on a hash function.
• Password Tables: A common method of client authentication is to
require the client to present a password previously registered with
the server. Storing passwords of all users on the server poses an
obvious security risk. Fortunately, the server need not know the
passwords—it may store their hashes and use the information to
match it with the hashes of alleged passwords.
• Hash Tables: hash functions are primarily used in hash tables, to
quickly locate a data record given its search key. Specifically, the
hash function is used to map the search key to the hash. The index
gives the place where the corresponding record should be stored.
Hash tables, in turn, are used to implement associative arrays and
dynamic sets. In general, a hashing function may map several
different keys to the same index. Therefore, each slot of a hash
table is associated with a set of records, rather than a single record.
For this reason, each slot of a hash table is often called a bucket,
and hash values are also called bucket indices. Thus, the hash
function only hints at the record's location—it tells where one
should start looking for it. Still, in a half-full table, a good hash
function will typically narrow the search down to only one or two
entries.
• Finding Duplicate Record: When storing records in a large
unsorted file, one may use a hash function to map each record to an
index into a table T, and collect in each bucket T[i] a list of the
numbers of all records with the same hash value i. Once the table is
complete, any two duplicate records will end up in the same
bucket. The duplicates can then be found by scanning every bucket
T[i] which contains two or more members, fetching those records,
and comparing them. With a table of appropriate size, this method
is likely to be much faster than any alternative approach.
• Associative Array: Hash tables are commonly used to implement
many types of in-memory tables. They are used to implement
associative arrays.
Complexity of hashing function is O (1).

Ques8: Write a program in C++ to sort a string of characters using
the Selection Sort Algorithm.
ANS:
#include <iostream>
#include <cstring>
using namespace std;

// Swap the characters at positions b and c of array a.
void swapChars(char a[], int b, int c)
{
    char temp = a[b];
    a[b] = a[c];
    a[c] = temp;
}

// Selection sort: for each position, find the smallest remaining
// character and swap it into place.
void selectionSort(char a[], int n)
{
    for (int iPos = 0; iPos < n; iPos++)
    {
        int iMin = iPos;
        for (int i = iPos + 1; i < n; i++)
        {
            if (a[i] < a[iMin])
                iMin = i;
        }
        if (iMin != iPos)
            swapChars(a, iPos, iMin);
    }
}

int main()
{
    char a[255];
    cout << "enter the string: ";
    cin.getline(a, 255);
    int l = strlen(a);
    selectionSort(a, l);
    cout << a << endl;
    return 0;
}
