
# How does Big-O Notation work, and can you provide an example?

What is Big-O analysis? When solving a computer science problem, there will usually be more than just one solution. These solutions will often be in the form of different algorithms, and you will generally want to compare the algorithms to see which one is more efficient. This is where Big-O analysis helps: it gives us some basis for measuring the efficiency of an algorithm. A more detailed explanation of Big-O analysis would be this: it measures the efficiency of an algorithm based on the time it takes for the algorithm to run as a function of the input size. Think of the input simply as what goes into a function, whether it be an array of numbers, a linked list, etc. Sounds quite boring, right? It's really not that bad at all, and it is best illustrated by an example with actual code samples.

## Example of Big-O Analysis

Let's suppose that we are given a problem in which we want to create a function that, when given an array of integers greater than 0, will return the integer that is the smallest in that array. In order to best illustrate the way Big-O analysis works, we will come up with two different solutions to this problem, each with a different Big-O efficiency.

Here's our first function, which will simply return the integer that is the smallest in the array. The algorithm will just iterate through all of the values in the array and keep track of the smallest integer seen so far in the variable called curMin. Let's assume that the array being passed to our function contains 10 elements; this number is something we arbitrarily chose. We could have said it contains 100, or 100,000 elements; either way it would have made no difference for our purposes here.

```c
int CompareSmallestNumber(int array[])
{
    int x, curMin;

    // set smallest value to first item in array
    curMin = array[0];

    // iterate through array to find smallest value
    for (x = 1; x < 10; x++)
    {
        if (array[x] < curMin)
        {
            curMin = array[x];
        }
    }

    // return smallest value in the array
    return curMin;
}
```

As promised, we want to show you another solution to the problem. In this solution, we will use a different algorithm. What we do is compare each value in the array to all of the other numbers in the array, and if that value is less than or equal to all of the other numbers in the array, then we know that it is the smallest number in the array.

```c
int CompareToAllNumbers(int array[])
{
    bool isMin;
    int x, y;

    for (x = 0; x < 10; x++)
    {
        isMin = true;

        for (y = 0; y < 10; y++)
        {
            /* compare the value in array[x] to the other values;
               if we find that array[x] is greater than any of the
               values in array[y], then we know that the value in
               array[x] is not the minimum

               remember that x and y index the same array - we are
               just taking out one value with index 'x' and
               comparing it to the other values in the array with
               index 'y' */
            if (array[x] > array[y])
                isMin = false;
        }

        if (isMin)
            break;
    }

    return array[x];
}
```

Now, you've seen 2 functions that solve the same problem - but each one uses a different algorithm. We want to be able to say which algorithm is more efficient, and Big-O analysis allows us to do exactly that.

## Big-O Analysis in Action

For our purposes, we assumed an input size of 10 for the array. But when doing Big-O analysis, we don't want to use specific numbers for the input size - so we say that the input is of size n.

Remember that Big-O analysis is used to measure the efficiency of an algorithm based on the time it takes for the algorithm to run as a function of the input size. When doing Big-O analysis, "input" can mean a lot of different things depending on the problem being solved. In our examples above, the input is the array that is passed into the different functions. But input could also be the number of elements of a linked list, the nodes in a tree, or whatever data structure you are dealing with. Since the input in our example is an array, we will say that the array is of size n, and we will use 'n' to denote input size in our Big-O analysis.

So, the real question is how Big-O analysis measures efficiency. Basically, Big-O expresses how many times the n input items are 'touched'. The word 'touched' can mean different things in different algorithms - in some algorithms it may mean the number of times a constant is multiplied by an input item, or the number of times an input is added to a data structure. But in our functions CompareSmallestNumber and CompareToAllNumbers, it just means the number of times an array value is compared to another value.

In the function CompareSmallestNumber, the n input items (we used 10 items, but let's just use the variable 'n' for now) are each 'touched' only once, when each one is compared to the minimum value. In Big-O notation, this would be written as O(n), which is also known as linear time. Linear time means that the time taken to run the algorithm increases in direct proportion to the number of input items. So, 80 items would take longer to run than 79 items or any quantity less than 79.

You might also see that in the CompareSmallestNumber function, we initialize the curMin variable to the first value of the input array. And that does count as 1 'touch' of the input. So, you might think that our Big-O notation should be O(n + 1). But actually, Big-O is concerned with the running time as the number of inputs - which is 'n' in this case - approaches infinity. And as 'n' approaches infinity, the constant '1' becomes very insignificant - so we drop the constant. Thus, we can say that the CompareSmallestNumber function has O(n) and not O(n + 1). Similarly, if we have n³ + n, then as n approaches infinity it's clear that the "+ n" term becomes very insignificant - so we drop the "+ n", and instead of O(n³ + n) we have O(n³).

Now, let's do the Big-O analysis of the CompareToAllNumbers function. Let's say that we want to find the worst-case running time for this function and use that as the basis for the Big-O notation. So, for this function, let's assume that the smallest integer is in the very last element of the array. Since we are taking each element in the array and comparing it to every other element in the array, that means we will be doing 100 comparisons if we assume our input size is 10 (10 * 10 = 100). Or, in terms of the variable, that will be n² 'touches' of the input. Thus, this function uses an O(n²) algorithm.
## Big-O Analysis Measures Efficiency

Now, let's compare the 2 functions: CompareToAllNumbers is O(n²) and CompareSmallestNumber is O(n). So, let's say that we have 10,000 input elements. Then CompareSmallestNumber will 'touch' on the order of 10,000 elements, whereas CompareToAllNumbers will 'touch' 10,000 squared, or 100,000,000 elements. That's a huge difference, and you can imagine how much faster CompareSmallestNumber must run when compared to CompareToAllNumbers - especially when given a very large number of inputs. Efficiency is something that can make a huge difference, and it's important to be aware of how to create efficient solutions.

In an interview, you may be asked what the Big-O of an algorithm that you've come up with is. And even if not directly asked, you should provide that information in order to show that you are well aware of the need to come up with an efficient solution whenever possible.
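To make the difference concrete, here is a small, illustrative Java sketch (the class and method names are ours, not from the two C functions above) that counts the worst-case number of comparisons, or 'touches', each approach performs for a given input size:

```java
public class TouchCounter {
    // O(n): one comparison per element after the first
    static long countLinearTouches(int[] array) {
        long touches = 0;
        int curMin = array[0];
        for (int x = 1; x < array.length; x++) {
            touches++;                 // one 'touch' per element
            if (array[x] < curMin) curMin = array[x];
        }
        return touches;
    }

    // O(n^2) worst case: every element is compared against every element
    static long countQuadraticTouches(int[] array) {
        long touches = 0;
        for (int x = 0; x < array.length; x++) {
            for (int y = 0; y < array.length; y++) {
                touches++;             // one 'touch' per pair (x, y)
            }
        }
        return touches;
    }

    public static void main(String[] args) {
        int[] input = new int[10];
        System.out.println(countLinearTouches(input));    // 9
        System.out.println(countQuadraticTouches(input)); // 100
    }
}
```

For 10 elements the gap is 9 versus 100 touches; for 10,000 elements it is roughly 10,000 versus 100,000,000, exactly the scaling described above.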

## What's the difference between DFS and BFS?

DFS (Depth First Search) and BFS (Breadth First Search) are search algorithms used for graphs and trees. When you have an ordered tree or graph, like a BST, it's quite easy to search the data structure to find the node that you want. But when given an unordered tree or graph, the BFS and DFS search algorithms can come in handy to find what you're looking for. The decision to choose one over the other should be based on the type of data that you are dealing with.

In a breadth first search, you start at the root node, and then scan each node in the first level starting from the leftmost node, moving towards the right. Then you continue scanning the second level (starting from the left) and the third level, and so on, until you've scanned all the nodes, or until you find the actual node that you were searching for.

In a BFS, when traversing one level, we need some way of knowing which nodes to traverse once we get to the next level. The way this is done is by storing the pointers to a level's child nodes while searching that level. The pointers are stored in a FIFO (First-In-First-Out) queue. This, in turn, means that BFS can use a large amount of memory, because we have to store the pointers.

## An Example of BFS

Here's an example of what a BFS would look like. The numbers represent the order in which the nodes are accessed in a BFS:
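The FIFO-queue idea above can be sketched in Java. This is a minimal, illustrative version, assuming a simple Node class with a list of children (the class and method names are ours, not from the text):

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

public class BfsExample {
    // A minimal tree node for illustration purposes
    static class Node {
        int value;
        List<Node> children;
        Node(int value, List<Node> children) {
            this.value = value;
            this.children = children;
        }
    }

    // Returns the first node found with the target value, or null if absent.
    static Node bfs(Node root, int target) {
        Queue<Node> queue = new ArrayDeque<>(); // FIFO queue of pending nodes
        queue.add(root);
        while (!queue.isEmpty()) {
            Node current = queue.remove();      // take the oldest pending node
            if (current.value == target) return current;
            // store pointers to the next level's nodes before moving on
            queue.addAll(current.children);
        }
        return null;
    }
}
```

Because whole levels sit in the queue at once, the queue can grow as wide as the widest level of the tree, which is where BFS's memory cost comes from.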

In a depth first search, you start at the root, and follow one of the branches of the tree as far as possible until either the node you are looking for is found or you hit a leaf node (a node with no children). If you hit a leaf node, then you continue the search at the nearest ancestor with unexplored children.

## An Example of DFS

Here's an example of what a DFS would look like. The numbers represent the order in which the nodes are accessed in a DFS:
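For contrast with the BFS queue, depth first search can be sketched with a LIFO stack. Again this is a minimal, illustrative version assuming a simple Node class of our own devising:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class DfsExample {
    // A minimal tree node for illustration purposes
    static class Node {
        int value;
        List<Node> children;
        Node(int value, List<Node> children) {
            this.value = value;
            this.children = children;
        }
    }

    // Iterative DFS: a LIFO stack replaces the recursion, so at any moment
    // we only hold the nodes along (and just off) one branch of the tree.
    static Node dfs(Node root, int target) {
        Deque<Node> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            Node current = stack.pop();       // take the most recent node
            if (current.value == target) return current;
            for (Node child : current.children) {
                stack.push(child);            // dive into a branch first
            }
        }
        return null;
    }
}
```

The stack holds roughly one root-to-leaf path at a time, which is why DFS typically needs far less memory than BFS on wide trees.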

Comparing BFS and DFS, the big advantage of DFS is that it has much lower memory requirements than BFS, because it's not necessary to store all of the child pointers at each level. Depending on the data and what you are looking for, either DFS or BFS could be advantageous. For example, given a family tree, if one were looking for someone on the tree who's still alive, then it would be safe to assume that person would be near the bottom of the tree. This means that a BFS would take a very long time to reach that last level. A DFS, however, would find the goal faster. But if one were looking for a family member who died a very long time ago, then that person would be closer to the top of the tree, and a BFS would usually be faster than a DFS. So, the advantages of either vary depending on the data and what you're looking for.

## What are the differences between a hash table and a binary search tree?

Suppose that you are trying to figure out which of those data structures to use when designing the address book for a cell phone that has limited memory. Which data structure would you use? A hash table can insert and retrieve elements in O(1) (for a Big-O refresher, read here). A binary search tree can insert and retrieve elements in O(log(n)), which is quite a bit slower than the hash table, which can do it in O(1).

## A hash table is an unordered data structure

When designing a cell phone, you want to keep as much memory as possible available for data storage. A hash table is an unordered data structure, which means that it does not keep its elements in any particular order. So, if you use a hash table for a cell phone address book, then you would need additional memory to sort the values, because you would definitely need to display the values in alphabetical order - it is an address book, after all. So, by using a hash table, you have to set aside memory to sort elements that would have otherwise been used as storage space.
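The ordering difference is easy to see with Java's built-in maps: HashMap iterates in no particular order, while TreeMap (a red-black tree, which like a binary search tree keeps its keys ordered) iterates alphabetically with no extra sorting pass. A small sketch, with names and numbers invented for illustration:

```java
import java.util.Map;
import java.util.TreeMap;

public class AddressBook {
    public static void main(String[] args) {
        // TreeMap keeps its keys sorted, so no separate sort step is needed
        Map<String, String> book = new TreeMap<>();
        book.put("Charlie", "555-0103");
        book.put("Alice", "555-0101");
        book.put("Bob", "555-0102");

        // iteration is already in alphabetical order: Alice, Bob, Charlie
        for (Map.Entry<String, String> entry : book.entrySet()) {
            System.out.println(entry.getKey() + ": " + entry.getValue());
        }
    }
}
```

Had we used a HashMap here, displaying the entries alphabetically would require copying the keys out and sorting them, which is exactly the extra memory cost described above.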

## A binary search tree is a sorted data structure

Because a binary search tree is already sorted, there will be no need to waste memory or processing time sorting records in a cell phone. As we mentioned earlier, doing a lookup or an insert on a binary search tree is slower than doing it with a hash table, but a cell phone address book will almost never have more than 5,000 entries. With such a small number of entries, a binary search tree's O(log(n)) will definitely be fast enough. So, given all that information, a binary search tree is the data structure that you should use in this scenario, since it is a better choice than a hash table.

## How do you determine whether a linked list is circular?

Suppose that you are given a linked list that is either circular or not circular (another word for not circular is acyclic). Take a look at the figures below if you are not sure what a circular linked list looks like.

Write a function that takes as an input a pointer to the head of a linked list and determines whether the list is circular or whether it has an ending node. If the linked list is circular, then your function should return true; otherwise it should return false. You cannot modify the linked list in any way. This is an acyclic (non-circular) linked list:

## This is a circular linked list:

You should start out this problem by taking a close look at the pictures of a circular linked list and a singly linked list, so that you can understand the difference between the 2 types of lists. The difference between the 2 types of linked lists is at the very last node of the list. In the circular linked list, you can see that the very last node (37) links right back to the first node in the list, which makes it circular. However, in the acyclic or non-circular linked list, you can see that the end node does not point to another node in the linked list and just ends.

It is easy enough to know when you are in an acyclic linked list: the pointer in the end node will just be pointing to NULL. However, knowing when you are in a circular linked list is more difficult, because there is no end node, so your function would wind up in an infinite loop if it just searches for an end node. So, there has to be a solution other than just looking for a node that points to NULL, since that clearly will not work.

The big question here is: how do we determine whether or not we have already visited a node?

One solution is to use two pointers that travel through the list at different speeds. If the list is circular, the faster one will eventually catch up to the slower one. Here is the pseudocode for this approach:

1. Start 2 pointers travelling at different speeds from the head of the linked list.
2. Iterate through a loop.
3. If the faster pointer reaches a NULL pointer, then return that the list is acyclic and not circular.
4. If the faster pointer is ever equal to the slower pointer, or the faster pointer's next pointer is ever equal to the slower pointer, then return that the list is circular.
5. Advance the slower pointer one node.
6. Advance the faster pointer by 2 nodes.

If we write the actual code for this, it would look like this:

```cpp
bool findCircular(Node *head)
{
    Node *slower, *faster;
    slower = head;
    faster = head;

    while (true)
    {
        // if the faster pointer encounters a NULL element,
        // the list has an end, so it is not circular
        if (!faster || !faster->next)
            return false;

        // if the faster pointer ever equals the slower pointer, or
        // faster's next pointer ever equals the slower pointer,
        // then it's a circular list
        else if (faster == slower || faster->next == slower)
            return true;

        else
        {
            // advance the pointers
            slower = slower->next;
            faster = faster->next->next;
        }
    }
}
```

And there is our solution. What is the Big-O of our solution? Well, what's the worst case if we know that the list is circular? In this case, the slower pointer will never go around any loop more than once, so it will examine a maximum of n nodes. The faster pointer, however, will examine 2n nodes and will have to pass the slower pointer regardless of the size of the circle, which makes it a worst case of 3n nodes. This is O(n).

And what about the worst case when the list is not circular (acyclic)? Then the faster pointer will have come to the end after examining n nodes, while the slower pointer will have examined n/2 nodes, for a total of 3/2 n nodes, which is also O(n). Thus, the algorithm is O(n) for both worst-case scenarios.

## How do you write a preorder traversal for a binary tree?

Suppose that you are given a binary tree like the one shown in the figure below. Write some code in Java that will do a preorder traversal for any binary tree and print out the nodes as they are encountered. So, for the binary tree in the figure below, the algorithm will print the nodes in this order: 2, 7, 2, 6, 5, 11, 5, 9, 4

When trying to figure out what the algorithm for this problem should be, you should take a close look at the way the nodes are traversed. A preorder traversal keeps traversing the leftmost part of the tree until a leaf node (which has no child nodes) is encountered. Then it goes up the tree, goes to the right child node (if any), then up the tree again, and then as far left as possible, and this keeps repeating until all of the nodes are traversed. So, it looks like each sub-tree within the larger tree is being traversed in the same pattern, which should make you start thinking in terms of breaking this problem down into sub-trees. And any time a problem is broken down into smaller problems that keep repeating, you should immediately start thinking of recursion to find the most efficient solution. So, let's take a look at the 2 largest sub-trees and see if we can come up with an appropriate algorithm. You can see in the figure above that the sub-trees of 7 and 5 (the child nodes of the root at 2) are the 2 largest subtrees.

Let's start by making observations and see if we can convert those observations into an actual algorithm. First off, you can see that all of the nodes in the subtree rooted at 7 (including 7 itself) are printed out before the subtree rooted at 5. So, we can say that for any given node, the subtree of its left child is printed out before the subtree of its right child. This sounds like a legitimate algorithm, so we can say that when doing a preorder traversal, for any node we would print the node itself, then follow the left subtree, and after that follow the right subtree. Let's write that out in steps:

1. Print out the root's value, regardless of whether you are at the actual root or just a subtree's root.

2. Go to the left child node, and then perform a pre-order traversal on that left child node's subtree.

3. Go to the right child node, and then perform a pre-order traversal on that right child node's subtree.

4. Do this recursively.

This sounds simple enough. Let's now start writing some actual code. But first, we must have a Node class that represents each individual node in the tree, and that Node class must also have some methods that would allow us to go to the left and right nodes. This is what it would look like in Java pseudocode:

```java
public class Node
{
    private Node right;
    private Node left;
    private int nodeValue;

    public Node()
    {
        // a Java constructor
    }

    public Node leftNode()    { return left; }
    public Node rightNode()   { return right; }
    public int getNodeValue() { return nodeValue; }
}
```

Given the Node class above, let's write a recursive method that will actually do the preorder traversal for us. In the code below, we also assume that we have a method called printNodeValue which will print out the Node's value for us.

```java
void preOrder(Node root)
{
    // base case: an empty subtree
    if (root == null)
        return;

    // visit the root first, then the left subtree, then the right subtree
    root.printNodeValue();
    preOrder( root.leftNode() );
    preOrder( root.rightNode() );
}
```

Because every node is examined once, the running time of this algorithm is O(n).

## How do you write a postorder traversal for a binary tree?

Suppose that you are given a binary tree like the one shown in the figure below. Write some code in Java that will do a postorder traversal for any binary tree and print out the nodes as they are encountered. So, for the binary tree in the figure below, the algorithm will print the nodes in this order: 2, 5, 11, 6, 7, 4, 9, 5, 2 - where the very last node visited is the root node.

When trying to figure out what the algorithm for this problem should be, you should take a close look at the way the nodes are traversed: there is a pattern in the way that the nodes are traversed. If you break the problem down into subtrees, you can see that these are the operations that are being performed recursively at each node:

1. Traverse the left subtree.
2. Traverse the right subtree.
3. Visit the root.

This sounds simple enough. Let's now start writing some actual code. But first, we must have a Node class that represents each individual node in the tree, and that Node class must also have some methods that would allow us to go to the left and right nodes. This is what it would look like in Java:

```java
public class Node
{
    private Node right;
    private Node left;
    private int nodeValue;

    public Node()
    {
        // a Java constructor
    }

    public Node leftNode()    { return left; }
    public Node rightNode()   { return right; }
    public int getNodeValue() { return nodeValue; }
}
```

Given the Node class above, let's write a recursive method that will actually do the postorder traversal for us. In the code below, we also assume that we have a method called printNodeValue which will print out the Node's value for us.

```java
void postOrder(Node root)
{
    // base case: an empty subtree
    if (root == null)
        return;

    // visit the left subtree, then the right subtree, then the root
    postOrder( root.leftNode() );
    postOrder( root.rightNode() );
    root.printNodeValue();
}
```

Because every node is examined once, the running time of this algorithm is O(n).

## How do you write an inorder traversal for a binary tree?

Suppose that you are given a binary tree like the one shown in the figure below. Write some code in Java that will do an inorder traversal for any binary tree and print out the nodes as they are encountered. So, for the binary tree in the figure below, the algorithm will print the nodes in this order: 2, 7, 5, 6, 11, 2, 5, 4, 9. Note that the very first 2 that is printed out is the left child of 7, and NOT the 2 in the root node.

When trying to figure out what the algorithm for this problem should be, you should take a close look at the way the nodes are traversed. In an inorder traversal, for any given node we first traverse its left subtree, going as far left as possible until we reach the leftmost node of that subtree. We process that leftmost node, then its parent, and then the parent's right subtree, and this pattern repeats. So, it looks like each sub-tree within the larger tree is being traversed in the same pattern, which should make you start thinking in terms of breaking this problem down into sub-trees. And any time a problem is broken down into smaller problems that keep repeating, you should immediately start thinking of recursion to find the most efficient solution. So, let's take a look at the 2 largest sub-trees and see if we can come up with an appropriate algorithm. You can see in the figure above that the sub-trees of 7 and 5 (the child nodes of the root at 2) are the 2 largest subtrees.

1. Go to the left child node, and perform an inorder traversal on that left child node's subtree.
2. Print out the value of the current node.

3. Go to the right child node, and then perform an inorder traversal on that right child node's subtree.

This sounds simple enough. Let's now start writing some actual code. But first, we must have a Node class that represents each individual node in the tree, and that Node class must also have some methods that would allow us to go to the left and right nodes, and also a method that would allow us to get a Node's value. This is what it would look like in Java pseudocode:

```java
public class Node
{
    private Node right;
    private Node left;
    private int nodeValue;

    public Node()
    {
        // a Java constructor
    }

    public Node leftNode()    { return left; }
    public Node rightNode()   { return right; }
    public int getNodeValue() { return nodeValue; }
}
```

Given the Node class above, let's write a recursive method that will actually do the inorder traversal for us. In the code below, we also assume that we have a method called printNodeValue which will print out the Node's value for us.

```java
void inOrder(Node root)
{
    // base case: an empty subtree
    if (root == null)
        return;

    // visit the left subtree, then the root, then the right subtree
    inOrder( root.leftNode() );
    root.printNodeValue();
    inOrder( root.rightNode() );
}
```

Because every node is examined once, the running time of this algorithm is O(n).

## What's the difference between a stack and a heap?

The differences between the stack and the heap can be confusing for many people. So, we put together a list of questions and answers about stacks and heaps that we hope will be very helpful.

## Where are the stack and heap stored?

They are both stored in the computer's RAM (Random Access Memory). For a refresher on RAM and virtual memory, read here: How Virtual Memory Works

## How do the stack and heap work in multithreading?

In a multi-threaded application, each thread will have its own stack, but all the different threads will share the heap. Because the different threads share the heap in a multi-threaded application, this also means that there has to be some coordination between the threads so that they don't try to access and manipulate the same piece(s) of memory in the heap at the same time.

## Can an object be stored on the stack instead of the heap?

Yes, an object can be stored on the stack. If you create an object inside a function without using the new operator then this will create and store the object on the stack, and not on the heap. Suppose we have a C++ class called Member, for which we want to create an object. We also have a function called somefunction( ). Here is what the code would look like:

## Code to create an object on the stack:

```cpp
void somefunction()
{
    /* create an object "m" of class Member - this will be put
       on the stack since the "new" keyword is not used, and we
       are creating the object inside a function */
    Member m;

}  // the object "m" is destroyed once the function ends
```

So, the object m is destroyed once the function has run to completion or, in other words, when it goes out of scope. The memory being used for the object m on the stack will be removed once the function is done running.

If we want to create an object on the heap inside a function, then this is what the code would look like:

## Code to create an object on the heap:

```cpp
void somefunction()
{
    /* create an object "m" of class Member - this will be put
       on the heap since the "new" keyword is used, and we are
       creating the object inside a function */
    Member *m = new Member();

    /* the object "m" must be deleted,
       otherwise a memory leak occurs */
    delete m;
}
```

In the code above, you can see that the m object is created inside a function using the new keyword. This means that m will be created on the heap. But since m is created using the new keyword, that also means that we must delete the m object on our own as well; otherwise we will end up with a memory leak.

## How long does memory on the stack last versus memory on the heap?
Once a function call runs to completion, any data on the stack created specifically for that function call will automatically be deleted. Any data on the heap will remain there until its manually deleted by the programmer.

## Can the stack grow in size? Can the heap grow in size?

The stack is set to a fixed size and cannot grow past its fixed size (although some languages have extensions that do allow this). So, if there is not enough room on the stack to handle the memory being assigned to it, a stack overflow occurs. This often happens when a lot of nested functions are being called, or if there is an infinite recursive call.
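The infinite-recursion case is easy to demonstrate. This hypothetical Java snippet (names are ours) recurses with no base case until the fixed-size stack is exhausted, then catches the resulting StackOverflowError:

```java
public class StackOverflowDemo {
    static long depth = 0;

    // No base case: every call pushes another frame onto the stack
    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // the fixed-size stack ran out of room
            System.out.println("stack overflowed after " + depth + " calls");
        }
    }
}
```

The exact number of calls before overflow depends on the stack size the runtime was given and the size of each stack frame, which is precisely the "fixed size" limitation described above.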

If the current size of the heap is too small to accommodate new memory, then more memory can be added to the heap by the operating system. This is one of the big differences between the heap and the stack.

## How are the stack and heap implemented?

The implementation really depends on the language, compiler, and run-time: the small details of the implementation of a stack and a heap will always be different depending on what language and compiler are being used. But in the big picture, the stacks and heaps in one language are used to accomplish the same things as the stacks and heaps in another language.

## Which is faster - the stack or the heap? And why?

The stack is much faster than the heap. This is because of the way that memory is allocated on the stack: allocating memory on the stack is as simple as moving the stack pointer up, whereas a heap allocation typically has to search for a free block of a suitable size and do extra bookkeeping.

## How is memory deallocated on the stack and heap?

Data on the stack is automatically deallocated when variables go out of scope. However, in languages like C and C++, data stored on the heap has to be deleted manually by the programmer, using one of the built-in keywords or functions like free, delete, or delete[]. Other languages like Java and .NET use garbage collection to automatically delete memory from the heap, without the programmer having to do anything.

## What can go wrong with the stack and the heap?

If the stack runs out of memory, then this is called a stack overflow, and it could cause the program to crash. The heap can suffer from fragmentation, which occurs when the available heap memory is stored as noncontiguous (or disconnected) blocks, because used blocks of memory sit in between the unused memory blocks. When excessive fragmentation occurs, allocating new memory may be impossible: even though there is enough total free memory, there may not be one contiguous block large enough for the requested allocation.

## Which one should I use - the stack or the heap?

For people new to programming, it's probably a good idea to use the stack, since it's easier. Because the stack is small, you would want to use it when you know exactly how much memory you will need for your data, or if you know the size of your data is very small. It's better to use the heap when you know that you will need a lot of memory for your data, or when you are just not sure how much memory you will need (as with a dynamic array).

## 1. What is data structure?

A data structure is a way of organizing data that considers not only the items stored, but also their relationship to each other. Advance knowledge about the relationship between data items allows designing of efficient algorithms for the manipulation of data.

## 2. List out the areas in which data structures are applied extensively?

1. Compiler design
2. Operating system
3. Database management system
4. Statistical analysis package
5. Numerical analysis
6. Graphics
7. Artificial intelligence
8. Simulation

## 3. What are the major data structures used in the following areas: RDBMS, network data model, and hierarchical data model?

1. RDBMS = Array (i.e. array of structures)
2. Network data model = Graph
3. Hierarchical data model = Trees

## 4. If you are using the C language to implement the heterogeneous linked list, what pointer type will you use?

A heterogeneous linked list contains different data types in its nodes, and we need a link pointer that can connect them. It is not possible to use ordinary (typed) pointers for this, so we go for the void pointer. A void pointer is capable of storing a pointer to any type, as it is a generic pointer type.

## 5. What is the minimum number of queues needed to implement a priority queue?

Two. One queue is used for actual storing of data and another for storing priorities.
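One minimal way to realize this two-queue idea is sketched below in Java, using two parallel FIFO queues, one for data and one for priorities. The class and method names are ours, invented for illustration:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A minimal two-queue priority queue: one queue stores the data,
// the parallel queue stores each item's priority at the same position.
public class TwoQueuePriorityQueue {
    private final Queue<String> data = new ArrayDeque<>();
    private final Queue<Integer> priorities = new ArrayDeque<>();

    public void enqueue(String item, int priority) {
        data.add(item);
        priorities.add(priority);
    }

    // Removes and returns the item with the highest priority (O(n) scan).
    public String dequeueHighest() {
        int best = Integer.MIN_VALUE;
        for (int p : priorities) best = Math.max(best, p);

        int n = data.size();
        String result = null;
        for (int i = 0; i < n; i++) {
            String item = data.remove();
            int p = priorities.remove();
            if (result == null && p == best) {
                result = item;       // take the first item with best priority
            } else {
                data.add(item);      // rotate everything else back in order
                priorities.add(p);
            }
        }
        return result;
    }
}
```

This keeps both queues strictly FIFO, at the cost of an O(n) scan per removal; it is a sketch of the interview answer, not a production priority queue (which would normally use a heap).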

## 6. What is the data structure used to perform recursion?

Stack. Because of its LIFO (Last In First Out) property, it remembers its 'caller', so it knows where to return when the function has to return. Recursion makes use of the system stack for storing the return addresses of the function calls.

Every recursive function has its equivalent iterative (non-recursive) function. But even when such an equivalent iterative procedure is written, an explicit stack has to be used.
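The point about an explicit stack can be illustrated by converting a recursive routine to an iterative one. Here is a hypothetical Java example (factorial, with names of our choosing) where an explicit Deque plays the role of the call stack:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeRecursion {
    // Recursive version: the system call stack remembers each pending multiply.
    static long factorialRecursive(long n) {
        if (n <= 1) return 1;
        return n * factorialRecursive(n - 1);
    }

    // Iterative version: an explicit stack takes over the call stack's job.
    static long factorialIterative(long n) {
        Deque<Long> stack = new ArrayDeque<>();
        while (n > 1) {
            stack.push(n);   // "call": remember the pending value
            n--;
        }
        long result = 1;
        while (!stack.isEmpty()) {
            result *= stack.pop();   // "return": resume each pending multiply
        }
        return result;
    }
}
```

Both versions compute the same result; the iterative one simply makes the stack that recursion uses implicitly visible as a data structure.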

## 7. What are the notations used in the evaluation of arithmetic expressions using prefix and postfix forms?

Polish and Reverse Polish notations.

## 8. Convert the expression ((A + B) * C - (D - E) ^ (F + G)) to equivalent prefix and postfix notations.

1. Prefix notation: - * + A B C ^ - D E + F G
2. Postfix notation: A B + C * D E - F G + ^ -
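Postfix expressions like the one above are evaluated with a stack, which is why they matter in practice. A small, illustrative Java evaluator (our own sketch, restricted to space-separated integers and the +, -, * operators):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class PostfixEvaluator {
    // Evaluates a space-separated postfix expression of integers
    // and the operators + - * (a simplified, illustrative grammar).
    static int evaluate(String expression) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (String token : expression.split("\\s+")) {
            switch (token) {
                case "+": case "-": case "*": {
                    int right = stack.pop();   // operands come off in reverse
                    int left = stack.pop();
                    switch (token) {
                        case "+": stack.push(left + right); break;
                        case "-": stack.push(left - right); break;
                        case "*": stack.push(left * right); break;
                    }
                    break;
                }
                default:
                    stack.push(Integer.parseInt(token));   // operand
            }
        }
        return stack.pop();
    }
}
```

For example, "3 4 + 5 *" is the postfix form of (3 + 4) * 5 and evaluates to 35; no parentheses or precedence rules are needed, because the operator order is already encoded in the notation.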

## 9. Sorting is not possible by using which of the following methods? (Insertion, Selection, Exchange, Deletion)

Sorting is not possible by using deletion. Using insertion we can perform insertion sort, using selection we can perform selection sort, and using exchange we can perform bubble sort (and other similar sorting methods). But no sorting method can be done just by using deletion.

## 11. List out few of the applications of tree data structures?

1. The manipulation of arithmetic expressions
2. Symbol table construction
3. Syntax analysis

## 12. List out few of the applications that make use of multilinked structures?

1. Sparse matrix
2. Index generation

13. In tree construction which is the suitable efficient data structure? (Array, Linked list, Stack, Queue)

Linked list is the most suitable and efficient data structure for tree construction.

## 14. What type of algorithm is used in solving the 8 Queens problem?

Backtracking.

## 15. In an AVL tree, under what condition must rebalancing be done?

If the balance factor (also called the 'height factor') of a node is greater than 1 or less than -1.

## 16. What is the bucket size when overlapping and collision occur at the same time?

One. If only one entry is possible in the bucket, then when a collision occurs there is no way to accommodate the colliding value. This results in the overlapping of values.

## 17. Classify the hashing functions based on the various methods by which the key value is found.

Direct method, subtraction method, modulo-division method, digit-extraction method, mid-square method, folding method, pseudo-random method.

## 18. What are the types of collision resolution techniques and the methods used in each type?

Open addressing (closed hashing): the methods used include overflow block.

Closed addressing (open hashing): the methods used include linked list and binary tree.

## 19. In RDBMS, what is the efficient data structure used in the internal storage representation?

B+ tree. In a B+ tree, all the data is stored only in the leaf nodes, which makes searching easier; the leaf nodes correspond to the records to be stored.

## 20. What is a spanning tree?

A spanning tree is a tree associated with a network. All the nodes of the graph appear on the tree once. A minimum spanning tree is a spanning tree organized so that the total edge weight between nodes is minimized.

## 21. Does the minimum spanning tree of a graph give the shortest distance between any 2 specified nodes?

No. The minimum spanning tree assures that the total weight of the tree is kept at its minimum, but it does not mean that the distance between any two nodes involved in the minimum spanning tree is minimum.

## 22. Which is the simplest file structure? (Sequential, Indexed, Random)

Sequential is the simplest file structure.

## 23. Is a linked list a linear or non-linear data structure?

According to access strategy, a linked list is linear. According to storage, a linked list is non-linear.

Data-structure Test

www.careerride.com


## 5. Which of the following statements hold true for binary trees?

- The left subtree of a node contains only nodes with keys less than the node's key
- The right subtree of a node contains only nodes with keys greater than the node's key
- Both a and b above
- Both left and right subtrees contain only nodes with keys less than the node's key

## 6. The time required in the best case for a search operation in a binary tree is

- O(n)
- O(log n)
- O(2n)
- O(log 2n)

## 8. Which of the following statements holds true for red-black trees?

- In red-black trees, the root does not contain data.
- In red-black trees, the leaf nodes are not relevant and do not contain data.
- In red-black trees, the leaf nodes are relevant but do not contain data.

## 10. In what order does in-order traversal visit the nodes of a binary tree?

left subtree -> root -> right subtree

## 11. Which of the following linked lists has its last node pointing to the first node?

## 12. Items in a priority queue are entered in a _____________ order

- random
- order of priority

## 14. Which of the following are non-linear data structures?

- Binary trees
- Stacks
- Graphs
- Both a and c above

## 15. A ___________ tree is a tree where for each parent node, there is only one associated child node

- complete binary tree
- degenerate tree

## 16. In graphs, a hyperedge is an edge that is allowed to take on any number of _____________

- vertices
- edges
- labels
- nodes
- data

## 18. A key-value pair is usually seen in

- Hash tables
- Heaps
- Both a and b
- Skip list

## 19. In a heap, the element with the greatest key is always in the ___________ node

- leaf
- root
- first node of right subtree

## 20. In a _____________ tree, the heights of the two child subtrees of any node differ by at most one

- Binary tree
- Splay tree
- AVL tree
