
ABDUL HAKEEM COLLEGE OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CS8451 – DESIGN AND ANALYSIS OF ALGORITHMS

PART – A QUESTIONS WITH ANSWERS

UNIT – I INTRODUCTION

1. Define algorithm.

An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for


obtaining a required output for any legitimate input in a finite amount of time.

2. Give notion of an algorithm.


        Problem
           ↓
       Algorithm
           ↓
Input → Computer → Output

3. What is algorithm design Technique?


An algorithm design technique (or “strategy” or “paradigm”) is a general approach to solving
problems algorithmically that is applicable to a variety of problems from different areas of computing.

4. Write Euclid’s algorithm for GCD calculation.

ALGORITHM Euclid(m, n)
//Computes gcd(m, n) by Euclid’s algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
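As a runnable sketch, the pseudocode above maps directly to Python (the function name euclid_gcd is ours):

```python
def euclid_gcd(m: int, n: int) -> int:
    """Compute gcd(m, n) by Euclid's algorithm.

    Repeats r <- m mod n; m <- n; n <- r until n = 0, then returns m.
    """
    while n != 0:
        m, n = n, m % n
    return m
```

For example, euclid_gcd(60, 24) returns 12.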
5. Define time efficiency and space efficiency.
Time efficiency indicates how fast the algorithm runs.
Space efficiency refers to the amount of memory units required by the algorithm in addition to
the space needed for its input and output.

6. What are the characteristics of an algorithm?


 Simplicity
 Generality
 Optimality

7. What are the important problem types?


 Sorting
 Searching
 String processing
 Graph problems
 Combinatorial problems
 Geometric problems
 Numerical problems

8. What are the properties of sorting algorithms?


 Stable
 In place

9. How will you measure an input size of an algorithm?


Parameter n indicating the algorithm’s input size such as size of the list, the number of
polynomial coefficients, order of the matrix, the number of characters, the number’s magnitude, etc.

10. How will you measure the running time of an algorithm?


First, identify the most important operation of the algorithm, called the basic operation, the
operation contributing the most to the total running time, and compute the number of times the basic
operation is executed.

Let cop be the execution time of an algorithm’s basic operation on a particular computer, and let
C(n) be the number of times this operation needs to be executed for this algorithm. Then we can
estimate the running time T(n) of a program implementing this algorithm on that computer by the
formula
T(n) ≈ cop C(n).
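To make the formula concrete, here is a small Python sketch counting the basic operation of sequential search, the key comparison (the 10-nanosecond figure for cop in the comment is made up for illustration):

```python
def sequential_search_count(a, key):
    """Return (index, comparisons); the key comparison is the basic operation."""
    count = 0
    for i, x in enumerate(a):
        count += 1
        if x == key:
            return i, count
    return -1, count

# Worst case (key absent): C(n) = n comparisons, so with a hypothetical
# cop = 10 ns per comparison, T(n) ≈ 10e-9 * n seconds.
_, comparisons = sequential_search_count([3, 1, 4, 1, 5], 9)
```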
11. Define Order of growth.
The order of growth of the running time of an algorithm gives a simple characterization of the
algorithm's efficiency and also allows us to compare the relative performance (efficiencies) of
alternative algorithms.

12. Define best, worst and average case efficiency?


The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which
is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that
size.
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n,
which is an input (or inputs) of size n for which the algorithm runs the longest among all possible
inputs of that size.
The average-case efficiency yields information about an algorithm’s behavior on a “typical” or
“random” input, which neither the worst-case analysis nor its best-case counterpart can provide.

13. Define amortized efficiency.


Amortized efficiency applies not to a single run of an algorithm but rather to a sequence of
operations performed on the same data structure. It turns out that in some situations a single operation
can be expensive, but the total time for an entire sequence of n such operations is always significantly
better than the worst-case efficiency of that single operation multiplied by n.

14. What is meant by asymptotic notations? What are its types?


Asymptotic notations are mathematical notations for describing an algorithm’s running time by
its behavior as the input size grows, i.e., its order of growth. They allow us to compare and rank
orders of growth.
Types of Asymptotic Notations are:
 Big Oh (Ο)
 Big Omega (Ω)
 Big Theta (Θ)

15. Define recurrence relation.


A recurrence relation is an equation that defines a sequence based on a rule that gives the next
term as a function of the previous term(s).
The simplest form of a recurrence relation is the case where the next term depends only on the
immediately previous term. If we denote the nth term in the sequence by xn, such a recurrence relation
is of the form

xn+1 = f(xn)
for some function f.
16. Define Big Oh Notation. Give examples.
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that
t(n) ≤ cg(n) for all n ≥ n0.

Examples:
n ∈ O(n²), 100n + 5 ∈ O(n²), n(n − 1) ∈ O(n²).

17. Define Big Omega Notation. Give examples.


A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by
some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and
some nonnegative integer n0 such that
t(n) ≥ cg(n) for all n ≥ n0.

Examples:
n³ ∈ Ω(n²), n(n − 1) ∈ Ω(n²).
18. Define Big Theta Notation. Give examples.
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above
and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive
constants c1 and c2 and some nonnegative integer n0 such that
c2 g(n) ≤ t(n) ≤ c1 g(n) for all n ≥ n0.

Example:
n(n − 1) ∈ Θ(n²).

19. Compare the orders of growth of n(n − 1) and n2.

lim(n→∞) n(n − 1) / n² = lim(n→∞) (1 − 1/n) = 1.

Since the limit is equal to a positive constant, the functions have the same order of growth or,
symbolically, n(n − 1) ∈ Θ(n²).

20. List out the basic asymptotic efficiency classes.


Constant        1
Logarithmic     log n
Linear          n
Linearithmic    n log n
Quadratic       n²
Cubic           n³
Exponential     2ⁿ
Factorial       n!
21. Give the general plan for Analyzing the Time Efficiency of Nonrecursive Algorithms.
 Decide on a parameter (or parameters) indicating an input’s size.
 Identify the algorithm’s basic operation. (As a rule, it is located in the inner-most loop.)
 Check whether the number of times the basic operation is executed depends only on the size
of an input. If it also depends on some additional property, the worst-case, average-case,
and, if necessary, best-case efficiencies have to be investigated separately.
 Set up a sum expressing the number of times the algorithm’s basic operation is executed.
 Using standard formulas and rules of sum manipulation, either find a closed-form formula
for the count or, at the very least, establish its order of growth.

22. Give the general plan for Analyzing the Time Efficiency of Recursive Algorithms.
 Decide on a parameter (or parameters) indicating an input’s size.
 Identify the algorithm’s basic operation.
 Check whether the number of times the basic operation is executed can vary on different
inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies
must be investigated separately.
 Set up a recurrence relation, with an appropriate initial condition, for the number of times
the basic operation is executed.
 Solve the recurrence or, at least, ascertain the order of growth of its solution.

23. Give the general plan for Empirical Analysis of Algorithm Time Efficiency.

 Understand the experiment’s purpose.
 Decide on the efficiency metric M to be measured and the measurement unit (an operation
count vs. a time unit).
 Decide on characteristics of the input sample (its range, size, and so on).
 Prepare a program implementing the algorithm (or algorithms) for the experimentation.
 Generate a sample of inputs.
 Run the algorithm (or algorithms) on the sample’s inputs and record the data observed.
 Analyze the data obtained.

24. Write an algorithm for computing random number generation.

ALGORITHM Random(n, m, seed, a, b)
//Generates a sequence of n pseudorandom numbers by the linear congruential method
//Input: A positive integer n and positive integer parameters m, seed, a, b
//Output: A sequence r1, . . . , rn of n pseudorandom integers uniformly distributed
//among integer values between 0 and m − 1
r0 ← seed
for i ← 1 to n do
    ri ← (a ∗ ri−1 + b) mod m
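The same linear congruential method in Python (parameter names follow the pseudocode; the sample values in the usage note are arbitrary):

```python
def lcg(n, m, seed, a, b):
    """Generate n pseudorandom integers in [0, m-1] by the
    linear congruential method: r_i = (a * r_{i-1} + b) mod m."""
    r = seed
    sequence = []
    for _ in range(n):
        r = (a * r + b) % m
        sequence.append(r)
    return sequence
```

For instance, lcg(3, 16, 7, 5, 3) produces [6, 1, 8].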

25. Define algorithm visualization. Mention its types.

An algorithm visualization can be defined as the use of images to convey some useful
information about algorithms. That information can be a visual illustration of an algorithm’s
operation, of its performance on different kinds of inputs, or of its execution speed versus that
of other algorithms for the same problem.
There are two principal variations of algorithm visualization:
 Static algorithm visualization
 Dynamic algorithm visualization, also called algorithm animation
UNIT II BRUTE FORCE AND DIVIDE – AND – CONQUER

1. Define Brute – Force method.


Brute force is a straightforward approach to solving a problem, usually directly based on the
problem statement and definitions of the concepts involved.
Examples:
 Definition-based algorithm for matrix multiplication
 Selection sort
 Sequential search
 Straightforward string-matching algorithm

2. Define Closest-Pair problem.


The closest-pair problem calls for finding the two closest points in a set of n points. It is the
simplest of a variety of problems in computational geometry that deals with proximity of points in the
plane or higher-dimensional spaces.
Applications:
 Cluster analysis in statistics.
 Air-Traffic Control
 Database, etc.

3. Define convex set.


A set of points (finite or infinite) in the plane is called convex if for any two points p and q in
the set, the entire line segment with the endpoints at p and q belongs to the set.

4. Define convex hull.


The convex hull of a set S of points is the smallest convex set containing S. (The “smallest”
requirement means that the convex hull of S must be a subset of any convex set containing S). The
convex hull of any set S of n > 2 points not all on the same line is a convex polygon with the vertices at
some of the points of S.

5. What is meant by exhaustive search?


Exhaustive search is a brute-force approach to combinatorial problems. It suggests generating
each and every combinatorial object of the problem, selecting those of them that satisfy all the
constraints, and then finding a desired object (e.g., the one that optimizes some objective function).

6. Define Knapsack problem.


Given n items of known weights w1 , w2 , . . . , wn and values v1 , v2 , . . . , vn and a knapsack of
capacity W, find the most valuable subset of the items that fit into the knapsack.
7. List the general plan of Divide-and-Conquer methodology.
1. A problem is divided into several subproblems of the same type, ideally of about equal size.
2. The subproblems are solved (typically recursively, though sometimes a different algorithm
is employed, especially when subproblems become small enough).
3. If necessary, the solutions to the subproblems are combined to get a solution to the original
problem.

11. Give the general divide-and-conquer recurrence equation.


The recurrence for the running time T(n):
T(n) = aT(n/b) + f(n)
where a and b are constants and f(n) is a function that accounts for the time spent on dividing an
instance of size n into instances of size n/b and combining their solutions. Assume that size n is
a power of b.

12. Define Divide-and-Conquer strategy.


Divide-and-conquer is a general algorithm design technique that solves a problem by dividing it
into several smaller subproblems of the same type (ideally, of about equal size), solving each of them
recursively, and then combining their solutions to get a solution to the original problem. Many efficient
algorithms are based on this technique, although it can be both inapplicable and inferior to simpler
algorithmic solutions.

13. Define Merge Sort.


Merge sort is a divide-and-conquer sorting algorithm. It works by dividing an input array into
two halves, sorting them recursively, and then merging the two sorted halves to get the original array
sorted. The algorithm’s time efficiency is in Θ (n log n) in all cases, with the number of key
comparisons being very close to the theoretical minimum. Its principal drawback is a significant extra
storage requirement.
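A minimal Python sketch of merge sort; the auxiliary lists built during merging illustrate the extra-storage drawback mentioned above:

```python
def merge_sort(a):
    """Sort by splitting in half, sorting each half recursively, and merging."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # append whichever half still has elements
    merged.extend(right[j:])
    return merged
```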

14. Define Quick Sort.


Quick sort is a divide-and-conquer sorting algorithm that works by partitioning its input
elements according to their value relative to some preselected element. Quick sort is noted for its
superior efficiency among n log n algorithms for sorting randomly ordered arrays but also for the
quadratic worst-case efficiency.
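A list-based Python sketch of the partitioning idea (the classic version partitions in place; this simplified form uses the first element as the pivot):

```python
def quick_sort(a):
    """Partition around a pivot, then sort the partitions recursively.
    Average case Theta(n log n); worst case Theta(n^2), e.g. sorted input
    with a first-element pivot."""
    if len(a) <= 1:
        return list(a)
    pivot, rest = a[0], a[1:]
    smaller = [x for x in rest if x < pivot]
    larger_or_equal = [x for x in rest if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger_or_equal)
```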

15. List out the stages of Heap Sort.

Stage 1 (heap construction): Construct a heap for a given array.


Stage 2 (maximum deletions): Apply the root-deletion operation n − 1 times to the remaining
heap.
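The two stages can be sketched in Python with an explicit sift-down over an array-based max-heap (children of index i at 2i+1 and 2i+2):

```python
def heap_sort(values):
    """Stage 1: heap construction; Stage 2: n-1 root deletions."""
    a = list(values)
    n = len(a)

    def sift_down(i, size):
        # Push a[i] down until the max-heap property holds.
        while 2 * i + 1 < size:
            child = 2 * i + 1
            if child + 1 < size and a[child + 1] > a[child]:
                child += 1
            if a[i] >= a[child]:
                break
            a[i], a[child] = a[child], a[i]
            i = child

    for i in range(n // 2 - 1, -1, -1):  # Stage 1: build the heap
        sift_down(i, n)
    for end in range(n - 1, 0, -1):      # Stage 2: delete the root n-1 times
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a
```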
UNIT III DYNAMIC PROGRAMMING AND GREEDY TECHNIQUE

1. Define Dynamic Programming technique.


Dynamic programming is a technique for solving problems with overlapping subproblems.
Typically, these subproblems arise from a recurrence relating a solution to a given problem with
solutions to its smaller subproblems of the same type. Dynamic programming suggests solving each
smaller subproblem once and recording the results in a table from which a solution to the original
problem can be then obtained.

2. Define Principle of Optimality.


An optimal solution has the property that whatever the initial state and initial decision are, the
remaining decisions must constitute an optimal solution with regard to the state resulting from the first
decision.

3. What is the use of memory functions?


The memory function technique seeks to combine the strengths of the top-down and bottom-up
approaches to solving problems with overlapping subproblems. It does this by solving, in the top-down
fashion but only once, just the necessary subproblems of a given problem and recording their solutions
in a table.

4. Define optimal binary search tree. Give an example.


An optimal binary search tree (BST), sometimes called a weight-balanced binary tree, is a
binary search tree which provides the smallest possible search time (or expected search time) for a
given sequence of accesses (or access probabilities).
Example:
5. Define transitive closure. Give an example.
The transitive closure of a directed graph with n vertices can be defined as the n × n boolean
matrix T = {tij }, in which the element in the ith row and the jth column is 1 if there exists a nontrivial
path (i.e., directed path of a positive length) from the ith vertex to the jth vertex; otherwise, tij is 0.
Example:

6. Write down the recurrence equation used in Warshall’s algorithm.


Formula for generating the elements of matrix R(k) from the elements of matrix R(k−1):

rᵢⱼ(k) = rᵢⱼ(k−1) or (rᵢₖ(k−1) and rₖⱼ(k−1))
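Applying this recurrence to a 0/1 adjacency matrix gives Warshall's algorithm; a Python sketch (the three-vertex test graph in the usage note is made up):

```python
def warshall(adj):
    """Transitive closure of a digraph given as a 0/1 adjacency matrix:
    r[i][j] = r[i][j] or (r[i][k] and r[k][j]) for k = 0..n-1."""
    n = len(adj)
    r = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r
```

For the path 0 → 1 → 2, the closure gains the entry for 0 → 2.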

7. Write down the recurrence equation used in Floyd’s algorithm.


Formula for generating the elements of matrix D(k) from the elements of matrix D(k−1):

dᵢⱼ(k) = min{ dᵢⱼ(k−1), dᵢₖ(k−1) + dₖⱼ(k−1) } for k ≥ 1,

dᵢⱼ(0) = wᵢⱼ.
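The recurrence translates into Floyd's algorithm over a weight matrix; a Python sketch (infinity stands for a missing edge):

```python
INF = float("inf")  # stands for "no edge"

def floyd(weights):
    """All-pairs shortest path lengths from a weight matrix:
    d[i][j] = min(d[i][j], d[i][k] + d[k][j]) for k = 0..n-1."""
    n = len(weights)
    d = [row[:] for row in weights]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d
```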

8. Compare Dynamic Programming with Divide and Conquer technique.

S. No.  Divide-and-Conquer                              Dynamic Programming

1       Divides a large problem into sub-problems;      Divides a large problem into sub-problems;
        each sub-problem may be solved more than        each sub-problem is solved only once.
        once.

2       There is recursion.                             There is no recursion.

3       The result of the same sub-problem may be       The key is remembering: it stores the
        computed more than once.                        results of sub-problems in a table.

4       The sub-problems are independent of each        The sub-problems are not independent of
        other.                                          each other.
9. What is meant by All-Pairs Shortest path problem?
Given a directed, connected weighted graph G(V,E), for each edge ⟨u,v⟩ ∈ E, a weight w(u,v) is
associated with the edge. The all pairs shortest paths problem (APSP) is to find a shortest path from u
to v for every pair of vertices u and v in V.

10. Define greedy technique.


The greedy approach suggests constructing a solution through a sequence of steps, each
expanding a partially constructed solution obtained so far, until a complete solution to the problem is
reached. On each step, the choice made must be:

 feasible, i.e., it has to satisfy the problem’s constraints


 locally optimal, i.e., it has to be the best local choice among all feasible choices
available on that step
 irrevocable, i.e., once made, it cannot be changed on subsequent steps of the algorithm.

11. Define minimum spanning tree.


A spanning tree of an undirected connected graph is its connected acyclic subgraph (i.e., a tree)
that contains all the vertices of the graph. If such a graph has weights assigned to its edges, a minimum
spanning tree is its spanning tree of the smallest weight, where the weight of a tree is defined as the
sum of the weights on all its edges.

12. List down the various operations performed on disjoint sets.


 makeset(x) creates a one-element set {x}. It is assumed that this operation can be applied to
each of the elements of set S only once.
 find(x) returns a subset containing x.
 union(x, y) constructs the union of the disjoint subsets Sx and Sy containing x and y,
respectively, and adds it to the collection to replace Sx and Sy , which are deleted from it.

13. Define union by size and union by rank.


Union by size attaches the root of the smaller tree (in terms of the number of nodes) to the root
of the larger one, with ties broken arbitrarily.
Union by rank always attaches the root of the shallower tree to the root of the deeper tree.

14. What is meant by Dijkstra’s algorithm?


Dijkstra’s algorithm solves the single-source shortest-path problem of finding shortest paths
from a given vertex (the source) to all the other vertices of a weighted graph or digraph. Dijkstra’s
algorithm always yields a correct solution for a graph with nonnegative weights.
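A priority-queue sketch in Python (the adjacency-list dictionary format is an assumption of this example):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for nonnegative edge weights.
    graph: {vertex: [(neighbor, weight), ...]} (an assumed input format)."""
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:
            continue  # stale queue entry: a shorter path was already found
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(queue, (dist[v], v))
    return dist
```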
15. What is meant by Prim’s algorithm?
Prim’s algorithm is a greedy algorithm for constructing a minimum spanning tree of a weighted
connected graph. It works by attaching to a previously constructed subtree a vertex closest to the
vertices already in the tree.

16. What is meant by Kruskal’s algorithm?


Kruskal’s algorithm is a greedy algorithm for constructing a minimum spanning tree problem. It
constructs a minimum spanning tree by selecting edges in nondecreasing order of their weights
provided that the inclusion does not create a cycle.
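A Python sketch combining Kruskal's edge selection with the union-find (disjoint-set) operations described earlier; the (weight, u, v) edge encoding is an assumption of this example:

```python
def kruskal(n, edges):
    """MST of a connected graph: take edges in nondecreasing weight order,
    skipping any edge that would create a cycle (detected via union-find).
    edges: list of (weight, u, v) tuples with vertices numbered 0..n-1."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:           # different subsets, so no cycle
            parent[ru] = rv    # union
            tree.append((w, u, v))
    return tree
```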

17. Define fixed-length encoding and variable length encoding.


 Fixed-length encoding, which assigns a bit string of the same length m to each symbol.
 Variable-length encoding, which assigns codewords of different lengths to different
symbols.

18. What is meant by a codeword?


A codeword is the sequence of bits assigned to one of the text’s symbols for encoding it.

19. What is meant by prefix free codes?


A prefix-free code (or prefix code) is one in which no codeword is a prefix of the codeword of another symbol.

20. Define Huffman tree.


A Huffman tree is a binary tree that minimizes the weighted path length from the root to the
leaves, where the leaves carry predefined weights.

21. Define Huffman code.


A Huffman code is an optimal prefix-free variable-length encoding scheme that assigns bit
strings to symbols based on their frequencies in a given text. This is accomplished by a greedy
construction of a binary tree whose leaves represent the alphabet symbols and whose edges are labeled
with 0’s and 1’s.
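A Python sketch of the greedy construction using a priority queue; it tracks only the codewords rather than the tree itself, and since tie-breaking among equal frequencies is arbitrary, the specific codes (though not their optimal lengths) may differ from other implementations:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Greedy Huffman construction: repeatedly merge the two least
    frequent subtrees, prefixing their codewords with 0 and 1."""
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    counter = len(heap)  # unique tie-breaker so dicts are never compared
    if len(heap) == 1:   # degenerate one-symbol alphabet
        return {sym: "0" for sym in heap[0][2]}
    while len(heap) > 1:
        f1, _, codes1 = heapq.heappop(heap)
        f2, _, codes2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

For "aaaabbc" the most frequent symbol 'a' gets a one-bit codeword, and no codeword is a prefix of another.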

22. Distinguish between Dynamic Programming and Greedy Technique.

S. No.  Greedy Technique                                Dynamic Programming

1       The optimal solution is generated as a          The optimal solution is selected by
        sequence of locally optimal decisions.          comparing all the possible solutions.

2       Only one decision sequence is generated.        Many decision sequences are generated.

3       Top-down approach.                              Bottom-up approach.

4       Easier to code.                                 More difficult to code.

5       It may or may not generate an optimal           It definitely generates an optimal
        solution.                                       solution.
UNIT IV ITERATIVE IMPROVEMENT

1. What is meant by the Iterative improvement technique? Give examples.


The iterative-improvement technique involves finding a solution to an optimization problem by
generating a sequence of feasible solutions with improving values of the problem’s objective function.
Each subsequent solution in such a sequence typically involves a small, localized change in the
previous feasible solution. When no such change improves the value of the objective function, the
algorithm returns the last feasible solution as optimal and stops.
Examples:
 Linear Programming Problem
 Maximum Flow Problem
 Maximum Matching Problem

2. What is meant by a Linear Programming Problem (LPP)?


A linear programming problem consists of a linear function to be maximized or minimized
subject to certain constraints in the form of linear equations or inequalities. The general form is

maximize (or minimize)


c1x1 + . . . + cnxn
subject to
ai1x1 + . . . + ainxn ≤ (or ≥ or =) bi for i = 1, . . . , m

x1 ≥ 0, . . . , xn ≥ 0.

3. Define Simplex Method.


The simplex method is the classic method for solving the general linear programming problem.
It works by generating a sequence of adjacent extreme points of the problem’s feasible region with
improving values of the objective function.

4. Define Maximum Flow problem.


The maximum-flow problem is defined as finding the maximum flow possible in a network, a
weighted directed graph with a source and a sink.

5. Define Maximum cardinality matching.


A maximum cardinality matching is the largest subset of edges in a graph such that no two
edges share the same vertex.

6. Define Bipartite Graph.


A graph is called bipartite if all its vertices can be partitioned into two disjoint sets V and U, not
necessarily of the same size, so that every edge connects a vertex in one of these sets to a vertex in the
other set.
7. Define the terms feasible solution, feasible region and optimal solution in LPP.
Feasible Solution is any point (x, y) that satisfies all the constraints of the LP problem.
Feasible Region is the set of all its feasible points.
Optimal Solution is a point in the feasible region with the largest value of the objective
function.

8. Define Extreme Point Theorem.


Any linear programming problem with a nonempty bounded feasible region has an optimal
solution; moreover, an optimal solution can always be found at an extreme point of the problem’s
feasible region.

9. Define flow network.


A digraph satisfying the following properties is called a flow network.

 It contains exactly one vertex with no entering edges; this vertex is called the source and
assumed to be numbered 1 or s.
 It contains exactly one vertex with no leaving edges; this vertex is called the sink and
assumed to be numbered n or t.
 The weight uij of each directed edge (i, j) is a positive integer, called the edge capacity.

10. What is meant by the flow-conservation requirement?


The total amount of the material entering an intermediate vertex must be equal to the total
amount of the material leaving the vertex. This condition is called the flow-conservation requirement.

11. Define stable marriage problem.


A marriage matching M is a set of n (m, w) pairs whose members are selected from disjoint
n-element sets Y and X in a one-one fashion, i.e., each man m from Y is paired with exactly one
woman w from X and vice versa. The stable marriage problem is to find a stable marriage matching
(i.e., no blocking pair) for men’s and women’s given preferences.

12. Define blocking pair.


A pair (m, w), where m ∈ Y, w ∈ X, is said to be a blocking pair for a marriage matching M if
man m and woman w are not matched in M but they prefer each other to their mates in M.

For example, (Bob, Lea) is a blocking pair for the marriage matching M = {(Bob, Ann),
(Jim, Lea), (Tom, Sue)} because they are not matched in M while Bob prefers Lea to Ann and Lea
prefers Bob to Jim.
13. Define Cut of a graph.
A cut of a network is induced by partitioning its vertices into a subset X containing the source
and its complement X̄ containing the sink; the cut C(X, X̄), or simply C, is the set of all the edges with
a tail in X and a head in X̄. The capacity of a cut C(X, X̄), denoted c(X, X̄), is defined as the sum of
the capacities of the edges that compose the cut.

14. Define preflow.


A preflow is a flow that satisfies the capacity constraints but not the flow-conservation
requirement.

15. Define perfect matching.


A perfect matching of a graph is a matching (i.e., an independent edge set) in which every
vertex of the graph is incident to exactly one edge of the matching.
UNIT V COPING WITH THE LIMITATIONS OF ALGORITHM POWER

1. Define trivial lower bound.


A trivial lower bound is based on counting the number of items in the problem’s input that must
be processed and the number of output items that need to be produced.

2. What is meant by decision trees?


A decision tree is a graph that uses a branching method to illustrate every possible outcome of a
decision. A decision tree is a flowchart-like structure in which each internal node represents a "test" on
an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the
test and each leaf node represents a class label (decision taken after computing all attributes). The paths
from root to leaf represent classification rules.

3. What are tractable and intractable problems?


 Problems that can be solved in polynomial time are called tractable.
 Problems that cannot be solved in polynomial time are called intractable.

4. Define class P.
Class P is a class of decision problems that can be solved in polynomial time by (deterministic)
algorithms. This class of problems is called polynomial.

5. Define Hamiltonian Circuit problem.


Determining a path that starts and ends at the same vertex and passes through all the other
vertices exactly once in a graph is called the Hamiltonian circuit problem.

6. Define Traveling Salesman Problem.


Finding the shortest tour through n cities with known positive integer distances between them in
a complete graph is known as the Traveling Salesman Problem.

7. Define Knapsack problem.


Finding the most valuable subset of n items of given positive integer weights and values that fit
into a knapsack of a given positive integer capacity is called the knapsack problem.

8. Define class NP.


Class NP is the class of decision problems that can be solved by nondeterministic polynomial
algorithms. This class of problems is called nondeterministic polynomial.
This class includes all the problems in P:
P ⊆ NP.
9. What is meant by the Halting problem?
The halting problem is a decision problem about properties of computer programs on a fixed
Turing-complete model of computation, i.e., all programs that can be written in some given
programming language that is general enough to be equivalent to a Turing machine. The problem is to
determine, given a program and an input to the program, whether the program will eventually halt
when run with that input.

10. List out the two stages of a nondeterministic algorithm.


A nondeterministic algorithm is a two-stage procedure that takes as its input an instance I of a
decision problem and does the following.

 Nondeterministic (“guessing”) stage: An arbitrary string S is generated that can be thought of


as a candidate solution to the given instance I.

 Deterministic (“verification”) stage: A deterministic algorithm takes both I and S as its input
and outputs yes if S represents a solution to instance I.

11. Define Reduction.


A decision problem D1 is said to be polynomially reducible to a decision problem D2 , if there
exists a function t that transforms instances of D1 to instances of D2 such that:
1. t maps all yes instances of D1 to yes instances of D2 and all no instances of D1 to
no instances of D2.
2. t is computable by a polynomial time algorithm.

12. Define class NP-Complete.


A decision problem D is said to be NP-complete if:
1. it belongs to class NP.
2. every problem in NP is polynomially reducible to D.

13. Define State Space tree.


State space is the set of all paths from root node to other nodes
Solution states are the problem states s for which the path from the root node to s defines a
tuple in the solution space
– In variable tuple size formulation tree, all nodes are solution states
– In fixed tuple size formulation tree, only the leaf nodes are solution states
– Partitioned into disjoint sub-solution spaces at each internal node
Answer states are those solution states s for which the path from root node to s defines a tuple
that is a member of the set of solutions
– These states satisfy implicit constraints
State space tree is the tree organization of the solution space.
14. What is meant by Backtracking technique?
Backtracking constructs its state-space tree in the depth-first-search fashion in the majority of
its applications. If the sequence of choices represented by a current node of the state-space tree can be
developed further without violating the problem’s constraints, it is done by considering the first
remaining legitimate option for the next component. Otherwise, the method backtracks by undoing the
last component of the partially built solution and replaces it by the next alternative.

15. Define promising node and non-promising node.


A node in a state-space tree is said to be promising if it corresponds to a partially constructed
solution that may still lead to a complete solution; otherwise, it is called non-promising.

16. Define n-Queens problem.


The problem is to place n queens on an n × n chessboard so that no two queens attack each
other by being in the same row or in the same column or on the same diagonal.

17. Define Subset-Sum problem.


Finding a subset of a given set A = {a1 , . . . , an } of n positive integers whose sum is equal to a
given positive integer d is known as Subset-Sum problem.

For example, for A = {1, 2, 5, 6, 8} and d = 9, there are two solutions: {1, 2, 6} and {1, 8}.
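A backtracking sketch for the example above (this is our illustration; the problem statement does not prescribe an algorithm):

```python
def subset_sum(a, d):
    """Backtrack over include/exclude decisions; a node is non-promising
    when the running sum exceeds d (items are positive integers)."""
    solutions = []

    def backtrack(i, chosen, total):
        if total == d:
            solutions.append(list(chosen))
            return
        if i == len(a) or total > d:  # dead end: backtrack
            return
        chosen.append(a[i])           # decision 1: include a[i]
        backtrack(i + 1, chosen, total + a[i])
        chosen.pop()                  # decision 2: exclude a[i]
        backtrack(i + 1, chosen, total)

    backtrack(0, [], 0)
    return solutions
```

For A = {1, 2, 5, 6, 8} and d = 9 it finds the two solutions from the text, {1, 2, 6} and {1, 8}.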

18. Define Branch-and-Bound technique.


Branch-and-bound is an algorithm design technique that enhances the idea of generating a state-
space tree with the idea of estimating the best value obtainable from a current node of the decision tree:
if such an estimate is not superior to the best solution seen up to that point in the processing, the node is
eliminated from further consideration.

19. Define Assignment problem.


Assignment problem is defined as assigning n people to n jobs so that the total cost of the
assignment is as small as possible.

20. Define Approximation algorithms.


A polynomial-time approximation algorithm is said to be a c-approximation algorithm, where
c ≥ 1, if the accuracy ratio of the approximations it produces does not exceed c for any instance of the
problem:
r(sa) ≤ c.
21. Define class NP hard.
NP-hardness (non-deterministic polynomial-time hard), in computational complexity theory, is
a class of problems that are, informally, "at least as hard as the hardest problems in NP". That is, a
problem H is NP-hard when every problem L in NP can be reduced in polynomial time to H.

22. Write down the equation of accuracy ratio.


The accuracy ratio equation (for a minimization problem) is

r(sa) = f(sa) / f(s∗)

where sa is an approximate solution to the problem and s∗ is an exact solution to the problem.

23. List out the steps involved in Nearest-neighbor algorithm.


Step 1 Choose an arbitrary city as the start.
Step 2 Repeat the following operation until all the cities have been visited:
Go to the unvisited city nearest the one visited last (ties can be broken arbitrarily).
Step 3 Return to the starting city.
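The three steps above can be sketched in Python for cities given as 2-D points (starting at city 0, an arbitrary choice, with ties broken by min):

```python
import math

def nearest_neighbor_tour(points):
    """TSP heuristic: repeatedly visit the nearest unvisited city,
    then return to the start."""
    unvisited = set(range(1, len(points)))
    tour, current = [0], 0
    while unvisited:
        nearest = min(unvisited,
                      key=lambda j: math.dist(points[current], points[j]))
        unvisited.remove(nearest)
        tour.append(nearest)
        current = nearest
    tour.append(0)  # Step 3: return to the starting city
    return tour
```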

24. List out the steps involved in Multifragment-heuristic algorithm.


Step 1 Sort the edges in increasing order of their weights. (Ties can be broken arbitrarily.)
Initialize the set of tour edges to be constructed to the empty set.
Step 2 Repeat this step n times, where n is the number of cities in the instance being solved:
Add the next edge on the sorted edge list to the set of tour edges, provided this addition
does not create a vertex of degree 3 or a cycle of length less than n;
otherwise, skip the edge.
Step 3 Return the set of tour edges.

25. List out the steps involved in Twice-around-the-tree algorithm.


Step 1 Construct a minimum spanning tree of the graph corresponding to a given instance of the
traveling salesman problem.
Step 2 Starting at an arbitrary vertex, perform a walk around the minimum spanning tree
recording all the vertices passed by. (This can be done by a DFS traversal.)
Step 3 Scan the vertex list obtained in Step 2 and eliminate from it all repeated occurrences of
the same vertex except the starting one at the end of the list. (This step is equivalent to
making shortcuts in the walk.) The vertices remaining on the list will form a
Hamiltonian circuit, which is the output of the algorithm.
26. Write down greedy algorithm for the discrete knapsack algorithm.
Step 1 Compute the value-to-weight ratios ri = vi /wi , i = 1, . . . , n, for the items given.
Step 2 Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3 Repeat the following operation until no item is left in the sorted list:
If the current item on the list fits into the knapsack, place it in the knapsack and proceed
to the next item; otherwise, just proceed to the next item.

27. Write down greedy algorithm for the continuous knapsack algorithm.
Step 1 Compute the value-to-weight ratios ri = vi /wi , i = 1, . . . , n, for the items given.
Step 2 Sort the items in nonincreasing order of the ratios computed in Step 1.
Step 3 Repeat the following operation until the knapsack is filled to its full capacity or no item
is left in the sorted list:
If the current item on the list fits into the knapsack in its entirety, take it and proceed to
the next item; otherwise, take its largest fraction to fill the knapsack to its full capacity
and stop.
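The greedy steps above can be sketched in Python (the (value, weight) pair encoding is an assumption of this example):

```python
def continuous_knapsack(items, capacity):
    """Greedy by value-to-weight ratio; the last item taken may be fractional.
    items: list of (value, weight) pairs."""
    total_value = 0.0
    remaining = capacity
    # Step 1 and Step 2: sort by value-to-weight ratio, largest first.
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if remaining <= 0:
            break
        # Step 3: whole item if it fits, otherwise the largest fitting fraction.
        taken = min(weight, remaining)
        total_value += value * (taken / weight)
        remaining -= taken
    return total_value
```

For items (60, 10), (100, 20), (120, 30) and capacity 50, the greedy takes the first two items whole plus two-thirds of the third, for a total value of 240.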

28. What is meant by Local search heuristics?


Local search heuristics, such as the 2-opt, 3-opt, and Lin-Kernighan algorithms, work by
replacing a few edges in the current tour to find a shorter one until no such replacement can be found.

These algorithms are capable of finding in seconds a tour that is within a few percent of
optimum for large Euclidean instances of the traveling salesman problem.

_________
