
[MCQ] Analysis Of Algorithms

Module 1

1. An Algorithm is ___________
a) A procedure for solving a problem
b) A problem
c) A real-life mathematical problem
d) None of the mentioned
Answer: a
Explanation: An algorithm is a stepwise solution to the problem.

2. An algorithm in which we divide the problem into subproblems and then combine the subsolutions to form a solution to the original problem is known as _________
a) Brute Force
b) Divide and Conquer
c) Greedy Algorithm
d) None of the mentioned
Answer: b
Explanation: In Divide and Conquer we divide the problem and then recombine the solution.

3. An algorithm which stores past results and uses them to find new results is _________
a) Brute Force
b) Divide and Conquer
c) Dynamic programming algorithms
d) None of the mentioned
Answer: c
Explanation: In Dynamic programming algorithms we utilize previous results for new ones.

4. The complexity of an algorithm depends upon _________
a) Time only
b) Space only
c) Both Time and Space
d) None of the mentioned
Answer: c
Explanation: For Complexity, we calculate both time and space consumed.

5. An algorithm which tries all the possibilities until the results are satisfactory, and is generally time-consuming, is _________
a) Brute Force
b) Divide and Conquer
c) Dynamic programming algorithms
d) None of the mentioned
Answer: a
Explanation: In Brute force, all the possibilities are tried.


6. For a recursive algorithm _________
a) a base case is necessary and is solved without recursion.
b) a base case is not necessary
c) does not solve a base case directly
d) none of the mentioned
Answer: a
Explanation: A base case ends the recursion, so it is necessary for the recursion to be finite, and it is solved without further recursion.

7. Optimization of algorithm means _________
a) making that algorithm fast by time and compact by space
b) making that algorithm slow by time and large by space
c) making that algorithm fast by time and large by space
d) making that algorithm slow by time and compact by space
Answer: a
Explanation: An Algorithm should be fast and compact.

8. For an algorithm, the most important characteristic that makes it acceptable is _________
a) Fast
b) Compact
c) Correctness and Precision
d) None of the mentioned
Answer: c
Explanation: An algorithm should be correct otherwise it’s of no use even if it is fast and compact.

9. An algorithm can be represented through _________
a) flow charts
b) pseudo-codes
c) instructions in common language
d) all of the mentioned
Answer: d
Explanation: Algorithm is represented through pseudo codes, normal language sentences or flow charts.

10. There are two algorithms: suppose A takes 1.41 milliseconds while B takes 0.9 milliseconds. Which one of them is better, considering all other things the same?
a) A is better than B
b) B is better than A
c) Both are equally good
d) None of the mentioned
Answer: b
Explanation: B takes less time than A for the same task.


11. Which of the following case does not exist in complexity theory?
a) Best case
b) Worst case
c) Average case
d) Null case
Answer: d
Explanation: Null case does not exist in complexity theory.

12. The complexity of linear search algorithm is _________
a) O(n)
b) O(log n)
c) O(n²)
d) O(n log n)
Answer: a
Explanation: The worst-case complexity of the linear search is O(n).

13. The complexity of Binary search algorithm is _________
a) O(n)
b) O(log n)
c) O(n²)
d) O(n log n)
Answer: b
Explanation: The complexity of binary search is O(log n).
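For illustration (not part of the original question), a minimal iterative binary search in C; each iteration halves the search range, which is where the O(log n) bound comes from:

#include <stdio.h>

/* Returns the index of key in the sorted array arr[0..n-1], or -1. */
int binary_search(const int arr[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) */
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}

int main(void)
{
    int a[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
    printf("%d\n", binary_search(a, 10, 23));   /* prints 5 */
    return 0;
}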

14. The complexity of merge sort algorithm is _________
a) O(n)
b) O(log n)
c) O(n²)
d) O(n log n)
Answer: d
Explanation: The worst-case complexity for merge sort is O(n log n).

15. The complexity of Bubble sort algorithm is _________
a) O(n)
b) O(log n)
c) O(n²)
d) O(n log n)
Answer: c
Explanation: The worst-case complexity for Bubble sort is O(n²) and the best case is O(n).

16. The worst case occurs in the linear search algorithm when _________
a) Item is somewhere in the middle of the array
b) Item is not in the array at all
c) Item is the last element in the array
d) Item is the last element in the array or is not there at all
Answer: d
Explanation: The worst case occurs in the linear search algorithm when the item is the last element in the array or is not present at all.

17. The worst case complexity for insertion sort is _________
a) O(n)
b) O(log n)
c) O(n²)
d) O(n log n)
Answer: c
Explanation: In the worst case, up to n−1 comparisons are required to insert the nth element into its correct position, giving O(n²) comparisons overall.

18. The complexity of Fibonacci series is _________
a) O(2ⁿ)
b) O(log n)
c) O(n²)
d) O(n log n)
Answer: a
Explanation: The recursive Fibonacci computation follows f(n) = f(n-1) + f(n-2), f(0) = 0, f(1) = 1, so T(n) = T(n-1) + T(n-2) + O(1). The number of recursive calls roughly doubles as n grows, giving exponential, O(2ⁿ), running time.
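For illustration (not part of the original question), the naive recursive implementation that this running time refers to, as a minimal C sketch:

#include <stdio.h>

/* Naive recursive Fibonacci: T(n) = T(n-1) + T(n-2) + O(1).
   The call tree roughly doubles in size as n grows, hence O(2^n) time. */
unsigned long long fib(int n)
{
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

int main(void)
{
    printf("%llu\n", fib(10));   /* prints 55 */
    return 0;
}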

19. The worst case occurs in quick sort when _________
a) Pivot is the median of the array
b) Pivot is the smallest element
c) Pivot is the middle element
d) None of the mentioned
Answer: b
Explanation: This happens when the pivot is the smallest (or the largest) element. Then one of the partitions is empty, and we repeat recursively the procedure for N-1 elements.

20. The worst case complexity of quick sort is _________
a) O(n)
b) O(log n)
c) O(n2)
d) O(n log n)
Answer: c
Explanation: The worst-case complexity of quicksort is O(n2).


21. The Big O notation, which is used to measure the time complexity of an algorithm, _________
a) describes limiting behaviour of the function
b) characterises a function based on growth of function
c) upper bound on the growth rate of the function
d) all of the mentioned
Answer: d
Explanation: Big O notation describes limiting behaviour, and also gives upper bound on growth rate of a function.

22. If for an algorithm time complexity is given by O(1) then the complexity of it is ____________
a) constant
b) polynomial
c) exponential
d) none of the mentioned
Answer: a
Explanation: The growth rate of that function will be constant.

23. If for an algorithm time complexity is given by O(log₂ n) then complexity will be ___________
a) constant
b) polynomial
c) exponential
d) none of the mentioned
Answer: d
Explanation: The growth rate of that function will be logarithmic therefore complexity will be logarithmic.

24. If for an algorithm time complexity is given by O(n) then the complexity of it is ___________
a) constant
b) linear
c) exponential
d) none of the mentioned
Answer: b
Explanation: The growth rate of that function will be linear.

25. If for an algorithm time complexity is given by O(n²) then the complexity will be ___________
a) constant
b) quadratic
c) exponential
d) none of the mentioned
Answer: b
Explanation: The growth rate of that function will be quadratic therefore complexity will be quadratic.


26. If for an algorithm time complexity is given by O((3/2)ⁿ) then complexity will be ___________
a) constant
b) quadratic
c) exponential
d) none of the mentioned
Answer: c
Explanation: The growth rate of that function will be exponential therefore complexity will be exponential.

27. The time complexity of binary search is given by ___________
a) constant
b) quadratic
c) exponential
d) none of the mentioned
Answer: d
Explanation: It is O(log₂ n), therefore the complexity will be logarithmic.

28. The time complexity of the linear search is given by ___________
a) O(log₂ n)
b) O(1)
c) exponential
d) none of the mentioned
Answer: d
Explanation: It is O(n), therefore complexity will be linear.

29. Which algorithm is better for sorting between bubble sort and quicksort?
a) bubble sort
b) quick sort
c) both are equally good
d) none of the mentioned
Answer: b
Explanation: The average running time of quicksort is O(n log n), whereas for bubble sort it is O(n²).

30. Time complexity of the binary search algorithm is constant.
a) True
b) False
Answer: b
Explanation: It is O(log₂ n), therefore the complexity is logarithmic, not constant.


31. If f(x) = (x³ − 1) / (3x + 1) then f(x) is?
a) O(x²)
b) O(x)
c) O(x²/3)
d) O(1)
Answer: a
Explanation: 0 < (x³ − 1) / (3x + 1) < x² for x > 1.

32. If f(x) = 3x² + x³ log x, then f(x) is?
a) O(x²)
b) O(x³)
c) O(x)
d) O(1)
Answer: b
Explanation: 0 < 3x² < x³, and it follows that 0 < 3x² + x³ log x < x³. Consequently, f(x) = O(x³).

33. The big-O notation for f(n) = (n log n + n²)(n³ + 2) is?
a) O(n²)
b) O(3n)
c) O(n⁴)
d) O(n⁵)
Answer: d
Explanation: n log n + n² = O(n²) and n³ + 2 = O(n³), so it follows that (n log n + n²)(n³ + 2) = O(n⁵).

34. The big-theta notation for the function f(n) = 2n³ + n − 1 is?
a) n
b) n²
c) n³
d) n⁴
Answer: c
Explanation: n³ ≤ 2n³ + n − 1 ≤ 3n³ for n ≥ 1, so f(n) = Θ(n³).

35. The big-theta notation for f(n) = n log(n² + 1) + n² log n is?
a) n² log n
b) n²
c) log n
d) n log(n²)
Answer: a
Explanation: n log(n² + 1) = O(n log n), which is dominated by n² log n, and f(n) ≥ n² log n, so f(n) = Θ(n² log n).

35. The big-omega notation for f(x, y) = x⁵y³ + x⁴y⁴ + x³y⁵ is?
a) x⁵y³
b) x⁵y⁵
c) x³y³
d) x⁴y⁴
Answer: c
Explanation: Each of x⁵y³, x⁴y⁴ and x³y⁵ is greater than or equal to x³y³ (for x, y ≥ 1), so f(x, y) = Ω(x³y³).


36. If f1(x) is O(g(x)) and f2(x) is o(g(x)), then f1(x) + f2(x) is?
a) O(g(x))
b) o(g(x))
c) O(g(x)) + o(g(x))
d) None of the mentioned
Answer: a
Explanation: f2(x) = o(g(x)) implies f2(x) = O(g(x)), so the upper bound for f1(x) + f2(x) is O(g(x)).

37. The little-o notation for f(x) = x log x is?
a) x
b) x³
c) x²
d) x log x
Answer: c
Explanation: The limit of (x log x) / x² as x tends to infinity is 0, so x log x = o(x²).

38. The big-O notation for f(n) = 2 log(n!) + (n² + 1) log n is?
a) n
b) n²
c) n log n
d) n² log n
Answer: d
Explanation: log(n!) ≤ n log n ≤ n² log n, so it follows that 2 log(n!) + (n² + 1) log n = O(n² log n).

39. The big-O notation for f(x) = 5 log x is?
a) 1
b) x
c) x²
d) x³
Answer: b
Explanation: log x < x for x ≥ 1, and it follows that 5 log x = O(x).

40. The big-Omega notation for f(x) = 2x⁴ + x² − 4 is?
a) x²
b) x³
c) x
d) x⁴
Answer: d
Explanation: 2x⁴ + x² − 4 ≥ x⁴ for x ≥ 2, so f(x) = Ω(x⁴).


41. The worst-case efficiency of solving a problem in polynomial time is?
a) O(p(n))
b) O(p( n log n))
c) O(p(n2))
d) O(p(m log n))
Answer: a
Explanation: The worst-case efficiency of solving a problem in polynomial time is O(p(n)), where p(n) is a polynomial in the input size n.

42. Problems that can be solved in polynomial time are known as?
a) intractable
b) tractable
c) decision
d) complete
Answer: b
Explanation: Problems that can be solved in polynomial time are known as tractable. Problems that cannot be solved in polynomial time are intractable.

43. The sum and composition of two polynomials are always polynomials.
a) true
b) false
Answer: a
Explanation: One of the properties of polynomial functions states that the sum and composition of two polynomials are always polynomials.

44. _________ is the class of decision problems that can be solved by non-deterministic polynomial algorithms.
a) NP
b) P
c) Hard
d) Complete
Answer: a
Explanation: NP problems are called as non-deterministic polynomial problems. They are a class of decision problems that can be solved using NP algorithms.

45. Problems that cannot be solved by any algorithm are called?
a) tractable problems
b) intractable problems
c) undecidable problems
d) decidable problems
Answer: c
Explanation: Problems that cannot be solved by any algorithm are called undecidable problems. Problems that can be solved in polynomial time are called tractable problems.


46. The Euler’s circuit problem can be solved in?
a) O(N)
b) O( N log N)
c) O(log N)
d) O(N2)
Answer: d
Explanation: Mathematically, the run time of Euler’s circuit problem is determined to be O(N2).

47. To which class does the Euler’s circuit problem belong?
a) P class
b) NP class
c) Partition class
d) Complete class
Answer: a
Explanation: Euler’s circuit problem can be solved in polynomial time. It can be solved in O(N2).

48. Halting problem is an example for?
a) decidable problem
b) undecidable problem
c) complete problem
d) tractable problem
Answer: b
Explanation: Halting problem by Alan Turing cannot be solved by any algorithm. Hence, it is undecidable.

49. How many stages of procedure does a non-deterministic algorithm consist of?
a) 1
b) 2
c) 3
d) 4
Answer: b
Explanation: A non-deterministic algorithm is a two-stage procedure- guessing stage and verification stage.

50. A non-deterministic algorithm is said to be non-deterministic polynomial if the time-efficiency of its verification stage is polynomial.
a) true
b) false
Answer: a
Explanation: One of the properties of NP class problems states that A non-deterministic algorithm is said to be non-deterministic polynomial if the time-efficiency of its verification stage is polynomial.


51. How many conditions have to be met if an NP- complete problem is polynomially reducible?
a) 1
b) 2
c) 3
d) 4
Answer: b
Explanation: The two conditions are: a function t maps all yes instances of decision problem D1 to yes instances of D2 (and no instances to no instances), and t can be computed in polynomial time.

52. To which of the following class does a CNF-satisfiability problem belong?
a) NP class
b) P class
c) NP complete
d) NP hard
Answer: c
Explanation: The CNF satisfiability problem belongs to NP complete class. It deals with Boolean expressions.

53. How many steps are required to prove that a decision problem is NP complete?
a) 1
b) 2
c) 3
d) 4
Answer: b
Explanation: First, the problem should be NP. Next, it should be proved that every problem in NP is reducible to the problem in question in polynomial time.

54. Which of the following problems is not NP complete?
a) Hamiltonian circuit
b) Bin packing
c) Partition problem
d) Halting problem
Answer: d
Explanation: Hamiltonian circuit, bin packing, partition problems are NP complete problems. Halting problem is an undecidable problem.

55. The choice of polynomial class has led to the development of an extensive theory called ________
a) computational complexity
b) time complexity
c) problem complexity
d) decision complexity
Answer: a
Explanation: An extensive theory called computational complexity seeks to classify problems according to their inherent difficulty.

56. Master’s theorem is used for?
a) solving recurrences
b) solving iterative relations
c) analysing loops
d) calculating the time complexity of any code
Answer: a
Explanation: Master’s theorem is a direct method for solving recurrences. We can solve any recurrence that falls under any one of the three cases of master’s theorem.

57. How many cases are there under Master’s theorem?
a) 2
b) 3
c) 4
d) 5
Answer: b
Explanation: There are primarily 3 cases under master’s theorem. We can solve any recurrence that falls under any one of these three cases.

58. What is the result of the recurrences which fall under the first case of Master's theorem (let the recurrence be given by T(n) = aT(n/b) + f(n) and f(n) = n^c)?
a) T(n) = O(n^(log_b a))
b) T(n) = O(n^c log n)
c) T(n) = O(f(n))
d) T(n) = O(n²)
Answer: a
Explanation: In the first case of master's theorem the necessary condition is that c < log_b a. If this condition is true then T(n) = O(n^(log_b a)).

59. What is the result of the recurrences which fall under the second case of Master's theorem (let the recurrence be given by T(n) = aT(n/b) + f(n) and f(n) = n^c)?
a) T(n) = O(n^(log_b a))
b) T(n) = O(n^c log n)
c) T(n) = O(f(n))
d) T(n) = O(n²)
Answer: b
Explanation: In the second case of master's theorem the necessary condition is that c = log_b a. If this condition is true then T(n) = O(n^c log n).


60. What is the result of the recurrences which fall under the third case of Master's theorem (let the recurrence be given by T(n) = aT(n/b) + f(n) and f(n) = n^c)?
a) T(n) = O(n^(log_b a))
b) T(n) = O(n^c log n)
c) T(n) = O(f(n))
d) T(n) = O(n²)
Answer: c
Explanation: In the third case of master's theorem the necessary condition is that c > log_b a. If this condition is true then T(n) = O(f(n)).
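For quick reference, one sample recurrence per case (illustrative examples added here, using the same convention T(n) = aT(n/b) + n^c):
Case 1 (c < log_b a): T(n) = 8T(n/2) + n², where log₂ 8 = 3 > 2, gives T(n) = O(n³).
Case 2 (c = log_b a): T(n) = 2T(n/2) + n, where log₂ 2 = 1 = c, gives T(n) = O(n log n).
Case 3 (c > log_b a): T(n) = 2T(n/2) + n², where log₂ 2 = 1 < 2, gives T(n) = O(n²).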

61. We can solve any recurrence by using Master’s theorem.
a) true
b) false
Answer: b
Explanation: No we cannot solve all the recurrences by only using master’s theorem. We can solve only those which fall under the three cases prescribed in the theorem.

62. Under what case of Master’s theorem will the recurrence relation of merge sort fall?
a) 1
b) 2
c) 3
d) It cannot be solved using master’s theorem
Answer: b
Explanation: The recurrence relation of merge sort is T(n) = 2T(n/2) + O(n). Here a = 2, b = 2, c = 1 and log_b a = 1, so c = log_b a and the recurrence falls under case 2 of master's theorem.

63. Under what case of Master’s theorem will the recurrence relation of stooge sort fall?
a) 1
b) 2
c) 3
d) It cannot be solved using master’s theorem
Answer: a
Explanation: The recurrence relation of stooge sort is T(n) = 3T(2n/3) + O(1). Here c = 0 and log_b a = log_{3/2} 3 ≈ 2.71, so c < log_b a and, by the first case of master's theorem, T(n) = O(n^2.71).

64. Which case of master’s theorem can be extended further?
a) 1
b) 2
c) 3
d) No case can be extended
Answer: b
Explanation: The second case of master's theorem can be extended to the case where f(n) = n^c (log n)^k, and the result becomes T(n) = O(n^c (log n)^(k+1)).

65. What is the result of the recurrences which fall under the extended second case of Master's theorem (let the recurrence be given by T(n) = aT(n/b) + f(n) and f(n) = n^c (log n)^k)?
a) T(n) = O(n^(log_b a))
b) T(n) = O(n^c log n)
c) T(n) = O(n^c (log n)^(k+1))
d) T(n) = O(n²)
Answer: c
Explanation: In the extended second case of master's theorem the necessary condition is that c = log_b a. If this condition is true then T(n) = O(n^c (log n)^(k+1)).


66. Under what case of Master’s theorem will the recurrence relation of binary search fall?
a) 1
b) 2
c) 3
d) It cannot be solved using master’s theorem
Answer: b
Explanation: The recurrence relation of binary search is T(n) = T(n/2) + O(1). Here a = 1, b = 2, c = 0 and log_b a = 0, so c = log_b a and the recurrence falls under case 2 of master's theorem.

67. Consider the recurrence relation a_1 = 4, a_n = 5n + a_(n-1). The value of a_64 is _________
a) 10399
b) 23760
c) 75100
d) 53700
Answer: a
Explanation: a_n = 5n + a_(n-1)
= 5n + 5(n-1) + a_(n-2)
= 5n + 5(n-1) + 5(n-2) + … + a_1
= 5n + 5(n-1) + 5(n-2) + … + 5·2 + 4      [since a_1 = 4]
= 5(n + (n-1) + (n-2) + … + 2 + 1) − 5 + 4
= 5 · n(n+1)/2 − 1
So a_n = 5 · n(n+1)/2 − 1.
Now, for n = 64 the answer is a_64 = 5·64·65/2 − 1 = 10399.
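A small C check of this closed form (added for illustration):

#include <stdio.h>

/* Iteratively evaluates a_n = 5n + a_(n-1) with a_1 = 4 and compares it
   against the closed form a_n = 5n(n+1)/2 - 1 derived above. */
int main(void)
{
    long long a = 4;                              /* a_1 */
    for (int n = 2; n <= 64; n++)
        a += 5LL * n;                             /* a_n = 5n + a_(n-1) */
    printf("%lld\n", a);                          /* prints 10399 */
    printf("%lld\n", 5LL * 64 * 65 / 2 - 1);      /* closed form, also 10399 */
    return 0;
}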

68. Determine the solution of the recurrence relation F_n = 20F_(n-1) − 25F_(n-2) where F_0 = 4 and F_1 = 14.
a) a_n = 14·5^(n-1)
b) a_n = (7/2)·2ⁿ − (1/2)·6ⁿ
c) a_n = (7/2)·2ⁿ − (3/4)·6^(n+1)
d) a_n = 3·2ⁿ − (1/2)·3ⁿ
Answer: b
Explanation: The characteristic equation of the recurrence relation is x² − 20x + 36 = 0, so (x − 2)(x − 18) = 0. Hence there are two real roots, x₁ = 2 and x₂ = 18. Therefore the solution to the recurrence relation will have the form a_n = a·2ⁿ + b·18ⁿ. To find a and b, set n = 0 and n = 1 to get a system of two equations with two unknowns: 4 = a·2⁰ + b·18⁰ = a + b and 3 = a·2¹ + b·6¹ = 2a + 6b. Solving this system gives b = −1/2 and a = 7/2. So the solution to the recurrence relation is
a_n = (7/2)·2ⁿ − (1/2)·6ⁿ.

69. What is the recurrence relation for 1, 7, 31, 127, 511?
a) b_(n+1) = 5b_(n-1) + 3
b) b_n = 4b_n + 7!
c) b_n = 4b_(n-1) + 3
d) b_n = b_(n-1) + 1
Answer: c
Explanation: Look at how each term is obtained from the previous one: 4·1 + 3 = 7, 4·7 + 3 = 31, 4·31 + 3 = 127, and so on. Each term is four times the previous term plus 3, so b_n = 4b_(n-1) + 3, with the initial condition b_0 = 1.

70. If Sn=4Sn-1+12n, where S0=6 and S1=7, find the solution for the recurrence relation.
a) an=7(2n)−29/6n6n
b) an=6(6n)+6/7n6n
c) an=6(3n+1)−5n
d) an=nn−2/6n6n
Answer: b
Explanation: The characteristic equation of the recurrence relation is x² − 4x − 12 = 0, so (x − 6)(x + 2) = 0. Only the characteristic root 6 is used. Therefore the solution to the recurrence relation will have the form a_n = a·6ⁿ + b·n·6ⁿ. To find a and b, set n = 0 and n = 1 to get a system of two equations with two unknowns: 6 = a·6⁰ + b·0·6⁰ = a and 7 = a·6¹ + b·1·6¹ = 2a + 6b. Solving this system gives a = 6 and b = 6/7. So the solution to the recurrence relation is a_n = 6(6ⁿ) − (6/7)·n·6ⁿ.


71. Find the value of a_4 for the recurrence relation a_n = 2a_(n-1) + 3, with a_0 = 6.
a) 320
b) 221
c) 141
d) 65
Answer: c
Explanation: With a_0 = 6: a_1 = 2a_0 + 3 = 15, a_2 = 2a_1 + 3 = 33, a_3 = 2a_2 + 3 = 69, and a_4 = 2a_3 + 3 = 141.

72. The solution to the recurrence relation a_n = a_(n-1) + 2, with initial term a_0 = 2, is _________
a) 4n+7
b) 2(1+n)
c) 3n2
d) 5*(n+1)/2
Answer: b
Explanation: When n = 1, a_1 = a_0 + 2. By substitution we get a_2 = a_1 + 2 = (a_0 + 2) + 2, and so on. So the solution to the recurrence relation, subject to the initial condition, is a_n = 2 + 2n = 2(1 + n).

73. Determine the solution for the recurrence relation b_n = 8b_(n-1) − 12b_(n-2) with b_0 = 3 and b_1 = 4.
a) (7/2)·2ⁿ − (1/2)·6ⁿ
b) (2/3)·7ⁿ − 5·4ⁿ
c) 4!·6ⁿ
d) 2/8ⁿ
Answer: a
Explanation: Rewrite the recurrence relation as b_n − 8b_(n-1) + 12b_(n-2) = 0. From the characteristic equation x² − 8x + 12 = 0 we have (x − 2)(x − 6) = 0, so x = 2 and x = 6 are the characteristic roots. Therefore the solution will have the form b_n = b·2ⁿ + c·6ⁿ. To find b and c, set n = 0 and n = 1 to get a system of two equations with two unknowns: 3 = b·2⁰ + c·6⁰ = b + c, and 4 = b·2¹ + c·6¹ = 2b + 6c. Solving this system gives c = −1/2 and b = 7/2. So the solution to the recurrence relation is b_n = (7/2)·2ⁿ − (1/2)·6ⁿ.

74. What is the solution to the recurrence relation a_n = 5a_(n-1) + 6a_(n-2)?
a) 2n²
b) 6ⁿ
c) (3/2)ⁿ
d) n!·3
Answer: b
Explanation: Substitute each option into the recurrence relation: 5·6^(n-1) + 6·6^(n-2) = 6^(n-2)·(30 + 6) = 6ⁿ, so 6ⁿ is the required solution to a_n = 5a_(n-1) + 6a_(n-2).

75. Determine the value of a_2 for the recurrence relation a_n = 17a_(n-1) + 30n with a_0 = 3.
a) 4387
b) 5484
c) 238
d) 1437
Answer: d
Explanation: When n = 1, a_1 = 17a_0 + 30 = 81. Now a_2 = 17a_1 + 30·2 = 17·81 + 60 = 1437, where a_0 = 3.

76. Determine the solution for the recurrence relation a_n = 6a_(n-1) − 8a_(n-2) provided the initial conditions a_0 = 3 and a_1 = 5.
a) a_n = 4·2ⁿ − 3ⁿ
b) a_n = 3·7ⁿ − 5·3ⁿ
c) a_n = 5·7ⁿ
d) a_n = 3!·5ⁿ
Answer: b
Explanation: The characteristic polynomial is x² − 6x + 8. By solving the characteristic equation x² − 6x + 8 = 0 we get x = 2 and x = 4; these are the characteristic roots. Therefore the solution to the recurrence relation has the form a_n = a·2ⁿ + b·4ⁿ, for some constants a and b. Using the initial conditions a_0 and a_1 we get a = 7/2 and b = −1/2. Therefore the solution to the recurrence relation is a_n = (7/2)·2ⁿ − (1/2)·4ⁿ.

Module 2

1. The approach of dynamic programming is similar to
a. Parsing
b. Hash table
c. Divide and Conquer algorithm
d. Greedy algorithm
Answer: C
Divide and Conquer algorithm

2. The algorithms like merge sort, quick sort and binary search are based on
a. Greedy algorithm
b. Divide and Conquer algorithm
c. Hash table
d. Parsing
Answer: B
Divide and Conquer algorithm

3. The step in the Divide and Conquer process that takes a recursive approach is said to be
a. Conquer/Solve
b. Merge/Combine
c. Divide/Break
d. Both B and C
Answer: A
Conquer/Solve

4. The sub-problems in dynamic programming are solved
a. Dependently
b. Independently
c. Parallel
d. Concurrent
Answer: A
Dependently

5. In the Divide and Conquer process, breaking the problem into smaller sub-problems is the responsibility of
a. Divide/Break
b. Sorting/Divide
c. Conquer/Solve
d. Merge/Combine
Answer: A
Divide/Break


6.Which of the following algorithms is NOT a divide & conquer algorithm by nature?
a.Euclidean algorithm to compute the greatest common divisor
b.Heap Sort
C.Cooley-Tukey fast Fourier transform
d.Quick Sort
Answer b
Heap Sort

7. Consider the following C program.
#include <stdio.h>
int main()
{
int x, y, m, n;
scanf("%d %d", &x, &y);
/* x > 0 and y > 0 */
m = x; n = y;
while (m != n)
{
if(m>n)
m = m – n;
else
n = n – m;
}
printf("%d", n);
}

What does the program compute? (GATE CS 2004)
a.x + y using repeated subtraction
b.x mod y using repeated subtraction
c.the greatest common divisor of x and y
d.the least common multiple of x and y
Answer C
the greatest common divisor of x and y

8. Consider the polynomial p(x) = a₀ + a₁x + a₂x² + a₃x³, where aᵢ ≠ 0 for all i. The minimum number of multiplications needed to evaluate p on an input x is:
a.3
b.4
c.6
d.9
Answer A
3
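The 3-multiplication bound is achieved by Horner's rule. A minimal C sketch (the coefficient values are an assumed example, not from the question):

#include <stdio.h>

/* Horner's rule: p(x) = a0 + x*(a1 + x*(a2 + x*a3)) evaluates a cubic with
   exactly 3 multiplications, one per coefficient above a0. */
double horner(const double a[], int degree, double x)
{
    double result = a[degree];
    for (int i = degree - 1; i >= 0; i--)
        result = result * x + a[i];   /* one multiplication per step */
    return result;
}

int main(void)
{
    double a[] = {1.0, 2.0, 3.0, 4.0};     /* p(x) = 1 + 2x + 3x^2 + 4x^3 */
    printf("%.1f\n", horner(a, 3, 2.0));   /* prints 49.0 */
    return 0;
}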

9. The Maximum Subarray Sum problem is to find the subarray with the maximum sum. For example, given an array {12, -13, -5, 25, -20, 30, 10}, the maximum subarray sum is 45. The naive solution for this problem is to calculate the sum of all subarrays starting with every element and return the maximum of all. We can also solve this using Divide and Conquer. What will be the worst case time complexity using Divide and Conquer?
a.O(n)
b.O(nLogn)
c.O(Logn)
d.O(n^2)
Answer B
O(nLogn)
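A minimal C sketch of the divide and conquer approach (added for illustration, using the array from the question): the answer is the best of the left half, the right half, and the best subarray crossing the midpoint.

#include <stdio.h>
#include <limits.h>

static int max2(int a, int b) { return a > b ? a : b; }

/* Maximum sum of a subarray that crosses the midpoint mid. */
static int max_crossing_sum(const int arr[], int lo, int mid, int hi)
{
    int sum = 0, left_best = INT_MIN, right_best = INT_MIN;
    for (int i = mid; i >= lo; i--) {        /* best suffix ending at mid */
        sum += arr[i];
        left_best = max2(left_best, sum);
    }
    sum = 0;
    for (int i = mid + 1; i <= hi; i++) {    /* best prefix starting at mid+1 */
        sum += arr[i];
        right_best = max2(right_best, sum);
    }
    return left_best + right_best;
}

/* Divide and conquer: T(n) = 2T(n/2) + O(n), i.e. O(n log n). */
static int max_subarray_sum(const int arr[], int lo, int hi)
{
    if (lo == hi)
        return arr[lo];
    int mid = lo + (hi - lo) / 2;
    int left  = max_subarray_sum(arr, lo, mid);
    int right = max_subarray_sum(arr, mid + 1, hi);
    int cross = max_crossing_sum(arr, lo, mid, hi);
    return max2(max2(left, right), cross);
}

int main(void)
{
    int a[] = {12, -13, -5, 25, -20, 30, 10};
    printf("%d\n", max_subarray_sum(a, 0, 6));   /* prints 45 */
    return 0;
}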

10. Consider a situation where you don’t have a function to calculate power (the pow() function in C) and you need to calculate x^n, where x can be any number and n is a positive integer. What can be the best possible time complexity of your power function?
a.O(n)
b.O(nLogn)
c.O(LogLogn)
d.O(Logn)
Answer D
O(Logn)
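A minimal C sketch of the O(log n) approach, binary exponentiation by repeated squaring (added for illustration):

#include <stdio.h>

/* x^n by repeated squaring: n is halved each step, so only O(log n)
   multiplications are needed instead of n-1. */
double fast_pow(double x, unsigned int n)
{
    double result = 1.0;
    while (n > 0) {
        if (n & 1)            /* current bit of n is set */
            result *= x;
        x *= x;               /* square the base */
        n >>= 1;              /* move to the next bit */
    }
    return result;
}

int main(void)
{
    printf("%.0f\n", fast_pow(2.0, 10));   /* prints 1024 */
    return 0;
}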


11. Consider the problem of searching an element x in an array ‘arr[]’ of size n. The problem can be solved in O(Logn) time if:
1) Array is sorted
2) Array is sorted and rotated by k. k is given to you and k <= n
3) Array is sorted and rotated by k. k is NOT given to you and k <= n
4) Array is not sorted
a.1 Only
b.1 & 2 only
c.1, 2 and 3 only
d.1, 2, 3 and 4
Answer C
1, 2 and 3 only

12. The secant method is used to find the root of an equation f(x) = 0. It is started from two distinct estimates xa and xb for the root. It is an iterative procedure involving linear interpolation to a root. The iteration stops if f(xb) is very small and then xb is the solution. The procedure is given below. Observe that there is an expression which is missing and is marked by ?. Which is the suitable expression to be put in place of ? so that it follows all steps of the secant method?

Secant
Initialize: xa, xb, ε, N     // ε = convergence indicator
fb = f(xb); i = 0
while (i < N and |fb| > ε) do
    i = i + 1                // update counter
    xt = ?                   // missing expression for intermediate value
    xa = xb                  // reset xa
    xb = xt                  // reset xb
    fb = f(xb)               // function value at new xb
end while
if |fb| > ε then             // loop is terminated with i = N
    write “Non-convergence”
else
    write “return xb”
end if
a.xb – (fb– f(xa)) fb/ (xb – xa)
b.xa– (fa– f(xa)) fa/ (xb – xa)
c.xb – (fb – xa) fb/ (xb – fb(xa)
d.xa – (xb – xa) fa/ (fb – f(xa))
Answer D
xa – (xb – xa) fa/ (fb – f(xa))

13.Consider the problem of computing min-max in an unsorted array where min and max are minimum and maximum elements of array. Algorithm A1 can compute min-max in a1 comparisons without divide and conquer. Algorithm A2 can compute min-max in a2 comparisons by scanning the array linearly. What could be the relation between a1 and a2 considering the worst case scenarios?
a.a1 < a2
b.a1 > a2
c.a1 = a2
d.Depends on the input
Answer B
a1 > a2

Module 3

1. Which of the following algorithms is the best approach for solving Huffman codes?
a) exhaustive search
b) greedy algorithm
c) brute force algorithm
d) divide and conquer algorithm
Answer: b
Explanation: Greedy algorithm is the best approach for solving the Huffman codes problem since it greedily searches for an optimal solution.

2. How many printable characters does the ASCII character set consist of?
a) 120
b) 128
c) 100
d) 98
Answer: c
Explanation: Out of 128 characters in an ASCII set, roughly, only 100 characters are printable while the rest are non-printable.

3. Which bit is reserved as a parity bit in an ASCII set?
a) first
b) seventh
c) eighth
d) tenth
Answer: c
Explanation: In an ASCII character set, seven bits are reserved for character representation while the eighth bit is a parity bit.

4. How many bits are needed for standard encoding if the size of the character set is X?
a) log X
b) X+1
c) 2X
d) X2
Answer: a
Explanation: If the size of the character set is X, then [log X] bits are needed for representation in a standard encoding.

5. The code length does not depend on the frequency of occurrence of characters.
a) true
b) false
Answer: b
Explanation: The code length depends on the frequency of occurrence of characters. The more frequent the character occurs, the less is the length of the code.


6. In Huffman coding, data is always stored at the _________ of the tree.
a) roots
b) leaves
c) left sub trees
d) right sub trees
Answer: b
Explanation: In Huffman encoding, data is always stored at the leaves of the tree in order to compute the codewords effectively.

7. From the following given tree, what is the code word for the character ‘a’?
[Figure: Huffman coding tree for Q7]
a) 011
b) 010
c) 100
d) 101
Answer: a
Explanation: By recording the path of the node from root to leaf, the code word for character ‘a’ is found to be 011.

8. From the following given tree, what is the computed codeword for ‘c’?
[Figure: Huffman coding tree for Q8]
a) 111
b) 101
c) 110
d) 011
Answer: c
Explanation: By recording the path of the node from root to leaf, assigning left branch as 0 and right branch as 1, the codeword for c is 110.

9. What will be the cost of the code if character cᵢ is at depth dᵢ and occurs at frequency fᵢ?
a) cᵢfᵢ
b) ∫cᵢfᵢ
c) ∑fᵢdᵢ
d) fᵢdᵢ
Answer: c
Explanation: If character cᵢ is at depth dᵢ and occurs at frequency fᵢ, the cost of the codeword obtained is ∑fᵢdᵢ.

10. An optimal code will always be present in a full tree.
a) true
b) false
Answer: a
Explanation: An optimal tree will always have the property that all nodes are either leaves or have two children. Otherwise, nodes with one child could move up a level.


11. The type of encoding where no character code is the prefix of another character code is called?
a) optimal encoding
b) prefix encoding
c) frequency encoding
d) trie encoding
Answer: b
Explanation: Even if the character codes are of different lengths, the encoding where no character code is the prefix of another character code is called prefix encoding.

12. What is the running time of the Huffman encoding algorithm?
a) O(C)
b) O(log C)
c) O(C log C)
d) O( N log C)
Answer: c
Explanation: If we maintain the trees in a priority queue, ordered by weight, then the running time is given by O(C log C).

13. What is the running time of the Huffman algorithm, if its implementation of the priority queue is done using linked lists?
a) O(C)
b) O(log C)
c) O(C log C)
d) O(C2)
Answer: d
Explanation: If the implementation of the priority queue is done using linked lists, the running time of Huffman algorithm is O(C2).
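To make the greedy step concrete, here is a minimal C sketch (added for illustration; the frequencies are an assumed example) that repeatedly merges the two smallest frequencies and accumulates the total cost ∑fᵢdᵢ. The two minima are found by a linear scan, which corresponds to the O(C²) variant above; finding them with a heap-based priority queue gives the O(C log C) version.

#include <stdio.h>

int main(void)
{
    int freq[] = {5, 9, 12, 13, 16, 45};   /* assumed character frequencies */
    int n = 6, cost = 0;

    while (n > 1) {
        /* find the indices of the two smallest frequencies by a linear scan */
        int i1 = 0, i2 = 1;
        if (freq[i2] < freq[i1]) { i1 = 1; i2 = 0; }
        for (int i = 2; i < n; i++) {
            if (freq[i] < freq[i1])      { i2 = i1; i1 = i; }
            else if (freq[i] < freq[i2]) { i2 = i; }
        }
        int merged = freq[i1] + freq[i2];
        cost += merged;          /* each merge adds its weight once per level */
        freq[i1] = merged;       /* keep the merged node ...                  */
        freq[i2] = freq[n - 1];  /* ... and drop the second smallest          */
        n--;
    }
    printf("%d\n", cost);        /* prints 224 for this data */
    return 0;
}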

14. Kruskal’s algorithm is used to ______
a) find minimum spanning tree
b) find single-source shortest path
c) find all pair shortest path algorithm
d) traverse the graph
Answer: a
Explanation: Kruskal’s algorithm is used to find the minimum spanning tree of the connected graph. It constructs the MST by finding the edge having the least possible weight that connects two trees in the forest.

15. Kruskal’s algorithm is a ______
a) divide and conquer algorithm
b) dynamic programming algorithm
c) greedy algorithm
d) approximation algorithm
Answer: c
Explanation: Kruskal’s algorithm uses a greedy algorithm approach to find the MST of the connected weighted graph. In the greedy method, we attempt to find an optimal solution in stages.


16. Consider the given graph.
[Figure: weighted graph for Q16]
What is the weight of the minimum spanning tree using Kruskal’s algorithm?
a) 24
b) 23
c) 15
d) 19
Answer: d
Explanation: Kruskal’s algorithm constructs the minimum spanning tree by adding edges to the spanning tree one by one, in increasing order of weight. The MST for the given graph is shown below.
[Figure: minimum spanning tree for Q16]
So, the weight of the MST is 19.

17. What is the time complexity of Kruskal’s algorithm?
a) O(log V)
b) O(E log V)
c) O(E2)
d) O(V log E)
Answer: b
Explanation: Kruskal’s algorithm involves sorting of the edges, which takes O(E log E) time, where E is the number of edges in the graph and V is the number of vertices. After sorting, all edges are iterated over and the union-find algorithm is applied; union-find requires O(log V) time per operation. So, overall, Kruskal’s algorithm requires O(E log V) time.
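A minimal C sketch of this structure (added for illustration; the edge list is an assumed example): sort the edges by weight, then add an edge whenever union-find shows its endpoints lie in different trees.

#include <stdio.h>
#include <stdlib.h>

typedef struct { int u, v, w; } Edge;

static int parent[100];

static int find_root(int x)               /* find with path halving */
{
    while (parent[x] != x) {
        parent[x] = parent[parent[x]];
        x = parent[x];
    }
    return x;
}

static int cmp_weight(const void *a, const void *b)
{
    return ((const Edge *)a)->w - ((const Edge *)b)->w;
}

int main(void)
{
    Edge e[] = {{0,1,4},{0,2,3},{1,2,1},{1,3,2},{2,3,4},{3,4,2},{2,4,6}};
    int n_vertices = 5, n_edges = 7, mst_weight = 0;

    for (int i = 0; i < n_vertices; i++)
        parent[i] = i;

    qsort(e, n_edges, sizeof(Edge), cmp_weight);   /* O(E log E) sorting step */

    for (int i = 0; i < n_edges; i++) {
        int ru = find_root(e[i].u), rv = find_root(e[i].v);
        if (ru != rv) {              /* different trees: edge is safe to add */
            parent[ru] = rv;         /* union the two trees */
            mst_weight += e[i].w;
        }                            /* same tree: the edge would form a cycle */
    }
    printf("MST weight: %d\n", mst_weight);   /* prints 8 for this edge list */
    return 0;
}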

18. Consider the following graph. Using Kruskal’s algorithm, which edge will be selected first?
[Figure: weighted graph for Q18]
a) GF
b) DE
c) BE
d) BG
Answer: c
Explanation: In Kruskal’s algorithm the edges are selected and added to the spanning tree in increasing order of their weights. Therefore, the first edge selected will be the minimal one. So, the correct option is BE.

19. Which of the following edges form the minimum spanning tree on the graph using Kruskal’s algorithm?
[Figure: weighted graph for Q19]
a) (B-E)(G-E)(E-F)(D-F)
b) (B-E)(G-E)(E-F)(B-G)(D-F)
c) (B-E)(G-E)(E-F)(D-E)
d) (B-E)(G-E)(E-F)(D-F)(D-G)

Answer: a
Explanation: Using Kruskal’s algorithm on the given graph, the generated minimum spanning tree is shown below.
[Figure: minimum spanning tree for Q19]
So, the edges in the MST are (B-E)(G-E)(E-F)(D-F).

20. Which of the following is true?
a) Prim’s algorithm can also be used for disconnected graphs
b) Kruskal’s algorithm can also run on the disconnected graphs
c) Prim’s algorithm is simpler than Kruskal’s algorithm
d) In Kruskal’s algorithm, edges are added to the MST in decreasing order of their weights
Answer: b
Explanation: Prim’s algorithm iterates from one node to another, so it cannot be applied to a disconnected graph. Kruskal’s algorithm can be applied to disconnected graphs to construct the minimum cost forest. Kruskal’s algorithm is comparatively easier and simpler than Prim’s algorithm.


21. Which of the following is false about Kruskal’s algorithm?
a) It is a greedy algorithm
b) It constructs MST by selecting edges in increasing order of their weights
c) It can accept cycles in the MST
d) It uses a union-find data structure
Answer: c
Explanation: Kruskal’s algorithm is a greedy algorithm to construct the MST of the given graph. It constructs the MST by selecting edges in increasing order of their weights and rejects an edge if it would form a cycle. So, a cycle is never formed in the MST constructed using Kruskal’s algorithm.

22. Kruskal’s algorithm is best suited for dense graphs than the prim’s algorithm.
a) True
b) False
Answer: b
Explanation: Prim’s algorithm outperforms Kruskal’s algorithm on dense graphs. It is significantly faster than Kruskal’s algorithm when the graph has many more edges than vertices.

23. Consider the following statements.
S1. Kruskal’s algorithm might produce a non-minimal spanning tree.
S2. Kruskal’s algorithm can be efficiently implemented using the disjoint-set data structure.
a) S1 is true but S2 is false
b) Both S1 and S2 are false
c) Both S1 and S2 are true
d) S2 is true but S1 is false
Answer: d
Explanation: In Kruskal’s algorithm, the disjoint-set data structure efficiently identifies the components containing a vertex and adds the new edges. And Kruskal’s algorithm always finds the MST for the connected graph.

24. Fractional knapsack problem is also known as __________
a) 0/1 knapsack problem
b) Continuous knapsack problem
c) Divisible knapsack problem
d) Non continuous knapsack problem
Answer: b
Explanation: Fractional knapsack problem is also called the continuous knapsack problem. It can be solved in polynomial time using a greedy approach.

25. Fractional knapsack problem is solved most efficiently by which of the following algorithm?
a) Divide and conquer
b) Dynamic programming
c) Greedy algorithm
d) Backtracking
Answer: c
Explanation: Greedy algorithm is used to solve this problem. We first sort items according to their value/weight ratio and then add item with highest ratio until we cannot add the next item as a whole. At the end, we add the next item as much as we can.
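A minimal C sketch of this greedy strategy (added for illustration; it uses, as sample data, the same {value, weight} pairs that appear in Q30 below):

#include <stdio.h>
#include <stdlib.h>

typedef struct { double value, weight; } Item;

/* sort items by value/weight ratio in descending order */
static int by_ratio_desc(const void *a, const void *b)
{
    double ra = ((const Item *)a)->value / ((const Item *)a)->weight;
    double rb = ((const Item *)b)->value / ((const Item *)b)->weight;
    return (ra < rb) - (ra > rb);
}

int main(void)
{
    Item items[] = {{40, 20}, {30, 10}, {20, 5}};   /* {value, weight} pairs */
    int n = 3;
    double capacity = 20, total = 0;

    qsort(items, n, sizeof(Item), by_ratio_desc);   /* O(n log n) step */

    for (int i = 0; i < n && capacity > 0; i++) {
        if (items[i].weight <= capacity) {          /* take the whole item */
            total += items[i].value;
            capacity -= items[i].weight;
        } else {                                    /* take only a fraction */
            total += items[i].value * (capacity / items[i].weight);
            capacity = 0;
        }
    }
    printf("%.1f\n", total);   /* prints 60.0 for this data, matching Q30 */
    return 0;
}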


26. What is the objective of the knapsack problem?
a) To get maximum total value in the knapsack
b) To get minimum total value in the knapsack
c) To get maximum weight in the knapsack
d) To get minimum weight in the knapsack
Answer: a
Explanation: The objective is to fill the knapsack of some given volume with different materials such that the value of selected items is maximized.

27. Which of the following statement about 0/1 knapsack and fractional knapsack problem is correct?
a) In 0/1 knapsack problem items are divisible and in fractional knapsack items are indivisible
b) Both are the same
c) 0/1 knapsack is solved using a greedy algorithm and fractional knapsack is solved using dynamic programming
d) In 0/1 knapsack problem items are indivisible and in fractional knapsack items are divisible
Answer: d
Explanation: In fractional knapsack problem we can partially include an item into the knapsack whereas in 0/1 knapsack we have to either include or exclude the item wholly.

28. Time complexity of fractional knapsack problem is ____________
a) O(n log n)
b) O(n)
c) O(n2)
d) O(nW)
Answer: a
Explanation: As the main time taking a step is of sorting so it defines the time complexity of our code. So the time complexity will be O(n log n) if we use quick sort for sorting.

29. Fractional knapsack problem can be solved in time O(n).
a) True
b) False
Answer: a
Explanation: It is possible to solve the problem in O(n) time by adapting the algorithm for finding weighted medians.

30. Given items as {value,weight} pairs {{40,20},{30,10},{20,5}}. The capacity of knapsack=20. Find the maximum value output assuming items to be divisible.
a) 60
b) 80
c) 100
d) 40
Answer: a
Explanation: The value/weight ratio are-{2,3,4}. So we include the second and third items wholly into the knapsack. This leaves only 5 units of volume for the first item. So we include the first item partially.
Final value = 20 + 30 + 40×(5/20) = 60.


31. The result of the fractional knapsack is greater than or equal to 0/1 knapsack.
a) True
b) False
Answer: a
Explanation: As fractional knapsack gives extra liberty to include the object partially which is not possible with 0/1 knapsack, thus we get better results with a fractional knapsack.

32. The main time taking step in fractional knapsack problem is ___________
a) Breaking items into a fraction
b) Adding items into a knapsack
c) Sorting
d) Looping through sorted items
Answer: c
Explanation: The main time taking a step is to sort the items according to their value/weight ratio. It defines the time complexity of the code.

33. Given items as {value,weight} pairs {{60,20},{50,25},{20,5}}. The capacity of knapsack=40. Find the maximum value output assuming items to be divisible and nondivisible respectively.
a) 100, 80
b) 110, 70
c) 130, 110
d) 110, 80
Answer: d
Explanation: Assuming items to be divisible:
The value/weight ratios are {3, 2, 4}, so we include the third and first items wholly. Now only 15 units of capacity are left for the second item, so we include it partially.
Final value = 20 + 60 + 50×(15/25) = 80 + 30 = 110.
Assuming items to be indivisible: in this case, we have to leave out the second item due to insufficient capacity.
Final value = 60 + 20 = 80.

Module 4

1. The Bellmann Ford algorithm returns _______ value.
a) Boolean
b) Integer
c) String
d) Double
Answer: a
Explanation: The Bellmann Ford algorithm returns Boolean value whether there is a negative weight cycle that is reachable from the source.

2. Bellmann ford algorithm provides solution for ____________ problems.
a) All pair shortest path
b) Sorting
c) Network flow
d) Single source shortest path
Answer: d
Explanation: Bellmann ford algorithm is used for finding solutions for single source shortest path problems. If the graph has no negative cycles that are reachable from the source then the algorithm produces the shortest paths and their weights.

3. Bellmann Ford algorithm is used to indicate whether the graph has negative weight cycles or not.
a) True
b) False
Answer: a
Explanation: Bellmann Ford algorithm returns true if the graph does not have any negative weight cycles and returns false when the graph has negative weight cycles.

4. How many solution/solutions are available for a graph having negative weight cycle?
a) One solution
b) Two solutions
c) No solution
d) Infinite solutions
Answer: c
Explanation: If the graph has any negative weight cycle then the algorithm indicates that no solution exists for that graph.

5. What is the running time of Bellmann Ford Algorithm?
a) O(V)
b) O(V2)
c) O(ElogV)
d) O(VE)
Answer: d
Explanation: Bellmann Ford algorithm runs in time O(VE), since the initialization takes O(V) for each of V-1 passes and the for loop in the algorithm takes O(E) time. Hence the total time taken by the algorithm is O(VE).

6. How many times the for loop in the Bellmann Ford Algorithm gets executed?
a) V times
b) V-1
c) E
d) E-1
Answer: b
Explanation: The for loop in the Bellmann Ford Algorithm gets executed for V-1 times. After making V-1 passes, the algorithm checks for a negative weight cycle and returns appropriate boolean value.

7. Dijkstra’s Algorithm is more efficient than Bellmann Ford Algorithm.
a) True
b) False
Answer: a
Explanation: The running time of Bellmann Ford Algorithm is O(VE), whereas Dijkstra’s Algorithm has a running time of only O(V²).

8. Identify the correct Bellmann Ford Algorithm.
a)

for i=1 to V[g]-1
do for each edge (u,v) in E[g]
do Relax(u,v,w)
for each edge (u,v) in E[g]
do if d[v]>d[u]+w(u,v)
then return False
return True
b)

for i=1 to V[g]-1
for each edge (u,v) in E[g]
do if d[v]>d[u]+w(u,v)
then return False
return True
c)

for i=1 to V[g]-1
do for each edge (u,v) in E[g]
do Relax(u,v,w)
for each edge (u,v) in E[g]
do if d[v]<d[u]+w(u,v)
then return true
return True
d)

for i=1 to V[g]-1
do for each edge (u,v) in E[g]
do Relax(u,v,w)
return True
Answer: a
Explanation: After initialization, the algorithm makes v-1 passes over the edges of the graph. Each pass is one iteration of the for loop and consists of relaxing each edge of the graph once. Then it checks for the negative weight cycle and returns an appropriate Boolean value.
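As a supplement to the pseudocode above, a minimal C sketch of the same idea (added for illustration; the edge list is an assumed example): relax every edge V−1 times, then make one more pass over the edges to detect a negative weight cycle.

#include <stdio.h>
#include <limits.h>

typedef struct { int u, v, w; } Edge;

int main(void)
{
    int V = 5, E = 10, src = 0;
    Edge e[] = {{0,1,6},{0,2,7},{1,2,8},{1,3,5},{1,4,-4},
                {2,3,-3},{2,4,9},{3,1,-2},{4,0,2},{4,3,7}};
    long long dist[5];

    for (int i = 0; i < V; i++)
        dist[i] = INT_MAX;                           /* "infinity" */
    dist[src] = 0;

    for (int pass = 1; pass <= V - 1; pass++)        /* V-1 relaxation passes */
        for (int j = 0; j < E; j++)
            if (dist[e[j].u] != INT_MAX && dist[e[j].u] + e[j].w < dist[e[j].v])
                dist[e[j].v] = dist[e[j].u] + e[j].w;

    for (int j = 0; j < E; j++)                      /* negative cycle check */
        if (dist[e[j].u] != INT_MAX && dist[e[j].u] + e[j].w < dist[e[j].v]) {
            printf("Negative weight cycle\n");
            return 0;
        }

    for (int i = 0; i < V; i++)                      /* prints 0 2 7 4 -2 */
        printf("dist[%d] = %lld\n", i, dist[i]);
    return 0;
}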

9. What is the basic principle behind Bellmann Ford Algorithm?
a) Interpolation
b) Extrapolation
c) Regression
d) Relaxation
Answer: d
Explanation: Relaxation methods, which are also called iterative methods, progressively replace an approximation to the correct distance with more accurate values until an optimum solution is found.

10. Bellmann Ford Algorithm can be applied for _____________
a) Undirected and weighted graphs
b) Undirected and unweighted graphs
c) Directed and weighted graphs
d) All directed graphs
Answer: c
Explanation: Bellmann Ford Algorithm can be applied for all directed and weighted graphs. The weight function in the graph may either be positive or negative.


11. Bellmann Ford algorithm was first proposed by ________
a) Richard Bellmann
b) Alfonso Shimbe
c) Lester Ford Jr
d) Edward F. Moore
Answer: b
Explanation: Alfonso Shimbe proposed Bellmann Ford algorithm in the year 1955. Later it was published by Richard Bellmann in 1957 and Lester Ford Jr in the year 1956. Hence it is called Bellmann Ford Algorithm.

12. Consider the following graph. What is the minimum cost to travel from node A to node C?
[Figure: weighted graph for Q12]
a) 5
b) 2
c) 1
d) 3
Answer: b
Explanation: The minimum cost to travel from node A to node C is 2.
A-D, cost=1
D-B, cost=-2
B-C, cost=3
Hence the total cost is 2.

13. In the given graph, identify the path that has minimum cost to travel from node a to node f.
[Figure: weighted graph for Q13]
a) a-b-c-f
b) a-d-e-f
c) a-d-b-c-f
d) a-d-b-c-e-f
Answer: d
Explanation: The minimum cost taken by the path a-d-b-c-e-f is 4.
a-d, cost=2
d-b, cost=-2
b-c, cost=1
c-e, cost= 2
e-f, cost=1
Hence the total cost is 4.

14. Bellmann Ford Algorithm is an example for ____________
a) Dynamic Programming
b) Greedy Algorithms
c) Linear Programming
d) Branch and Bound
Answer: a
Explanation: In Bellmann Ford Algorithm the shortest paths are calculated in bottom up manner which is similar to other dynamic programming problems.

15. A graph is said to have a negative weight cycle when?
a) The graph has 1 negative weighted edge
b) The graph has a cycle
c) The total weight of the graph is negative
d) The graph has 1 or more negative weighted edges
Answer: c
Explanation: When the edge weights along a cycle sum up to a negative number, the graph is said to have a negative weight cycle. Bellmann Ford Algorithm provides no solution for such graphs.


16. Which of the following methods can be used to solve the assembly line scheduling problem?
a) Recursion
b) Brute force
c) Dynamic programming
d) All of the mentioned
Answer: d
Explanation: All of the above-mentioned methods can be used to solve the assembly line scheduling problem.

17. What is the time complexity of the brute force algorithm used to solve the assembly line scheduling problem?
a) O(1)
b) O(n)
c) O(n2)
d) O(2ⁿ)
Answer: d
Explanation: In the brute force algorithm, all the possible ways are calculated, which are of the order of 2ⁿ.

18. In the dynamic programming implementation of the assembly line scheduling problem, how many lookup tables are required?
a) 0
b) 1
c) 2
d) 3
Answer: c
Explanation: In the dynamic programming implementation of the assembly line scheduling problem, 2 lookup tables are required one for storing the minimum time and the other for storing the assembly line number.

19. Consider the following assembly line problem:

time_to_reach[2][3] = {{17, 2, 7}, {19, 4, 9}}
time_spent[2][4] = {{6, 5, 15, 7}, {5, 10, 11, 4}}
entry_time[2] = {8, 10}
exit_time[2] = {10, 7}
num_of_stations = 4
For the optimal solution which should be the starting assembly line?
a) Line 1
b) Line 2
c) All of the mentioned
d) None of the mentioned
Answer: b
Explanation: For the optimal solution, the starting assembly line is line 2.

20. Consider the following assembly line problem:
time_to_reach[2][3] = {{17, 2, 7}, {19, 4, 9}}
time_spent[2][4] = {{6, 5, 15, 7}, {5, 10, 11, 4}}
entry_time[2] = {8, 10}
exit_time[2] = {10, 7}
num_of_stations = 4
For the optimal solution, which should be the exit assembly line?
a) Line 1
b) Line 2
c) All of the mentioned
d) None of the mentioned
Answer: b
Explanation: For the optimal solution, the exit assembly line is line 2.


21. Consider the following assembly line problem:
time_to_reach[2][3] = {{17, 2, 7}, {19, 4, 9}}
time_spent[2][4] = {{6, 5, 15, 7}, {5, 10, 11, 4}}
entry_time[2] = {8, 10}
exit_time[2] = {10, 7}
num_of_stations = 4
What is the minimum time required to build the car chassis?
a) 40
b) 41
c) 42
d) 43
Answer: d
Explanation: The minimum time required is 43.
The path is S2,1 -> S1,2 -> S2,3 -> S2,4, where Si,j : i = line number, j = station number

22. Consider the following code:

#include<stdio.h>
int get_min(int a, int b)
{
if(a<b)
return a;
return b;
}
int minimum_time_required(int reach[][3],int spent[][4], int *entry, int *exit, int n)
{
int t1[n], t2[n],i;
t1[0] = entry[0] + spent[0][0];
t2[0] = entry[1] + spent[1][0];
for(i = 1; i < n; i++)
{
t1[i] = get_min(t1[i-1]+spent[0][i], t2[i-1]+reach[1][i-1]+spent[0][i]);
__________;
}
return get_min(t1[n-1]+exit[0], t2[n-1]+exit[1]);
}
Which of the following lines should be inserted to complete the above code?
a) t2[i] = get_min(t2[i-1]+spent[1][i], t1[i-1]+reach[0][i-1]+spent[1][i])
b) t2[i] = get_min(t2[i-1]+spent[1][i], t1[i-1]+spent[1][i])
c) t2[i] = get_min(t2[i-1]+spent[1][i], t1[i-1]+reach[0][i-1])
d) none of the mentioned
Answer: a
Explanation: The line t2[i] = get_min(t2[i-1]+spent[1][i], t1[i-1]+reach[0][i-1]+spent[1][i]) should be added to complete the above code.

23. What is the time complexity of the above dynamic programming implementation of the assembly line scheduling problem?
a) O(1)
b) O(n)
c) O(n2)
d) O(n3)
Answer: b
Explanation: The time complexity of the above dynamic programming implementation of the assembly line scheduling problem is O(n).

24. What is the space complexity of the above dynamic programming implementation of the assembly line scheduling problem?
a) O(1)
b) O(n)
c) O(n2)
d) O(n3)
Answer: b
Explanation: The space complexity of the above dynamic programming implementation of the assembly line scheduling problem is O(n).

25. What is the output of the following code?

#include<stdio.h>
int get_min(int a, int b)
{
if(a<b)
return a;
return b;
}
int minimum_time_required(int reach[][3],int spent[][4], int *entry, int *exit, int n)
{
int t1[n], t2[n], i;
t1[0] = entry[0] + spent[0][0];
t2[0] = entry[1] + spent[1][0];
for(i = 1; i < n; i++)
{
t1[i] = get_min(t1[i-1]+spent[0][i], t2[i-1]+reach[1][i-1]+spent[0][i]);
t2[i] = get_min(t2[i-1]+spent[1][i], t1[i-1]+reach[0][i-1]+spent[1][i]);
}
return get_min(t1[n-1]+exit[0], t2[n-1]+exit[1]);
}
int main()
{
int time_to_reach[][3] = {{6, 1, 5},
{2, 4, 7}};
int time_spent[][4] = {{6, 5, 4, 7},
{5, 10, 2, 6}};
int entry_time[2] = {5, 6};
int exit_time[2] = {8, 9};
int num_of_stations = 4;
int ans = minimum_time_required(time_to_reach, time_spent,
entry_time, exit_time, num_of_stations);
printf("%d",ans);
return 0;
}
a) 32
b) 33
c) 34
d) 35
Answer: c
Explanation: The program prints the optimal time required to build the car chassis, which is 34.


26. What is the value stored in t1[2] when the following code is executed?
#include<stdio.h>
int get_min(int a, int b)
{
if(a<b)
return a;
return b;
}
int minimum_time_required(int reach[][3],int spent[][4], int *entry, int *exit, int n)
{
int t1[n], t2[n],i;
t1[0] = entry[0] + spent[0][0];
t2[0] = entry[1] + spent[1][0];
for(i = 1; i < n; i++)
{
t1[i] = get_min(t1[i-1]+spent[0][i], t2[i-1]+reach[1][i-1]+spent[0][i]);
t2[i] = get_min(t2[i-1]+spent[1][i], t1[i-1]+reach[0][i-1]+spent[1][i]);
}
return get_min(t1[n-1]+exit[0], t2[n-1]+exit[1]);
}
int main()
{
int time_to_reach[][3] = {{6, 1, 5},
{2, 4, 7}};
int time_spent[][4] = {{6, 5, 4, 7},
{5, 10, 2, 6}};
int entry_time[2] = {5, 6};
int exit_time[2] = {8, 9};
int num_of_stations = 4;
int ans = minimum_time_required(time_to_reach, time_spent,
entry_time, exit_time, num_of_stations);
printf("%d",ans);
return 0;
}
a) 16
b) 18
c) 20
d) 22
Answer: c
Explanation: The value stored in t1[2] when the above code is executed is 20.

27. What is the value stored in t2[3] when the following code is executed?
#include<stdio.h>
int get_min(int a, int b)
{
if(a<b)
return a;
return b;
}
int minimum_time_required(int reach[][3],int spent[][4], int *entry, int *exit, int n)
{
int t1[n], t2[n],i;
t1[0] = entry[0] + spent[0][0];
t2[0] = entry[1] + spent[1][0];
for(i = 1; i < n; i++)
{
t1[i] = get_min(t1[i-1]+spent[0][i], t2[i-1]+reach[1][i-1]+spent[0][i]);
t2[i] = get_min(t2[i-1]+spent[1][i], t1[i-1]+reach[0][i-1]+spent[1][i]);
}
return get_min(t1[n-1]+exit[0], t2[n-1]+exit[1]);
}
int main()
{
int time_to_reach[][3] = {{6, 1, 5},
{2, 4, 7}};
int time_spent[][4] = {{6, 5, 4, 7},
{5, 10, 2, 6}};
int entry_time[2] = {5, 6};
int exit_time[2] = {8, 9};
int num_of_stations = 4;
int ans = minimum_time_required(time_to_reach, time_spent,
entry_time, exit_time, num_of_stations);
printf("%d",ans);
return 0;
}
a) 19
b) 23
c) 25
d) 27
Answer: c
Explanation: The value stored in t2[3] when the above code is executed is 25.

28. What is the output of the following code?
#include<stdio.h>
int get_min(int a, int b)
{
if(a<b)
return a;
return b;
}
int minimum_time_required(int reach[][4],int spent[][5], int *entry, int *exit, int n)
{
int t1[n], t2[n], i;
t1[0] = entry[0] + spent[0][0];
t2[0] = entry[1] + spent[1][0];
for(i = 1; i < n; i++)
{
t1[i] = get_min(t1[i-1]+spent[0][i], t2[i-1]+reach[1][i-1]+spent[0][i]);
t2[i] = get_min(t2[i-1]+spent[1][i], t1[i-1]+reach[0][i-1]+spent[1][i]);
}
return get_min(t1[n-1]+exit[0], t2[n-1]+exit[1]);
}
int main()
{
int time_to_reach[][4] = {{16, 10, 5, 12},
{12, 4, 17, 8}};
int time_spent[][5] = {{13, 5, 20, 19, 9},
{15, 10, 12, 16, 13}};
int entry_time[2] = {12, 9};
int exit_time[2] = {10, 13};
int num_of_stations = 5;
int ans = minimum_time_required(time_to_reach, time_spent,
entry_time, exit_time, num_of_stations);
printf("%d",ans);
return 0;
}
a) 62
b) 69
c) 75
d) 88
Answer: d
Explanation: The program prints the optimal time required to build the car chassis, which is 88.

29. Floyd Warshall’s Algorithm is used for solving ____________
a) All pair shortest path problems
b) Single Source shortest path problems
c) Network flow problems
d) Sorting problems
Answer: a
Explanation: Floyd Warshall’s Algorithm is used for solving all pair shortest path problems. It means the algorithm is used for finding the shortest paths between all pairs of vertices in a graph.

30. Floyd Warshall’s Algorithm can be applied on __________
a) Undirected and unweighted graphs
b) Undirected graphs
c) Directed graphs
d) Acyclic graphs
Answer: c
Explanation: Floyd Warshall Algorithm can be applied in directed graphs. From a given directed graph, an adjacency matrix is framed and then all pair shortest path is computed by the Floyd Warshall Algorithm.


31. What is the running time of the Floyd Warshall Algorithm?
a) Big-oh(V)
b) Theta(V²)
c) Big-Oh(VE)
d) Theta(V³)
Answer: d
Explanation: The running time of the Floyd Warshall algorithm is determined by the triply nested for loops. Since each execution of the innermost loop body takes O(1) time, the algorithm runs in Theta(V³) time.
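A minimal C sketch of the triply nested loop (added for illustration; the 4-vertex adjacency matrix is an assumed example):

#include <stdio.h>

#define INF 100000   /* large value standing in for "no edge" */
#define N 4

int main(void)
{
    int d[N][N] = {
        {0,   5,   INF, 10},
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1},
        {INF, INF, INF, 0}
    };

    /* d[i][j] = min(d[i][j], d[i][k] + d[k][j]) for every intermediate k:
       three nested loops over N vertices, hence the Theta(V^3) running time. */
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];

    for (int i = 0; i < N; i++) {    /* print the all-pairs shortest distances */
        for (int j = 0; j < N; j++)
            printf("%7d", d[i][j]);
        printf("\n");
    }
    return 0;
}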

32. What approach is being followed in Floyd Warshall Algorithm?
a) Greedy technique
b) Dynamic Programming
c) Linear Programming
d) Backtracking
Answer: b
Explanation: Floyd Warshall Algorithm follows dynamic programming approach because the all pair shortest paths are computed in bottom up manner.

33. Floyd Warshall Algorithm can be used for finding _____________
a) Single source shortest path
b) Topological sort
c) Minimum spanning tree
d) Transitive closure
Answer: d
Explanation: One of the ways to compute the transitive closure of a graph in Theta(N³) time is to assign a weight of 1 to each edge of E and then run the Floyd Warshall Algorithm.

34. What procedure is being followed in Floyd Warshall Algorithm?
a) Top down
b) Bottom up
c) Big bang
d) Sandwich
Answer: b
Explanation: Bottom up procedure is being used to compute the values of the matrix elements dij(k). The input is an n x n matrix. The procedure returns the matrix D(n) of the shortest path weights.

35. Floyd- Warshall algorithm was proposed by ____________
a) Robert Floyd and Stephen Warshall
b) Stephen Floyd and Robert Warshall
c) Bernad Floyd and Robert Warshall
d) Robert Floyd and Bernad Warshall
Answer: a
Explanation: Floyd- Warshall Algorithm was proposed by Robert Floyd in the year 1962. The same algorithm was proposed by Stephen Warshall during the same year for finding the transitive closure of the graph.


36. Who proposed the modern formulation of Floyd-Warshall Algorithm as three nested loops?
a) Robert Floyd
b) Stephen Warshall
c) Bernard Roy
d) Peter Ingerman
Answer: d
Explanation: The modern formulation of Floyd-Warshall Algorithm as three nested for-loops was proposed by Peter Ingerman in the year 1962.

37. Complete the program.
n = rows[W]
D(0) = W
for k = 1 to n
    do for i = 1 to n
        do for j = 1 to n
            do ________________________________
return D(n)
a) d_ij(k) = min(d_ij(k-1), d_ik(k-1) − d_kj(k-1))
b) d_ij(k) = max(d_ij(k-1), d_ik(k-1) − d_kj(k-1))
c) d_ij(k) = min(d_ij(k-1), d_ik(k-1) + d_kj(k-1))
d) d_ij(k) = max(d_ij(k-1), d_ik(k-1) + d_kj(k-1))
Answer: c
Explanation: In order to compute the shortest path from vertex i to vertex j, we need to take the minimum of two values: d_ij(k-1) and the sum d_ik(k-1) + d_kj(k-1).

38. What happens when the value of k is 0 in the Floyd Warshall Algorithm?
a) 1 intermediate vertex
b) 0 intermediate vertex
c) N intermediate vertices
d) N-1 intermediate vertices
Answer: b
Explanation: When k=0, a path from vertex i to vertex j has no intermediate vertices at all. Such a path has at most one edge and hence dij(0) = wij.

39. Using logical operators instead of arithmetic operators saves time and space.
a) True
b) False
Answer: a
Explanation: In computers, logical operations on single bit values execute faster than arithmetic operations on integer words of data.

40. The time taken to compute the transitive closure of a graph is Theta(n²).
a) True
b) False
Answer: b
Explanation: The time taken to compute the transitive closure of a graph is Theta(n³). Transitive closure can be computed by assigning a weight of 1 to each edge and running the Floyd Warshall Algorithm.


41. In the given graph, what is the minimum cost to travel from vertex 1 to vertex 3?
[Figure: weighted graph for Q41]
a) 3
b) 2
c) 10
d) -3
Answer: d
Explanation: The minimum cost required to travel from node 1 to node 3 is -3.
1-5, cost is -4
5-4, cost is 6
4-3, cost is -5
Hence total cost is -4 + 6 + -5 = -3.

42. In the given graph, how many intermediate vertices are required to travel from node a to node e at a minimum cost?
[Figure: weighted graph for Q42]
a) 2
b) 0
c) 1
d) 3
Answer: c
Explanation: The minimum cost to travel from node a to node e is 1 by passing via nodes b and c.
a-b, cost 5
b-c, cost 3
c-e, cost -7
Hence the total cost is 1.

43. What is the formula to compute the transitive closure of a graph?
a) t_ij(k) = t_ij(k-1) AND (t_ik(k-1) OR t_kj(k-1))
b) t_ij(k) = t_ij(k-1) OR (t_ik(k-1) AND t_kj(k-1))
c) t_ij(k) = t_ij(k-1) AND (t_ik(k-1) AND t_kj(k-1))
d) t_ij(k) = t_ij(k-1) OR (t_ik(k-1) OR t_kj(k-1))
Answer: b
Explanation: The transitive closure of a graph can be computed using the Floyd Warshall algorithm. This method substitutes the logical operations (logical OR and logical AND) for the arithmetic operations min and + in the Floyd Warshall Algorithm.
Floyd Warshall Algorithm: d_ij(k) = min(d_ij(k-1), d_ik(k-1) + d_kj(k-1))
Transitive closure: t_ij(k) = t_ij(k-1) OR (t_ik(k-1) AND t_kj(k-1)).

42. Which of the following methods can be used to solve the longest common subsequence problem?
a) Recursion
b) Dynamic programming
c) Both recursion and dynamic programming
d) Greedy algorithm
Answer: c
Explanation: Both recursion and dynamic programming can be used to solve the longest common subsequence problem.

43. Consider the strings “PQRSTPQRS” and “PRATPBRQRPS”. What is the length of the longest common subsequence?
a) 9
b) 8
c) 7
d) 6
Answer: c
Explanation: The longest common subsequence is “PRTPQRS” and its length is 7.

44. Which of the following problems can be solved using the longest common subsequence problem?
a) Longest increasing subsequence
b) Longest palindromic subsequence
c) Longest bitonic subsequence
d) Longest decreasing subsequence
Answer: b
Explanation: To find the longest palindromic subsequence in a given string, reverse the given string and then find the longest common subsequence in the given string and the reversed string.

45. Longest common subsequence is an example of ____________
a) Greedy algorithm
b) 2D dynamic programming
c) 1D dynamic programming
d) Divide and conquer
Answer: b
Explanation: Longest common subsequence is an example of 2D dynamic programming.


46. What is the time complexity of the brute force algorithm used to find the longest common subsequence?
a) O(n)
b) O(n^2)
c) O(n^3)
d) O(2^n)
Answer: d
Explanation: The time complexity of the brute force algorithm used to find the longest common subsequence is O(2^n).
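To see where the O(2^n) bound comes from, here is a minimal sketch of the plain recursive solution (illustrative names; the example strings follow the ones used in the next few questions): every mismatch spawns two recursive calls, so without memoisation the call tree can grow exponentially.

#include <stdio.h>
#include <string.h>

/* Plain recursion with no table: every mismatch branches into two subproblems,
   so the call tree can contain on the order of 2^n nodes in the worst case. */
int lcs_rec(const char *a, const char *b, int m, int n)
{
    if (m == 0 || n == 0)
        return 0;
    if (a[m - 1] == b[n - 1])
        return 1 + lcs_rec(a, b, m - 1, n - 1);
    int left  = lcs_rec(a, b, m - 1, n);
    int right = lcs_rec(a, b, m, n - 1);
    return left > right ? left : right;
}

int main(void)
{
    const char *a = "abcedfg", *b = "bcdfh";
    printf("%d\n", lcs_rec(a, b, (int)strlen(a), (int)strlen(b)));  /* prints 4 ("bcdf") */
    return 0;
}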

47. Consider the following dynamic programming implementation of the longest common subsequence problem:
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lcs(char *str1, char *str2)
{
int i,j,len1,len2;
len1 = strlen(str1);
len2 = strlen(str2);
int arr[len1 + 1][len2 + 1];
for(i = 0; i <= len1; i++)
arr[i][0] = 0;
for(i = 0; i <= len2; i++)
arr[0][i] = 0;
for(i = 1; i <= len1; i++)
{
for(j = 1; j <= len2; j++)
{
if(str1[i-1] == str2[j - 1])
______________;
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len1][len2];
}
int main()
{
char str1[] = "abcedfg", str2[] = "bcdfh";
int ans = lcs(str1,str2);
printf("%d",ans);
return 0;
}
Which of the following lines completes the above code?
a) arr[i][j] = 1 + arr[i][j].
b) arr[i][j] = 1 + arr[i - 1][j - 1].
c) arr[i][j] = arr[i - 1][j - 1].
d) arr[i][j] = arr[i][j].
Answer: b
Explanation: The line, arr[i][j] = 1 + arr[i - 1][j - 1] completes the above code.

48. What is the time complexity of the following dynamic programming implementation of the longest common subsequence problem where length of one string is “m” and the length of the other string is “n”?
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lcs(char *str1, char *str2)
{
int i,j,len1,len2;
len1 = strlen(str1);
len2 = strlen(str2);
int arr[len1 + 1][len2 + 1];
for(i = 0; i <= len1; i++)
arr[i][0] = 0;
for(i = 0; i <= len2; i++)
arr[0][i] = 0;
for(i = 1; i <= len1; i++)
{
for(j = 1; j <= len2; j++)
{
if(str1[i-1] == str2[j - 1])
arr[i][j] = 1 + arr[i - 1][j - 1];
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len1][len2];
}
int main()
{
char str1[] = "abcedfg", str2[] = "bcdfh";
int ans = lcs(str1,str2);
printf("%d",ans);
return 0;
}
a) O(n)
b) O(m)
c) O(m + n)
d) O(mn)
Answer: d
Explanation: The time complexity of the above dynamic programming implementation of the longest common subsequence is O(mn).

49. What is the space complexity of the following dynamic programming implementation of the longest common subsequence problem where length of one string is “m” and the length of the other string is “n”?
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lcs(char *str1, char *str2)
{
int i,j,len1,len2;
len1 = strlen(str1);
len2 = strlen(str2);
int arr[len1 + 1][len2 + 1];
for(i = 0; i <= len1; i++)
arr[i][0] = 0;
for(i = 0; i <= len2; i++)
arr[0][i] = 0;
for(i = 1; i <= len1; i++)
{
for(j = 1; j <= len2; j++)
{
if(str1[i-1] == str2[j - 1])
arr[i][j] = 1 + arr[i - 1][j - 1];
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len1][len2];
}
int main()
{
char str1[] = "abcedfg", str2[] = "bcdfh";
int ans = lcs(str1,str2);
printf("%d",ans);
return 0;
}
a) O(n)
b) O(m)
c) O(m + n)
d) O(mn)
Answer: d
Explanation: The space complexity of the above dynamic programming implementation of the longest common subsequence is O(mn).

50. What is the output of the following code?
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lcs(char *str1, char *str2)
{
int i,j,len1,len2;
len1 = strlen(str1);
len2 = strlen(str2);
int arr[len1 + 1][len2 + 1];
for(i = 0; i <= len1; i++)
arr[i][0] = 0;
for(i = 0; i <= len2; i++)
arr[0][i] = 0;
for(i = 1; i <= len1; i++)
{
for(j = 1; j <= len2; j++)
{
if(str1[i-1] == str2[j - 1])
arr[i][j] = 1 + arr[i - 1][j - 1];
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len1][len2];
}
int main()
{
char str1[] = "hbcfgmnapq", str2[] = "cbhgrsfnmq";
int ans = lcs(str1,str2);
printf("%d",ans);
return 0;
}
a) 3
b) 4
c) 5
d) 6
Answer: b
Explanation: The program prints the length of the longest common subsequence, which is 4.


51. Which of the following is not the longest common subsequence between the strings “hbcfgmnapq” and “cbhgrsfnmq”?
a) hgmq
b) cfnq
c) bfmq
d) fgmna
Answer: d
Explanation: The length of the longest common subsequence is 4. ‘fgmna’ has length 5 and is not a common subsequence of the two strings, so it cannot be the longest common subsequence.

52. What is the value stored in arr[2][3] when the following code is executed?
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lcs(char *str1, char *str2)
{
int i,j,len1,len2;
len1 = strlen(str1);
len2 = strlen(str2);
int arr[len1 + 1][len2 + 1];
for(i = 0; i <= len1; i++)
arr[i][0] = 0;
for(i = 0; i <= len2; i++)
arr[0][i] = 0;
for(i = 1; i <= len1; i++)
{
for(j = 1; j <= len2; j++)
{
if(str1[i-1] == str2[j - 1])
arr[i][j] = 1 + arr[i - 1][j - 1];
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len1][len2];
}
int main()
{
char str1[] = "hbcfgmnapq", str2[] = "cbhgrsfnmq";
int ans = lcs(str1,str2);
printf("%d",ans);
return 0;
}
a) 1
b) 2
c) 3
d) 4
Answer: a
Explanation: The value stored in arr[2][3] is 1.

53. What is the output of the following code?
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lcs(char *str1, char *str2)
{
int i,j,len1,len2;
len1 = strlen(str1);
len2 = strlen(str2);
int arr[len1 + 1][len2 + 1];
for(i = 0; i <= len1; i++)
arr[i][0] = 0;
for(i = 0; i <= len2; i++)
arr[0][i] = 0;
for(i = 1; i <= len1; i++)
{
for(j = 1; j <= len2; j++)
{
if(str1[i-1] == str2[j - 1])
arr[i][j] = 1 + arr[i - 1][j - 1];
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len1][len2];
}
int main()
{
char str1[] = "abcd", str2[] = "efgh";
int ans = lcs(str1,str2);
printf("%d",ans);
return 0;
}
a) 3
b) 2
c) 1
d) 0
Answer: d
Explanation: The program prints the length of the longest common subsequence, which is 0.

54. Which of the following methods can be used to solve the longest palindromic subsequence problem?
a) Dynamic programming
b) Recursion
c) Brute force
d) Dynamic programming, Recursion, Brute force
Answer: d
Explanation: Dynamic programming, Recursion, Brute force can be used to solve the longest palindromic subsequence problem.

55. Which of the following is not a palindromic subsequence of the string “ababcdabba”?
a) abcba
b) abba
c) abbbba
d) adba
Answer: d
Explanation: ‘adba’ is a subsequence of the given string but it is not a palindrome, so it is not a palindromic subsequence.

55. For which of the following, the length of the string is not equal to the length of the longest palindromic subsequence?
a) A string that is a palindrome
b) A string of length one
c) A string that has all the same letters(e.g. aaaaaa)
d) Some strings of length two
Answer: d
Explanation: Some strings of length two, e.g. “ab”, are not palindromes, so their longest palindromic subsequence has length 1 rather than 2.


56. What is the length of the longest palindromic subsequence for the string “ababcdabba”?
a) 6
b) 7
c) 8
d) 9
Answer: b
Explanation: The longest palindromic subsequence is “abbabba” and its length is 7.

57. What is the time complexity of the brute force algorithm used to find the length of the longest palindromic subsequence?
a) O(1)
b) O(2^n)
c) O(n)
d) O(n^2)
Answer: b
Explanation: In the brute force algorithm, all the subsequences are found and the length of the longest palindromic subsequence is calculated. This takes exponential time.

58. For every non-empty string, the length of the longest palindromic subsequence is at least one.
a) True
b) False
Answer: a
Explanation: A single character of any string can always be considered as a palindrome and its length is one.

59. Longest palindromic subsequence is an example of ______________
a) Greedy algorithm
b) 2D dynamic programming
c) 1D dynamic programming
d) Divide and conquer
Answer: b
Explanation: Longest palindromic subsequence is an example of 2D dynamic programming.

60. Consider the following code:
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lps(char *str1)
{
int i,j,len;
len = strlen(str1);
char str2[len + 1];
strcpy(str2, str1);
______________;
int arr[len + 1][len + 1];
for(i = 0; i <= len; i++)
arr[i][0] = 0;
for(i = 0; i <= len; i++)
arr[0][i] = 0;
for(i = 1; i <= len; i++)
{
for(j = 1; j <= len; j++)
{
if(str1[i-1] == str2[j - 1])
arr[i][j] = 1 + arr[i - 1][j - 1];
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len][len];
}
int main()
{
char str1[] = "ababcdabba";
int ans = lps(str1);
printf("%d",ans);
return 0;
}
Which of the following lines completes the above code?
a) strrev(str2)
b) str2 = str1
c) len2 = strlen(str2)
d) strlen(str2)
Answer: a
Explanation: To find the longest palindromic subsequence, we need to reverse the copy of the string, which is done by strrev.
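Note that strrev is not part of standard C; it is a compiler-specific extension, so the code above may not compile everywhere. A possible portable stand-in (illustrative name reverse_in_place) is sketched below.

#include <stdio.h>
#include <string.h>

/* Portable replacement for strrev: reverses a NUL-terminated string in place. */
static void reverse_in_place(char *s)
{
    size_t len = strlen(s);
    if (len < 2)
        return;
    for (size_t i = 0, j = len - 1; i < j; i++, j--) {
        char tmp = s[i];
        s[i] = s[j];
        s[j] = tmp;
    }
}

int main(void)
{
    char s[] = "ababcdabba";
    reverse_in_place(s);
    printf("%s\n", s);   /* prints "abbadcbaba" */
    return 0;
}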


61. What is the time complexity of the following dynamic programming implementation to find the longest palindromic subsequence where the length of the string is n?
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lps(char *str1)
{
int i,j,len;
len = strlen(str1);
char str2[len + 1];
strcpy(str2, str1);
strrev(str2);
int arr[len + 1][len + 1];
for(i = 0; i <= len; i++)
arr[i][0] = 0;
for(i = 0; i <= len; i++)
arr[0][i] = 0;
for(i = 1; i <= len; i++)
{
for(j = 1; j <= len; j++)
{
if(str1[i-1] == str2[j - 1])
arr[i][j] = 1 + arr[i - 1][j - 1];
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len][len];
}
int main()
{
char str1[] = "ababcdabba";
int ans = lps(str1);
printf("%d",ans);
return 0;
}
a) O(n)
b) O(1)
c) O(n^2)
d) O(2)
Answer: c
Explanation: The time complexity of the above dynamic programming implementation to find the longest palindromic subsequence is O(n^2).

62. What is the space complexity of the following dynamic programming implementation to find the longest palindromic subsequence where the length of the string is n?
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lps(char *str1)
{
int i,j,len;
len = strlen(str1);
char str2[len + 1];
strcpy(str2, str1);
strrev(str2);
int arr[len + 1][len + 1];
for(i = 0; i <= len; i++)
arr[i][0] = 0;
for(i = 0; i <= len; i++)
arr[0][i] = 0;
for(i = 1; i <= len; i++)
{
for(j = 1; j <= len; j++)
{
if(str1[i-1] == str2[j - 1])
arr[i][j] = 1 + arr[i - 1][j - 1];
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len][len];
}
int main()
{
char str1[] = "ababcdabba";
int ans = lps(str1);
printf("%d",ans);
return 0;
}
a) O(n)
b) O(1)
c) O(n^2)
d) O(2)
Answer: c
Explanation: The space complexity of the above dynamic programming implementation to find the longest palindromic subsequence is O(n^2).

63. What is the value stored in arr[3][3] when the following code is executed?
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lps(char *str1)
{
int i,j,len;
len = strlen(str1);
char str2[len + 1];
strcpy(str2, str1);
strrev(str2);
int arr[len + 1][len + 1];
for(i = 0; i <= len; i++)
arr[i][0] = 0;
for(i = 0; i <= len; i++)
arr[0][i] = 0;
for(i = 1; i <= len; i++)
{
for(j = 1; j <= len; j++)
{
if(str1[i-1] == str2[j - 1])
arr[i][j] = 1 + arr[i - 1][j - 1];
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len][len];
}
int main()
{
char str1[] = "ababcdabba";
int ans = lps(str1);
printf("%d",ans);
return 0;
}
a) 2
b) 3
c) 4
d) 5
Answer: a
Explanation: The value stored in arr[3][3] when the above code is executed is 2.

64. What is the output of the following code?
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lps(char *str1)
{
int i,j,len;
len = strlen(str1);
char str2[len + 1];
strcpy(str2, str1);
strrev(str2);
int arr[len + 1][len + 1];
for(i = 0; i <= len; i++)
arr[i][0] = 0;
for(i = 0; i <= len; i++)
arr[0][i] = 0;
for(i = 1; i <= len; i++)
{
for(j = 1; j <= len; j++)
{
if(str1[i-1] == str2[j - 1])
arr[i][j] = 1 + arr[i - 1][j - 1];
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len][len];
}
int main()
{
char str1[] = "abcd";
int ans = lps(str1);
printf("%d",ans);
return 0;
}
a) 0
b) 1
c) 2
d) 3
Answer: b
Explanation: The program prints the length of the longest palindromic subsequence, which is 1.

65. What is the output of the following code?
#include<stdio.h>
#include<string.h>
int max_num(int a, int b)
{
if(a > b)
return a;
return b;
}
int lps(char *str1)
{
int i,j,len;
len = strlen(str1);
char str2[len + 1];
strcpy(str2, str1);
strrev(str2);
int arr[len + 1][len + 1];
for(i = 0; i <= len; i++)
arr[i][0] = 0;
for(i = 0; i <= len; i++)
arr[0][i] = 0;
for(i = 1; i <= len; i++)
{
for(j = 1; j <= len; j++)
{
if(str1[i-1] == str2[j - 1])
arr[i][j] = 1 + arr[i - 1][j - 1];
else
arr[i][j] = max_num(arr[i - 1][j], arr[i][j - 1]);
}
}
return arr[len][len];
}
int main()
{
char str1[] = "abdgkagdjbccbba";
int ans = lps(str1);
printf("%d",ans);
return 0;
}
a) 5
b) 7
c) 9
d) 11
Answer: c
Explanation: The program prints the length of the longest palindromic subsequence, which is 9.

Module 5

1. Which of the problems cannot be solved by backtracking method?
a) n-queen problem
b) subset sum problem
c) hamiltonian circuit problem
d) travelling salesman problem
Answer: d
Explanation: N-queen problem, subset sum problem, Hamiltonian circuit problems can be solved by backtracking method whereas travelling salesman problem is solved by Branch and bound method.

2. Backtracking algorithm is implemented by constructing a tree of choices called as?
a) State-space tree
b) State-chart tree
c) Node tree
d) Backtracking tree
Answer: a
Explanation: Backtracking problem is solved by constructing a tree of choices called as the state-space tree. Its root represents an initial state before the search for a solution begins.

3. What happens when the backtracking algorithm reaches a complete solution?
a) It backtracks to the root
b) It continues searching for other possible solutions
c) It traverses from a different route
d) Recursively traverses through the same route
Answer: b
Explanation: When we reach a final solution using a backtracking algorithm, we either stop or continue searching for other possible solutions.

4. A node is said to be ____________ if it has a possibility of reaching a complete solution.
a) Non-promising
b) Promising
c) Succeeding
d) Preceding
Answer: b
Explanation: If a node has a possibility of reaching the final solution, it is called a promising node. Otherwise, it is non-promising.

5. In what manner is a state-space tree for a backtracking algorithm constructed?
a) Depth-first search
b) Breadth-first search
c) Twice around the tree
d) Nearest neighbour first
Answer: a
Explanation: A state-space tree for a backtracking algorithm is constructed in the manner of a depth-first search, so only the current path from the root has to be kept in memory while the search explores the tree.

6. The leaves in a state-space tree represent only complete solutions.
a) true
b) false
Answer: b
Explanation: The leaves in a state space tree can either represent non-promising dead ends or complete solutions found by the algorithm.

7. In general, backtracking can be used to solve?
a) Numerical problems
b) Exhaustive search
c) Combinatorial problems
d) Graph coloring problems
Answer: c
Explanation: The backtracking approach is used to solve complex combinatorial problems for which a plain exhaustive search would be too expensive.

8. Which one of the following is an application of the backtracking algorithm?
a) Finding the shortest path
b) Finding the efficient quantity to shop
c) Ludo
d) Crossword
Answer: d
Explanation: Crossword puzzles are based on the backtracking approach, whereas the other options correspond to the travelling salesman problem, the knapsack problem and a dice game respectively.

9. Backtracking algorithm is faster than the brute force technique
a) true
b) false
Answer: a
Explanation: Backtracking is faster than the brute force approach since it can prune away a large set of candidate answers with a single test.

10. Which of the following logical programming languages is not based on backtracking?
a) Icon
b) Prolog
c) Planner
d) Fortran
Answer: d
Explanation: Backtracking forms the basis for Icon, Planner and Prolog, whereas Fortran is an early high-level procedural language that does not rely on backtracking.


11. The problem of finding a list of integers in a given specific range that meets certain conditions is called?
a) Subset sum problem
b) Constraint satisfaction problem
c) Hamiltonian circuit problem
d) Travelling salesman problem
Answer: b
Explanation: Constraint satisfaction problem is the problem of finding a list of integers under given constraints. Constraint satisfaction problem is solved using a backtracking approach.

12. Who coined the term ‘backtracking’?
a) Lehmer
b) Donald
c) Ross
d) Ford
Answer: a
Explanation: D.H. Lehmer was the first person to coin the term backtracking. Initially, the backtracking facility was provided using SNOBOL.

13. ___________ enumerates a list of promising nodes that could be computed to give the possible solutions of a given problem.
a) Exhaustive search
b) Brute force
c) Backtracking
d) Divide and conquer
Answer: c
Explanation: Backtracking is a general algorithm that evaluates partially constructed candidates that can be developed further without violating problem constraints.

14. The problem of finding a subset of positive integers whose sum is equal to a given positive integer is called as?
a) n- queen problem
b) subset sum problem
c) knapsack problem
d) hamiltonian circuit problem
Answer: b
Explanation: The subset sum problem is the problem of finding a subset whose elements, when summed, equal a given integer; it can be solved using the backtracking algorithm.

15. Who published the eight queens puzzle?
a) Max Bezzel
b) Carl
c) Gauss
d) Friedrich
Answer: a
Explanation: The Eight Queen Puzzle was first published in 1848 by Max Friedrich William Bezzel, a German chess composer, who was the first person to publish the puzzle.

16. When was the Eight Queen Puzzle published?
a) 1846
b) 1847
c) 1848
d) 1849
Answer: c
Explanation: The first Eight Queen Puzzle was published by Max Friedrich William Bezzel, who was a German chess composer by profession. He published the puzzle in 1848.

17. Who published the first solution of the eight queens puzzle?
a) Franz Nauck
b) Max Bezzel
c) Carl
d) Friedrich
Answer: a
Explanation: The first solution to the Eight Queen Puzzle was given by Franz Nauck in 1850. While the first Eight Queen Puzzle was published by Max Friedrich William Bezzel, who was a German chess composer.

18. When was the first solution to Eight Queen Puzzle published?
a) 1850
b) 1847
c) 1848
d) 1849
Answer: a
Explanation: The first solution to the Eight Queen Puzzle was given by Franz Nauck in 1850. Max Friedrich William Bezzel, who was a German chess composer by profession published the puzzle in 1848.

19. Who published the extended version of eight queens puzzle?
a) Franz Nauck
b) Max Bezzel
c) Carl
d) Friedrich
Answer: a
Explanation: The first extended version to the Eight Queen Puzzle was given by Franz Nauck in 1850. Max Friedrich William Bezzel published the puzzle in 1848.

20. For how many queens was the extended version of Eight Queen Puzzle applicable for n*n squares?
a) 5
b) 6
c) 8
d) n
Answer: d
Explanation: The extended version given by Franz Nauck of the Eight Queen Puzzle was for n queens on n*n square chessboard. Earlier the puzzle was proposed with 8 queens on 8*8 board.


21. Who was the first person to find the solution of Eight Queen Puzzle using determinant?
a) Max Bezzel
b) Frank Nauck
c) Gunther
d) Friedrich
Answer: c
Explanation: S. Gunther was the first person to propose a solution to the eight queen puzzle using determinant. Max Friedrich William Bezzel published the puzzle and the first solution to the Eight Queen Puzzle was given by Franz Nauck.

22. Who proposed the depth first backtracking algorithm?
a) Edsger Dijkstra
b) Max Bezzel
c) Frank Nauck
d) Carl Friedrich
Answer: a
Explanation: In 1972, Edsger Dijkstra used the Eight Queen Puzzle to illustrate a depth first backtracking algorithm. Max Friedrich William Bezzel published the puzzle and the first solution to the Eight Queen Puzzle was given by Franz Nauck.

23. How many solutions are there for 8 queens on 8*8 board?
a) 12
b) 91
c) 92
d) 93
Answer: c
Explanation: For 8*8 chess board with 8 queens there are total of 92 solutions for the puzzle. There are total of 12 fundamental solutions to the eight queen puzzle.

24. Who publish the bitwise operation method to solve the eight queen puzzle?
a) Zongyan Qiu
b) Martin Richard
c) Max Bezzel
d) Frank Nauck
Answer: a
Explanation: The first person to publish the bitwise operation method to solve the eight queen puzzle was Zongyan Qiu. After him, it was published by Martin Richard.

25. How many fundamental solutions are there for the eight queen puzzle?
a) 92
b) 10
c) 11
d) 12
Answer: d
Explanation: There are total of 12 fundamental solutions to the eight queen puzzle after removing the symmetrical solutions due to rotation. For 8*8 chess board with 8 queens there are total of 92 solutions for the puzzle.


26. Is it possible to have no four queens in a straight line as part of one of the solutions to the eight queen puzzle?
a) True
b) False
Answer: b
Explanation: No three queens lie in a straight line in one of the fundamental solution of the eight queen puzzle.

27. How many fundamental solutions are the for 3 queens on a 3*3 board?
a) 1
b) 12
c) 3
d) 0
Answer: d
Explanation: There are zero solutions to the 3 queen puzzle on a 3*3 chess board, hence there are no fundamental solutions. For an 8*8 chess board with 8 queens there are a total of 12 fundamental solutions for the puzzle.

28. The six queen puzzle has a fewer solution than the five queen puzzle.
a) True
b) False
Answer: a
Explanation: There are total 4 solutions for the six queen puzzle and one fundamental solution while there are total of 10 solutions for 5 queen puzzle and 2 fundamental solutions.

29. Which ordered board is the highest enumerated board till now?
a) 25*25
b) 26*26
c) 27*27
d) 28*28
Answer: c
Explanation: The 27*27 board is the highest-order board enumerated so far, with a total of 234,907,967,154,122,528 solutions. It also has a total of 29,363,495,934,315,694 fundamental solutions.

30. In how many directions do queens attack each other?
a) 1
b) 2
c) 3
d) 4
Answer: c
Explanation: Queens attack each other in three directions- vertical, horizontal and diagonal.


31. Placing n-queens so that no two queens attack each other is called?
a) n-queen’s problem
b) 8-queen’s problem
c) Hamiltonian circuit problem
d) subset sum problem
Answer: a
Explanation: Placing n queens so that no two queens attack each other is n-queens problem. If n=8, it is called as 8-queens problem.

32. Where is the n-queens problem implemented?
a) carom
b) chess
c) ludo
d) cards
Answer: b
Explanation: N-queens problem occurs in chess. It is the problem of placing n- queens in a n*n chess board.

33. Not more than 2 queens can occur in an n-queens problem.
a) true
b) false
Answer: b
Explanation: Unlike a real chess game, the n-queens problem deals with placing n queens on the board, so more than 2 queens can occur.

34. In n-queen problem, how many values of n does not provide an optimal solution?
a) 1
b) 2
c) 3
d) 4
Answer: b
Explanation: The n-queen problem does not have a solution for only two values of n, namely n=2 and n=3.

35. Which of the following methods can be used to solve n-queen’s problem?
a) greedy algorithm
b) divide and conquer
c) iterative improvement
d) backtracking
Answer: d
Explanation: Of the following given approaches, n-queens problem can be solved using backtracking. It can also be solved using branch and bound.


36. Of the following given options, which one of the following is a correct option that provides an optimal solution for 4-queens problem?
a) (3,1,4,2)
b) (2,3,1,4)
c) (4,3,2,1)
d) (4,2,3,1)
Answer: a
Explanation: Of the following given options for optimal solutions of 4-queens problem, (3, 1, 4, 2) is the right option.
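For reference, a minimal backtracking sketch in C for the 4-queens case (illustrative names; not part of the original question). It places one queen per row, tries each column, and backtracks whenever a placement is attacked; running it prints the two solutions 2 4 1 3 and 3 1 4 2, the second being the option chosen above.

#include <stdio.h>
#include <stdlib.h>

#define N 4

static int col[N];  /* col[r] = column (0-based) of the queen placed in row r */

/* A queen in row r, column c is safe if no earlier row shares its column
   or either diagonal. */
static int safe(int r, int c)
{
    for (int i = 0; i < r; i++)
        if (col[i] == c || abs(col[i] - c) == r - i)
            return 0;
    return 1;
}

static void place(int r)
{
    if (r == N) {                      /* all rows filled: a complete solution */
        for (int i = 0; i < N; i++)
            printf("%d ", col[i] + 1); /* print 1-based, e.g. "3 1 4 2" */
        printf("\n");
        return;
    }
    for (int c = 0; c < N; c++) {
        if (safe(r, c)) {
            col[r] = c;
            place(r + 1);              /* backtracking happens automatically on return */
        }
    }
}

int main(void)
{
    place(0);   /* prints the two 4-queens solutions: 2 4 1 3 and 3 1 4 2 */
    return 0;
}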

37. How many possible solutions exist for an 8-queen problem?
a) 100
b) 98
c) 92
d) 88
Answer: c
Explanation: For an 8-queen problem, there are 92 possible combinations of optimal solutions.

38. How many possible solutions occur for a 10-queen problem?
a) 850
b) 742
c) 842
d) 724
Answer: d
Explanation: For a 10-queen problem, 724 possible combinations of optimal solutions are available.

39. If n=1, an imaginary solution for the problem exists.
a) true
b) false
Answer: b
Explanation: For n=1, the n-queen problem has a trivial and real solution: the single queen is simply placed on the only square of the board.

40. What is the domination number for 8-queen’s problem?
a) 8
b) 7
c) 6
d) 5
Answer: d
Explanation: The minimum number of queens needed to occupy every square in n-queens problem is called domination number. While n=8, the domination number is 5.


41. Of the following given options, which one of the following does not provide an optimal solution for the 8-queens problem?
a) (5,3,8,4,7,1,6,2)
b) (1,6,3,8,3,2,4,7)
c) (4,1,5,8,6,3,7,2)
d) (6,2,7,1,4,8,5,3)
Answer: b
Explanation: Among the given options for the 8-queens problem, (1,6,3,8,3,2,4,7) is not a valid solution: the column value 3 appears twice, so two queens would share a column.

42. Under what condition any set A will be a subset of B?
a) if all elements of set B are also present in set A
b) if all elements of set A are also present in set B
c) if A contains more elements than B
d) if B contains more elements than A
Answer: b
Explanation: Any set A will be called a subset of set B if all elements of set A are also present in set B. So in such a case set A will be a part of set B.

43. What is a subset sum problem?
a) finding a subset of a set that has sum of elements equal to a given number
b) checking for the presence of a subset that has sum of elements equal to a given number and printing true or false based on the result
c) finding the sum of elements present in a set
d) finding the sum of all the subsets of a set
Answer: b
Explanation: In the subset sum problem we check for the presence of a subset whose elements sum to a given number. If such a subset is present then we print true, otherwise false.

44. Which of the following is true about the time complexity of the recursive solution of the subset sum problem?
a) It has an exponential time complexity
b) It has a linear time complexity
c) It has a logarithmic time complexity
d) it has a time complexity of O(n2)
Answer: a
Explanation: The subset sum problem has both a recursive and a dynamic programming solution. The recursive solution has exponential time complexity because in the worst case it may have to check all subsets.

45. What is the worst case time complexity of dynamic programming solution of the subset sum problem(sum=given subset sum)?
a) O(n)
b) O(sum)
c) O(n2)
d) O(sum*n)
Answer: d
Explanation: The subset sum problem has both a recursive and a dynamic programming solution. The dynamic programming solution has a time complexity of O(n*sum) as it uses a nested loop with limits from 1 to n and 1 to sum respectively.


46. Subset sum problem is an example of NP-complete problem.
a) true
b) false
Answer: a
Explanation: Subset sum problem takes exponential time when we implement a recursive solution. Subset sum problem is known to be a part of NP complete problems.

47. Recursive solution of subset sum problem is faster than dynamic problem solution in terms of time complexity.
a) true
b) false
Answer: b
Explanation: The recursive solution to subset sum problem takes exponential time complexity whereas the dynamic programming solution takes polynomial time complexity. So dynamic programming solution is faster in terms of time complexity.

48. Which of the following is not true about subset sum problem?
a) the recursive solution has a time complexity of O(2^n)
b) there is no known solution that takes polynomial time
c) the recursive solution is slower than dynamic programming solution
d) the dynamic programming solution has a time complexity of O(n log n)
Answer: d
Explanation: Recursive solution of subset sum problem is slower than dynamic problem solution in terms of time complexity. Dynamic programming solution has a time complexity of O(n*sum).

49. Which of the following should be the base case for the recursive solution of subset sum problem?
a)

if(sum==0)
return true;
b)

if(sum==0)
return true;
if (n ==0 && sum!= 0)
return false;
c)

if (n ==0 && sum!= 0)
return false;
d)

if(sum<0)
return true;
if (n ==0 && sum!= 0)
return false;
Answer: b
Explanation: The base case conditions define where the recursion stops: it succeeds as soon as the remaining sum becomes 0, and it fails when no elements are left while the required sum is still non-zero.

50. What will be the output for the following code?
#include <stdio.h>
#include <stdbool.h>
bool func(int arr[], int n, int sum)
{
if (sum == 0)
return true;
if (n == 0 && sum != 0)
return false;
if (arr[n-1] > sum)
return func(arr, n-1, sum);

return func(arr, n-1, sum) || func(arr, n-1, sum-arr[n-1]);
}
int main()
{
int arr[] = {4,6, 12, 2};
int sum = 12;
int n = sizeof(arr)/sizeof(arr[0]);
if (func(arr, n, sum) == true)
printf("true");
else
printf("false");
return 0;
}
a) 12
b) 4 6 2
c) True
d) False
Answer: c
Explanation: The given code represents the recursive approach of solving the subset sum problem. The output for the code will be true if any subset is found to have sum equal to the desired sum, otherwise false will be printed.


51. What will be the output for the following code?
#include <stdio.h>
#include <stdbool.h>
bool func(int arr[], int n, int sum)
{
bool subarr[n+1][sum+1];
for (int i = 0; i <= n; i++)
subarr[i][0] = true;
for (int i = 1; i <= sum; i++)
subarr[0][i] = false;
for (int i = 1; i <= n; i++)
{
for (int j = 1; j <= sum; j++)
{
if(j<arr[i-1])
subarr[i][j] = subarr[i-1][j];
if (j >= arr[i-1])
subarr[i][j] = subarr[i-1][j] ||
subarr[i - 1][j-arr[i-1]];
}
}
return subarr[n][sum];
}

int main()
{
int arr[] = {3, 3, 4, 4, 7};
int sum = 5;
int n = sizeof(arr)/sizeof(arr[0]);
if (func(arr, n, sum) == true)
printf("true");
else
printf("false");
return 0;
}
a) true
b) false
c) 0
d) error in code
Answer: b
Explanation: The given code represents the dynamic programming approach of solving the subset sum problem. The output for the code will be true if any subset is found to have sum equal to the desired sum, otherwise false will be printed.

52. What will be the worst case time complexity for the following code?
#include <stdio.h>
#include <stdbool.h>
bool func(int arr[], int n, int sum)
{
if (sum == 0)
return true;
if (n == 0 && sum != 0)
return false;
if (arr[n-1] > sum)
return func(arr, n-1, sum);
return func(arr, n-1, sum) || func(arr, n-1, sum-arr[n-1]);
}
int main()
{
int arr[] = {4,6, 12, 2};
int sum = 12;
int n = sizeof(arr)/sizeof(arr[0]);
if (func(arr, n, sum) == true)
printf("true");
else
printf("false");
return 0;
}
a) O(n log n)
b) O(n^2)
c) O(2^n)
d) O(n^2 log n)
Answer: c
Explanation: The given code represents the recursive approach to the subset sum problem. It has an exponential time complexity as it may need to check all subsets in the worst case, which is O(2^n).

53. Branch and bound is a __________
a) problem solving technique
b) data structure
c) sorting algorithm
d) type of tree
Answer: a
Explanation: Branch and bound is a problem solving technique generally used for solving combinatorial optimization problems. Branch and bound helps in solving them faster.

53. Which of the following is not a branch and bound strategy to generate branches?
a) LIFO branch and bound
b) FIFO branch and bound
c) Lowest cost branch and bound
d) Highest cost branch and bound
Answer: d
Explanation: LIFO, FIFO and Lowest cost branch and bound are different strategies to generate branches. Lowest cost branch and bound helps us find the lowest cost path.

54. Which data structure is used for implementing a LIFO branch and bound strategy?
a) stack
b) queue
c) array
d) linked list
Answer: a
Explanation: Stack is the data structure used for implementing the LIFO branch and bound strategy. This leads to depth first search as every branch is explored until a leaf node is discovered.

55. Which data structure is used for implementing a FIFO branch and bound strategy?
a) stack
b) queue
c) array
d) linked list
Answer: b
Explanation: Queue is the data structure used for implementing the FIFO branch and bound strategy. This leads to breadth first search as every node at a given depth is explored before moving to nodes at greater depth.


56. Which data structure is most suitable for implementing best first branch and bound strategy?
a) stack
b) queue
c) priority queue
d) linked list
Answer: c
Explanation: Priority queue is the data structure used for implementing the best first branch and bound strategy. Dijkstra’s algorithm is an example of a best first search algorithm.

57. Which of the following branch and bound strategy leads to breadth first search?
a) LIFO branch and bound
b) FIFO branch and bound
c) Lowest cost branch and bound
d) Highest cost branch and bound
Answer: b
Explanation: LIFO, FIFO and Lowest cost branch and bound are different strategies to generate branches. FIFO branch and bound leads to breadth first search.

58. Which of the following branch and bound strategy leads to depth first search?
a) LIFO branch and bound
b) FIFO branch and bound
c) Lowest cost branch and bound
d) Highest cost branch and bound
Answer: a
Explanation: LIFO, FIFO and Lowest cost branch and bound are different strategies to generate branches. LIFO branch and bound leads to depth first search.

59. Both FIFO branch and bound strategy and backtracking leads to depth first search.
a) true
b) false
Answer: b
Explanation: FIFO branch and bound leads to breadth first search. Whereas backtracking leads to depth first search.

60. Both LIFO branch and bound strategy and backtracking leads to depth first search.
a) true
b) false
Answer: a
Explanation: Both backtracking and branch and bound are problem solving techniques. Both the LIFO branch and bound strategy and backtracking lead to depth first search.


61. Choose the correct statement from the following.
a) branch and bound is more efficient than backtracking
b) branch and bound is not suitable where a greedy algorithm is not applicable
c) branch and bound divides a problem into at least 2 new restricted sub problems
d) backtracking divides a problem into at least 2 new restricted sub problems
Answer: c
Explanation: Both backtracking as well as branch and bound are problem solving algorithms. Branch and bound is less efficient than backtracking. Branch and bound divides a problem into at least 2 new restricted sub problems.

62. Which of the following can traverse the state space tree only in DFS manner?
a) branch and bound
b) dynamic programming
c) greedy algorithm
d) backtracking
Answer: d
Explanation: Both backtracking as well as branch and bound are problem solving algorithms. Branch and bound can traverse in DFS as well as BFS manner whereas backtracking only traverses in DFS manner.

Module 6

1. What is a Rabin and Karp Algorithm?
a) String Matching Algorithm
b) Shortest Path Algorithm
c) Minimum spanning tree Algorithm
d) Approximation Algorithm
Answer: a
Explanation: The string matching algorithm proposed by Rabin and Karp generalizes to other algorithms and to two-dimensional pattern matching problems.

2. What is the pre-processing time of Rabin and Karp Algorithm?
a) Theta(m^2)
b) Theta(mlogn)
c) Theta(m)
d) Big-Oh(n)
Answer: c
Explanation: The for loop in the pre-processing algorithm runs for m(length of the pattern) times. Hence the pre-processing time is Theta(m).

3. Rabin Karp Algorithm makes use of elementary number theoretic notions.
a) True
b) False
Answer: a
Explanation: The Rabin Karp Algorithm makes use of elementary number-theoretic notions such as the equivalence of two numbers modulo a third number.

4. What is the basic formula applied in Rabin Karp Algorithm to get the computation time as Theta(m)?
a) Halving rule
b) Horner’s rule
c) Summation lemma
d) Cancellation lemma
Answer: b
Explanation: The pattern can be evaluated in time Theta(m) using Horner’s rule:
p = P[m] + 10(P[m-1] + 10(P[m-2] +…+ 10(P[2]+10P[1])…)).
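As a small illustration of Horner's rule in code (illustrative function name; the radix 10 and the modulus 13 follow the decimal examples used in this module), the fingerprint of a digit pattern can be computed in a single left-to-right pass, reducing modulo q at every step:

#include <stdio.h>
#include <string.h>

/* Horner's rule: fold the digits left to right, multiplying the running value
   by the radix (10) before adding the next digit, and reducing mod q at every
   step so that the value always fits in a machine word. */
static int horner_mod(const char *pattern, int q)
{
    int value = 0;
    size_t len = strlen(pattern);
    for (size_t i = 0; i < len; i++)
        value = (10 * value + (pattern[i] - '0')) % q;
    return value;
}

int main(void)
{
    printf("%d\n", horner_mod("31415", 13));   /* prints 7, the fingerprint that also appears in question 12 below */
    return 0;
}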

5. What is the worst case running time of Rabin Karp Algorithm?
a) Theta(n)
b) Theta(n-m)
c) Theta((n-m+1)m)
d) Theta(nlogm)
Answer: c
Explanation: The worst case running time of the Rabin Karp Algorithm is Theta((n-m+1)m). We write Theta(n-m+1) instead of Theta(n-m) because there are n-m+1 possible shifts of the pattern over the given text.

6. Rabin- Karp algorithm can be used for discovering plagiarism in a sentence.
a) True
b) False
Answer: a
Explanation: Since Rabin-Karp algorithm is a pattern detecting algorithm in a text or string, it can be used for detecting plagiarism in a sentence.

7. If n is the length of text(T) and m is the length of the pattern(P) identify the correct pre-processing algorithm. (where q is a suitable modulus to reduce the complexity)
p=0; t0=0;
a)

for i=1 to n
do t0=(dt0 + P[i])mod q
p=(dp+T[i])mod q
b)

for i=1 to n
do p=(dp + P[i])mod q
t0=(dt0+T[i])mod q
c)

for i=1 to m
do t0=(dp + P[i])mod q
p=(dt0+T[i])mod q
d)

for i=1 to m
do p=(dp + P[i])mod q
t0=(dt0+T[i])mod q
Answer: d
Explanation: The pre-processing algorithm runs m (the length of pattern) times. This algorithm is used to compute p as the value of P[1….m] mod q and t0 as the value of T[1….m]mod q.

8. If n is the length of text(T) and m is the length of the pattern(P) identify the correct matching algorithm.
a)

for s=0 to n
do if p=t0
then if P[1..m]=T[s+1..s+m]
then print “Pattern occurs with shift” s
b)

for s=0 to n-m
do if p=ts
then if P[1..m]=T[s+1..s+m]
then print “Pattern occurs with shift” s
c)

for s=0 to m
do if p=ts
then if P[1..m]=T[s+1..s+m]
then print “Pattern occurs with shift” s
d)

for s=0 to n-m
do if p!=ts
then if P[1..m]=T[s+1..s+m]
then print “Pattern occurs with shift” s

Answer: b
Explanation: The matching loop runs over the n-m+1 shifts s = 0 to n-m. The Rabin Karp algorithm explicitly verifies every shift whose fingerprint matches; if the required pattern matches the given text window, the algorithm prints that the pattern occurs at that shift.

9. What happens when the modulo value(q) is taken large?
a) Complexity increases
b) Spurious hits occur frequently
c) Cost of extra checking is low
d) Matching time increases
Answer: c
Explanation: If the modulo value(q) is large enough then the spurious hits occur infrequently enough that the cost of extra checking is low.

10. Given a pattern of length-5 window, find the suitable modulo value.
4 3 2 5 0
a) 13
b) 14
c) 12
d) 11
Answer: a
Explanation: The modulus q is typically chosen as a prime number large enough that the fingerprints spread out and spurious hits stay rare, while the arithmetic still fits in a machine word; among the given options, 13 is the largest such prime.


11. Given a pattern of length- 5 window, find the valid match in the given text.
Pattern: 2 1 9 3 6
Modulus: 21
Index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
Text: 9 2 7 2 1 8 3 0 5 7 1 2 1 2 1 9 3 6 2 3 9 7
a) 11-16
b) 3-8
c) 13-18
d) 15-20
Answer: c
Explanation: The pattern 2 1 9 3 6 occurs in the text starting from position 13 to 18. With the modulus 21 the value of the pattern is computed as 12. The same values are computed for each possible position of a 5 length window over the text, and the window at 13-18 is the one whose digits also match the pattern.

12. Given a pattern of length- 5 window, find the spurious hit in the given text string.
Pattern: 3 1 4 1 5
Modulus: 13
Index: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
Text: 2 3 5 9 0 2 3 1 4 1 5 2 6 7 3 9 9 2 1 3 9
a) 6-10
b) 12-16
c) 3-7
d) 13-17
Answer: d
Explanation: The substring in the range 13-17, 6 7 3 9 9, produces the same value 7 as the given pattern, but its digits do not match the pattern, hence it is a spurious hit.

13. If the expected number of valid shifts is small and modulus is larger than the length of pattern what is the matching time of Rabin Karp Algorithm?
a) Theta(m)
b) Big-Oh(n+m)
c) Theta(n-m)
d) Big-Oh(n)
Answer: b
Explanation: When the number of valid shifts (v) is Big-Oh(1) and q >= m, the matching time O(n) + O(m(v + n/q)) simplifies to O(n+m).

14. What is the basic principle in Rabin Karp algorithm?
a) Hashing
b) Sorting
c) Augmenting
d) Dynamic Programming
Answer: a
Explanation: The basic principle employed in Rabin Karp algorithm is hashing. In the given text every substring is converted to a hash value and compared with the hash value of the pattern.
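To make the hashing idea concrete, here is a minimal C sketch of Rabin-Karp with a rolling hash (illustrative names; the radix D = 10 and prime modulus Q = 13 are assumptions matching the decimal examples above). Every window whose fingerprint equals the pattern's is verified character by character, which filters out spurious hits.

#include <stdio.h>
#include <string.h>

#define D 10   /* radix: decimal digits, as in the examples above */
#define Q 13   /* a small prime modulus (illustrative) */

void rabin_karp(const char *text, const char *pat)
{
    int n = (int)strlen(text), m = (int)strlen(pat);
    int h = 1, p = 0, t = 0;

    for (int i = 0; i < m - 1; i++)        /* h = D^(m-1) mod Q */
        h = (h * D) % Q;
    for (int i = 0; i < m; i++) {          /* hash of pattern and of first window */
        p = (D * p + (pat[i]  - '0')) % Q;
        t = (D * t + (text[i] - '0')) % Q;
    }
    for (int s = 0; s <= n - m; s++) {
        if (p == t && strncmp(text + s, pat, m) == 0)   /* verify: rules out spurious hits */
            printf("Pattern occurs with shift %d\n", s);
        if (s < n - m) {                   /* roll the hash to the next window */
            t = (D * (t - (text[s] - '0') * h) + (text[s + m] - '0')) % Q;
            if (t < 0)
                t += Q;
        }
    }
}

int main(void)
{
    rabin_karp("2359023141526739921", "31415");   /* reports shift 6 */
    return 0;
}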

15. Who created the Rabin Karp Algorithm?
a) Joseph Rabin and Michael Karp
b) Michael Rabin and Joseph Karp
c) Richard Karp and Michael Rabin
d) Michael Karp and Richard Rabin
Answer: c
Explanation: Rabin Karp algorithm was invented by Richard Karp and Michael Rabin in the year 1987 for searching a pattern in the given string.


16. Which of the following is the fastest algorithm in string matching field?
a) Boyer-Moore’s algorithm
b) String matching algorithm
c) Quick search algorithm
d) Linear search algorithm
Answer: c
Explanation: Among the given options, the quick search algorithm is regarded as the fastest in practice in the string matching field, whereas the linear search algorithm simply searches for an element in an array of elements.

17. Which of the following algorithms formed the basis for the Quick search algorithm?
a) Boyer-Moore’s algorithm
b) Parallel string matching algorithm
c) Binary Search algorithm
d) Linear Search algorithm
Answer: a
Explanation: Quick search algorithm was originally formed to overcome the drawbacks of Boyer-Moore’s algorithm and also for increased speed and efficiency.

18. What is the time complexity of the Quick search algorithm?
a) O(n)
b) O(log n)
c) O(m+n)
d) O(mn)
Answer: c
Explanation: The time complexity of the Quick search algorithm was found to be O(m+n) and is proved to be faster than Boyer-Moore’s algorithm.

19. What character shift tables does quick search algorithm use?
a) good-character shift tables
b) bad-character shift tables
c) next-character shift tables
d) both good and bad character shift tables
Answer: b
Explanation: Quick search algorithm uses only bad character shift tables and it is one of the reasons for its increased speed than Boyer-Moore’s algorithm.
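A hedged sketch of how that bad-character table can be built and used (illustrative names; assumes a 256-character alphabet): for each pattern character the shift is the distance from its last occurrence to just past the current window, and characters absent from the pattern get the maximum shift m + 1.

#include <stdio.h>
#include <string.h>

#define ALPHABET 256

/* Quick search bad-character table: shift[c] tells how far the window may
   jump, driven by the text character just past the current window. */
static void build_shift(const char *pat, int m, int shift[ALPHABET])
{
    for (int c = 0; c < ALPHABET; c++)
        shift[c] = m + 1;                     /* character absent from pattern */
    for (int i = 0; i < m; i++)
        shift[(unsigned char)pat[i]] = m - i; /* later occurrences overwrite earlier ones */
}

void quick_search(const char *text, const char *pat)
{
    int n = (int)strlen(text), m = (int)strlen(pat);
    int shift[ALPHABET];
    build_shift(pat, m, shift);

    int s = 0;
    while (s <= n - m) {
        if (strncmp(text + s, pat, m) == 0)
            printf("Pattern occurs with shift %d\n", s);
        if (s + m >= n)
            break;                            /* no character past the window */
        s += shift[(unsigned char)text[s + m]];
    }
}

int main(void)
{
    quick_search("ABCDABCATRYCARCABCSRT", "CAT");   /* reports shift 6 */
    return 0;
}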

20. What is the space complexity of quick search algorithm?
a) O(n)
b) O(log n)
c) O(m+n)
d) O(mn)
Answer: a
Explanation: The space complexity of quick search algorithm is mathematically found to be O(n) where n represents the input size.


21. Quick search algorithm starts searching from the right most character to the left.
a) true
b) false
Answer: b
Explanation: Quick search algorithm starts searching from the left most character to the right and it uses only bad character shift tables.

22. What character shift tables does Boyer-Moore’s search algorithm use?
a) good-character shift tables
b) bad-character shift tables
c) next-character shift tables
d) both good and bad character shift tables
Answer: d
Explanation: Boyer-Moore’s search algorithm uses both good and bad character shift tables whereas quick search algorithm uses only bad character shift tables.

23. What is the worst case running time in searching phase of Boyer-Moore’s algorithm?
a) O(n)
b) O(log n)
c) O(m+n)
d) O(mn)
Answer: d
Explanation: If the pattern occurs in the text, the worst case running time of Boyer-Moore’s algorithm is found to be O(mn).

24. The searching phase in quick search algorithm has good practical behaviour.
a) true
b) false
Answer: a
Explanation: During the searching phase, the comparison between pattern and text characters can be done in any order. It has a quadratic worst case behaviour and good practical behaviour.

25. Given input string = “ABCDABCATRYCARCABCSRT” and pattern string = “CAT”. Find the first index of the pattern match using quick search algorithm.
a) 2
b) 6
c) 11
d) 14
Answer: b
Explanation: Using the quick search algorithm, the pattern is preprocessed into its shift table, the search proceeds over the text from the left most character, and the first occurrence of the pattern “CAT” is found at index 6.


26. What will be the output of the following code?

#include<bits/stdc++.h>
using namespace std;

void func(char* str2, char* str1)
{
int m = strlen(str2);
int n = strlen(str1);
for (int i = 0; i <= n - m; i++)
{
int j;

for (j = 0; j < m; j++)
if (str1[i + j] != str2[j])
break;

if (j == m)
cout << i << endl;
}
}

int main()
{
char str1[] = "1253234";
char str2[] = "323";
func(str2, str1);
return 0;
}
a) 1
b) 2
c) 3
d) 4
Answer: c
Explanation: The given code describes the naive method of finding a pattern in a string. The output will be 3, as the pattern “323” begins at index 3 of the text.

27. What will be the worst case time complexity of the following code?
#include<bits/stdc++.h>
using namespace std;

void func(char* str2, char* str1)
{
int m = strlen(str2);
int n = strlen(str1);
for (int i = 0; i <= n - m; i++)
{
int j;

for (j = 0; j < m; j++)
if (str1[i + j] != str2[j])
break;

if (j == m)
cout << i << endl;
}
}

int main()
{
char str1[] = "1253234";
char str2[] = "323";
func(str2, str1);
return 0;
}
a) O(n)
b) O(m)
c) O(m * n)
d) O(m + n)
Answer: c
Explanation: The given code describes the naive method of pattern searching. By observing the nested loop in the code we can say that the time complexity of the loop is O(m*n).

28. What will be the auxiliary space complexity of the following code?
#include<bits/stdc++.h>
using namespace std;

void func(char* str2, char* str1)
{
int m = strlen(str2);
int n = strlen(str1);
for (int i = 0; i <= n - m; i++)
{
int j;

for (j = 0; j < m; j++)
if (str1[i + j] != str2[j])
break;

if (j == m)
cout << i << endl;
}
}

int main()
{
char str1[] = "1253234";
char str2[] = "323";
func(str2, str1);
return 0;
}
a) O(n)
b) O(1)
c) O(log n)
d) O(m)
Answer: b
Explanation: The given code describes the naive method of pattern searching. Its auxiliary space requirement is O(1).

29. What is the worst case time complexity of KMP algorithm for pattern searching (m = length of text, n = length of pattern)?
a) O(n)
b) O(n*m)
c) O(m)
d) O(log n)
Answer: c
Explanation: KMP is an efficient pattern searching algorithm: preprocessing the pattern takes O(n) and scanning the text takes O(m), so the overall time is O(m + n), which is O(m) since the pattern is not longer than the text.
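For reference, a minimal C sketch of KMP (illustrative names, not tied to any particular question here): the prefix table lps[] records, for each prefix of the pattern, the length of its longest proper prefix that is also a suffix, which lets the text pointer move forward without ever backing up.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* lps[i] = length of the longest proper prefix of pat[0..i] that is also a
   suffix of it (the KMP failure function). */
static void build_lps(const char *pat, int n, int *lps)
{
    int len = 0;
    lps[0] = 0;
    for (int i = 1; i < n; ) {
        if (pat[i] == pat[len]) {
            lps[i++] = ++len;
        } else if (len != 0) {
            len = lps[len - 1];        /* fall back without advancing i */
        } else {
            lps[i++] = 0;
        }
    }
}

void kmp_search(const char *text, const char *pat)
{
    int m = (int)strlen(text), n = (int)strlen(pat);
    int *lps = malloc(n * sizeof *lps);
    build_lps(pat, n, lps);

    int i = 0, j = 0;                  /* i scans the text, j the pattern */
    while (i < m) {
        if (text[i] == pat[j]) {
            i++; j++;
            if (j == n) {
                printf("Pattern occurs with shift %d\n", i - j);
                j = lps[j - 1];        /* keep looking for further matches */
            }
        } else if (j != 0) {
            j = lps[j - 1];            /* reuse the prefix already matched */
        } else {
            i++;
        }
    }
    free(lps);
}

int main(void)
{
    kmp_search("ababcabcabababd", "ababd");   /* reports shift 10 */
    return 0;
}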

30. What will be the best case time complexity of the following code?
#include<bits/stdc++.h>
using namespace std;
void func(char* str2, char* str1)
{
int m = strlen(str2);
int n = strlen(str1);

for (int i = 0; i <= n - m; i++)
{
int j;

for (j = 0; j < m; j++)
if (str1[i + j] != str2[j])
break;

if (j == m)
cout << i << endl;
}
}

int main()
{
char str1[] = "1253234";
char str2[] = "323";
func(str2, str1);
return 0;
}
a) O(n)
b) O(m)
c) O(m * n)
d) O(m + n)
Answer: b
Explanation: The given code describes the naive method of pattern searching. The best case occurs when the first character of the pattern does not appear in the text at all: the inner loop then breaks after a single comparison at every alignment, so the running time is linear in the length of the text.


31. What is the time complexity of Z algorithm for pattern searching (m = length of text, n = length of pattern)?
a) O(n + m)
b) O(m)
c) O(n)
d) O(m * n)
Answer: a
Explanation: Z algorithm is an efficient pattern searching algorithm as it searches the pattern in linear time. It has a time complexity of O(m + n) where m is the length of text and n is the length of the pattern.
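A hedged sketch of the Z-array construction behind this linear bound (illustrative names): the array is built over pattern + separator + text, and every position whose Z value equals the pattern length marks an occurrence in the text.

#include <stdio.h>
#include <string.h>

/* z[i] = length of the longest substring starting at i that matches a prefix
   of s, computed in linear time by maintaining the rightmost match window [l, r). */
static void build_z(const char *s, int n, int *z)
{
    z[0] = n;
    int l = 0, r = 0;
    for (int i = 1; i < n; i++) {
        z[i] = 0;
        if (i < r)
            z[i] = (r - i < z[i - l]) ? r - i : z[i - l];
        while (i + z[i] < n && s[z[i]] == s[i + z[i]])
            z[i]++;
        if (i + z[i] > r) {
            l = i;
            r = i + z[i];
        }
    }
}

int main(void)
{
    /* Concatenate pattern, a separator occurring in neither string, and the text. */
    const char s[] = "aab$baabaa";        /* pattern "aab", text "baabaa" */
    int n = (int)strlen(s), z[sizeof s];
    build_z(s, n, z);
    for (int i = 0; i < n; i++)
        if (z[i] == 3)                    /* 3 = pattern length */
            printf("Pattern occurs at text index %d\n", i - 4);  /* 4 = pattern length + separator */
    return 0;
}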

32. What is the auxiliary space complexity of Z algorithm for pattern searching (m = length of text, n = length of pattern)?
a) O(n + m)
b) O(m)
c) O(n)
d) O(m * n)
Answer: b
Explanation: The Z algorithm is an efficient pattern searching algorithm as it searches the pattern in linear time. It uses an auxiliary space of O(m) for maintaining the Z array.

33. The naive pattern searching algorithm is an in place algorithm.
a) true
b) false
Answer: a
Explanation: The auxiliary space complexity required by naive pattern searching algorithm is O(1). So it qualifies as an in place algorithm.

34. Rabin Karp algorithm and naive pattern searching algorithm have the same worst case time complexity.
a) true
b) false
Answer: a
Explanation: The worst case time complexity of Rabin Karp algorithm is O(m*n) but it has a linear average case time complexity. So Rabin Karp and naive pattern searching algorithm have the same worst case time complexity.

