## [MCQs] Digital Signal & Image Processing

#### Module 04

1. The spatial coordinates of a digital image (x,y) are proportional to:

a) Position

b) Brightness

c) Contrast

d) Noise

Explanation: Brightness levels are distributed over the spatial area, so the spatial coordinates (x, y) are associated with the brightness levels at each position.

2. Among the following image processing techniques, which is fast, precise and flexible?

a) Optical

b) Digital

c) Electronic

d) Photographic

Explanation: Digital image processing is the most flexible technique, as it is fast, precise and reliable.

3. An image is considered to be a function of a(x,y), where a represents:

a) Height of image

b) Width of image

c) Amplitude of image

d) Resolution of image

Explanation: The image is a collection of dots, each with a definite intensity or amplitude.

4. What is a pixel?

a) A pixel is an element of a digital image

b) A pixel is an element of an analog image

c) A pixel is a cluster of a digital image

d) A pixel is a cluster of an analog image

Explanation: An image is a collection of individual points referred to as pixels; thus a pixel is an element of a digital image.

5. The range of values spanned by the gray scale is called:

a) Dynamic range

b) Band range

c) Peak range

d) Resolution range

Explanation: The range of values spanned by the gray scale of an image is called its dynamic range.

6. Which is a colour attribute that describes a pure colour?

a) Saturation

b) Hue

c) Brightness

d) Intensity

Explanation: Hue is the colour attribute that describes a pure colour.

7. Which gives a measure of the degree to which a pure colour is diluted by white light?

a) Saturation

b) Hue

c) Intensity

d) Brightness

Explanation: Saturation gives a measure of the degree to which a pure colour is diluted by white light.

8. Which of the following means assigning meaning to a recognized object?

a) Interpretation

b) Recognition

c) Acquisition

d) Segmentation

Explanation: Interpretation means assigning meaning to a recognized object.

9. A typical image size comparable in quality to a monochromatic TV image is:

a) 256 X 256

b) 512 X 512

c) 1920 X 1080

d) 1080 X 1080

Explanation: A monochrome TV image has a resolution of about 512 x 512.

10. The number of grey values are integer powers of:

a) 4

b) 2

c) 8

d) 1

Explanation: The number of gray values is an integer power of 2, since a monochromatic image has 2 base levels.

11. What is the first and foremost step in Image Processing?

a) Image restoration

b) Image enhancement

c) Image acquisition

d) Segmentation

Explanation: Image acquisition is the first process in image processing. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.

12. In which step of processing, the images are subdivided successively into smaller regions?

a) Image enhancement

b) Image acquisition

c) Segmentation

d) Wavelets

Explanation: Wavelets are the foundation for representing images in various degrees of resolution. Wavelets are particularly used for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.

13. What is the next step in image processing after compression?

a) Wavelets

b) Segmentation

c) Representation and description

d) Morphological processing

Explanation: Steps in image processing:

Image acquisition-> Image enhancement-> Image restoration-> Color image processing-> Wavelets and multi resolution processing-> Compression-> Morphological processing-> Segmentation-> Representation & description-> Object recognition.

14. What is the step that is performed before color image processing in image processing?

a) Wavelets and multi resolution processing

b) Image enhancement

c) Image restoration

d) Image acquisition

Explanation: Steps in image processing:

Image acquisition-> Image enhancement-> Image restoration-> Color image processing-> Wavelets and multi resolution processing-> Compression-> Morphological processing-> Segmentation-> Representation & description-> Object recognition.

15. How many number of steps are involved in image processing?

a) 10

b) 9

c) 11

d) 12

Explanation: Steps in image processing:

Image acquisition-> Image enhancement-> Image restoration-> Color image processing-> Wavelets and multi resolution processing-> Compression-> Morphological processing-> Segmentation-> Representation & description-> Object recognition.

16. What is the expanded form of JPEG?

a) Joint Photographic Expansion Group

b) Joint Photographic Experts Group

c) Joint Photographs Expansion Group

d) Joint Photographic Expanded Group

Explanation: Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.

17. Which of the following steps deals with tools for extracting image components that are useful in the representation and description of shape?

a) Segmentation

b) Representation & description

c) Compression

d) Morphological processing

Explanation: Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.

18. In which step of the processing, assigning a label (e.g., “vehicle”) to an object based on its descriptors is done?

a) Object recognition

b) Morphological processing

c) Segmentation

d) Representation & description

Explanation: Recognition is the process that assigns a label (e.g., “vehicle”) to an object based on its descriptors. We conclude our coverage of digital image processing with the development of methods for recognition of individual objects.

19. What role does segmentation play in image processing?

a) Deals with extracting attributes that result in some quantitative information of interest

b) Deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it

c) Deals with partitioning an image into its constituent parts or objects

d) Deals with property in which images are subdivided successively into smaller regions

Explanation: Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually.

20. What is the correct sequence of steps in image processing?

a) Image acquisition->Image enhancement->Image restoration->Color image processing->Compression->Wavelets and multi resolution processing->Morphological processing->Segmentation->Representation & description->Object recognition

b) Image acquisition->Image enhancement->Image restoration->Color image processing->Wavelets and multi resolution processing->Compression->Morphological processing->Segmentation->Representation & description->Object recognition

c) Image acquisition->Image enhancement->Color image processing->Image restoration->Wavelets and multi resolution processing->Compression->Morphological processing->Segmentation->Representation & description->Object recognition

d) Image acquisition->Image enhancement->Image restoration->Color image processing->Wavelets and multi resolution processing->Compression->Morphological processing->Representation & description->Segmentation->Object recognition

Explanation: Steps in image processing:

Image acquisition-> Image enhancement->Image restoration->Color image processing->Wavelets and multi resolution processing->Compression->Morphological processing->Segmentation->Representation & description->Object recognition

21. To convert continuous sensed data into digital form, which of the following is required?

a) Sampling

b) Quantization

c) Both Sampling and Quantization

d) Neither Sampling nor Quantization

Explanation: The output of most sensors is a continuous waveform, and the amplitude and spatial behaviour of such a waveform are related to the physical phenomenon being sensed.

22. To convert a continuous image f(x, y) to digital form, we have to sample the function in __________

a) Coordinates

b) Amplitude

c) All of the mentioned

d) None of the mentioned

Explanation: An image may be continuous in the x- and y-coordinates or in amplitude, or in both.

23. For a continuous image f(x, y), how is sampling defined?

a) Digitizing the coordinate values

b) Digitizing the amplitude values

c) All of the mentioned

d) None of the mentioned

Explanation: Sampling is the method of digitizing the coordinate values of the image.

24. For a continuous image f(x, y), Quantization is defined as

a) Digitizing the coordinate values

b) Digitizing the amplitude values

c) All of the mentioned

d) None of the mentioned

Explanation: Quantization is the method of digitizing the amplitude values of the image.

25. Validate the statement:

“For a given image in one dimension given by the function f(x, y), to sample the function we take equally spaced samples, superimposed on the function, along a horizontal line. However, the sample values still span (vertically) a continuous range of gray-level values. So, to convert the given function into a digital function, the gray-level values must be divided into various discrete levels.”

a) True

b) False

Explanation: A digital function requires both sampling and quantization of the one-dimensional image function.
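The statement above can be sketched numerically. The following is a minimal illustration using a made-up sinusoidal scan line and L = 8 levels; the signal and constants are our own, not from the question bank:

```python
import numpy as np

# Hypothetical "continuous" 1-D signal standing in for one scan line of f(x, y).
x = np.linspace(0.0, 1.0, 1000)
f = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * x)   # amplitudes span [0, 1] continuously

# Sampling: keep equally spaced samples along the line (digitize the coordinates).
samples = f[::100]                           # 10 equally spaced samples

# Quantization: map each sampled amplitude to one of L discrete gray levels.
L = 8
levels = np.round(samples * (L - 1)).astype(int)

print(levels)                                # integers in the interval [0, L-1]
```

Sampling alone still leaves the amplitudes continuous; only after the rounding step do both coordinates and values become discrete.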

26. How is sampling done when an image is generated by a single sensing element combined with mechanical motion?

a) The number of sensors in the strip defines the sampling limitations in one direction and mechanical motion in the other direction

b) The number of sensors in the sensing array establishes the limits of sampling in both directions.

c) The number of mechanical increments when the sensor is activated to collect data.

d) None of the mentioned.

Explanation: When an image is generated by a single sensing element along with mechanical motion, the output data is quantized by dividing the gray-level scale into many discrete levels. Sampling, however, is done by selecting the number of individual mechanical increments at which the sensor is activated to collect data.

27. How is sampling accomplished when a sensing strip is used for image acquisition?

a) The number of sensors in the strip establishes the sampling limitations in one image direction and mechanical motion in the other direction

b) The number of sensors in the sensing array establishes the limits of sampling in both directions

c) The number of mechanical increments when the sensor is activated to collect data

d) None of the mentioned

Explanation: When a sensing strip is used, the number of sensors in the strip defines the sampling limitations in one direction and mechanical motion in the other direction.

28. How is sampling accomplished when a sensing array is used for image acquisition?

a) The number of sensors in the strip establishes the sampling limitations in one image direction and mechanical motion in the other direction

b) The number of sensors in the sensing array defines the limits of sampling in both directions

c) The number of mechanical increments at which we activate the sensor to collect data

d) None of the mentioned

Explanation: When we use a sensing array for image acquisition, there is no motion, so only the number of sensors in the array defines the limits of sampling in both directions, and the output of the sensor is quantized by dividing the gray-level scale into many discrete levels.

29. The quality of a digital image is well determined by ___________

a) The number of samples

b) The discrete gray levels

c) All of the mentioned

d) None of the mentioned

Explanation: The quality of a digital image is determined mostly by the number of samples and the number of discrete gray levels used in sampling and quantization.

30. A continuous image is digitized at _______ points.

a) random

b) vertex

c) contour

d) sampling

Explanation: The sampling points are ordered in the plane and their relation is called a Grid.

31. The transition between continuous values of the image function and its digital equivalent is called ______________

a) Quantization

b) Sampling

c) Rasterization

d) None of the Mentioned

Explanation: The transition between continuous values of the image function and its digital equivalent is called quantization.

32. Images quantised with insufficient brightness levels will lead to the occurrence of ____________

a) Pixillation

b) Blurring

c) False Contours

d) None of the Mentioned

Explanation: This effect arises when the number of brightness levels is lower than that which the human eye can distinguish.

33. The smallest discernible change in intensity level is called ____________

a) Intensity Resolution

b) Contour

c) Saturation

d) Contrast

Explanation: The smallest discernible change in intensity level is called the intensity resolution; it is commonly measured by the number of bits used to quantise intensity.

34. What is the tool used in tasks such as zooming, shrinking, rotating, etc.?

a) Sampling

b) Interpolation

c) Filters

d) None of the Mentioned

Explanation: Interpolation is the basic tool used for zooming, shrinking, rotating, etc.

35. The type of interpolation where each new location is assigned the intensity of its nearest pixel is ___________

a) bicubic interpolation

b) cubic interpolation

c) bilinear interpolation

d) nearest neighbour interpolation

Explanation: It is called nearest neighbour interpolation because each new location is assigned the intensity of its nearest neighbouring pixel.

36. The type of interpolation where the intensities of the FOUR neighbouring pixels are used to obtain the intensity at a new location is called ___________

a) cubic interpolation

b) nearest neighbour interpolation

c) bilinear interpolation

d) bicubic interpolation

Explanation: In bilinear interpolation the four nearest neighbouring pixels are used to estimate the intensity at a new location.
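As a sketch of the simpler of the two schemes, nearest neighbour interpolation can be implemented by integer index replication. The function name and array values below are our own, chosen only for illustration:

```python
import numpy as np

def nearest_neighbour_zoom(img, factor):
    # Each new location takes the intensity of its nearest original pixel.
    rows, cols = img.shape
    r_idx = np.arange(rows * factor) // factor   # map new row -> source row
    c_idx = np.arange(cols * factor) // factor   # map new col -> source col
    return img[np.ix_(r_idx, c_idx)]

img = np.array([[10, 20],
                [30, 40]])
zoomed = nearest_neighbour_zoom(img, 2)
print(zoomed)   # each original pixel becomes a 2x2 block of the same value
```

Bilinear interpolation would instead weight the four surrounding pixels by their distances, trading the blocky look for smoother transitions.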

37. The dynamic range of an imaging system is a ratio whose upper limit is determined by:

a) Saturation

b) Noise

c) Brightness

d) Contrast

Explanation: Saturation is taken as the numerator of the dynamic range ratio.

38. For the dynamic range ratio, the lower limit is determined by:

a) Saturation

b) Brightness

c) Noise

d) Contrast

Explanation: Noise is taken as the denominator of the dynamic range ratio.

39. Quantitatively, spatial resolution cannot be represented in which of the following ways?

a) line pairs

b) pixels

c) dots

d) none of the Mentioned

Explanation: All the options can be used to represent spatial resolution.

40. Assume that an image f(x, y) is sampled so that the result has M rows and N columns. If the values of the coordinates at the origin are (x, y) = (0, 0), then the notation (0, 1) is used to signify:

a) Second sample along first row

b) First sample along second row

c) First sample along first row

d) Second sample along second row

Explanation: The values of the coordinates at the origin are (x, y) = (0, 0). The next coordinate values (second sample) along the first row of the image are represented as (x, y) = (0, 1).
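The indexing convention can be checked directly with an array; the sample values here are arbitrary placeholders:

```python
import numpy as np

# A sampled image with M = 2 rows and N = 3 columns; the values are arbitrary.
f = np.array([[11, 12, 13],
              [21, 22, 23]])

first_sample  = f[0, 0]   # (x, y) = (0, 0): origin, first sample along the first row
second_sample = f[0, 1]   # (x, y) = (0, 1): SECOND sample along the FIRST row
print(first_sample, second_sample)
```

Note that the first index walks down the rows and the second walks along a row, matching the (row, column) convention used in the question.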

41. The resulting image of sampling and quantization is considered a matrix of real numbers. By what name(s) is an element of this matrix array called? __________

a) Image element or Picture element

b) Pixel or Pel

c) All of the mentioned

d) None of the mentioned

Explanation: Sampling and quantization of an image f(x, y) forms a matrix of real numbers, and each element of this matrix array is commonly known as an image element, picture element, pixel or pel.

42. Let Z be the set of integers and R the set of real numbers. The sampling process may be viewed as partitioning the x-y plane into a grid, with the central coordinates of each grid cell being from the Cartesian product Z^2, that is, the set of all ordered pairs (zi, zj) with zi and zj being integers from Z. Then, f(x, y) is said to be a digital image if:

a) (x, y) are integers from Z^2 and f is a function that assigns a gray-level value (from Z) to each distinct pair of coordinates (x, y)

b) (x, y) are integers from R^2 and f is a function that assigns a gray-level value (from R) to each distinct pair of coordinates (x, y)

c) (x, y) are integers from R^2 and f is a function that assigns a gray-level value (from Z) to each distinct pair of coordinates (x, y)

d) (x, y) are integers from Z^2 and f is a function that assigns a gray-level value (from R) to each distinct pair of coordinates (x, y)

Explanation: In the given condition, f(x, y) is a digital image if (x, y) are integers from Z^2 and f is a function that assigns a gray-level value (that is, a real number from the set R) to each distinct coordinate pair (x, y).

43. Let Z be the set of integers and R the set of real numbers. The sampling process may be viewed as partitioning the x-y plane into a grid, with the central coordinates of each grid cell being from the Cartesian product Z^2, that is, the set of all ordered pairs (zi, zj) with zi and zj being integers from Z. Then, f(x, y) is a digital image if (x, y) are integers from Z^2 and f is a function that assigns a gray-level value (that is, a real number from the set R) to each distinct coordinate pair (x, y). What happens to the digital image if the gray levels also are integers?

a) The Digital image then becomes a 2-D function whose coordinates and amplitude values are integers

b) The Digital image then becomes a 1-D function whose coordinates and amplitude values are integers

c) The gray level can never be integer

d) None of the mentioned

Explanation: In the quantization process, if the gray levels also are integers the digital image becomes a 2-D function whose coordinates and amplitude values are integers.

44. The digitization process, i.e. the digital image with M rows and N columns, requires decisions about the values for M, N, and for the number, L, of gray levels allowed for each pixel. The values M and N have to be:

a) M and N have to be positive integer

b) M and N have to be negative integer

c) M have to be negative and N have to be positive integer

d) M have to be positive and N have to be negative integer

Explanation: The digitization process requires decisions about the values for M, N, and for the number, L, of gray levels. There are no requirements on M and N other than that they have to be positive integers.

45. The digitization process, i.e. the digital image with M rows and N columns, requires decisions about the values for M, N, and for the number, L, of gray levels. There are no requirements on M and N other than that they have to be positive integers. However, the number of gray levels typically is:

a) An integer power of 2, i.e. L = 2^k

b) A real power of 2, i.e. L = 2^k

c) Two times the integer value, i.e. L = 2k

d) None of the mentioned

Explanation: Due to processing, storage and sampling-hardware considerations, the number of gray levels typically is an integer power of 2, i.e. L = 2^k.

46. The digitization process, i.e. the digital image with M rows and N columns, requires decisions about the values for M, N, and for the number, L, of gray levels, an integer power of 2 (L = 2^k), allowed for each pixel. If we assume that the discrete levels are equally spaced and that they are integers, then they lie in the interval __________, and the range of values spanned by the gray scale is sometimes called the ________ of an image.

a) [0, L – 1] and static range respectively

b) [0, L / 2] and dynamic range respectively

c) [0, L / 2] and static range respectively

d) [0, L – 1] and dynamic range respectively

Explanation: In the digitization process, M and N have to be positive integers and the number, L, of discrete gray levels is typically an integer power of 2 for each pixel. If we assume that the discrete levels are equally spaced and that they are integers, then they lie in the interval [0, L-1]; the range of values spanned by the gray scale is sometimes called the dynamic range of an image.

47. After the digitization process, a digital image has M rows and N columns (positive integers) and L = 2^k gray levels for each pixel. Then the number, b, of bits required to store the digitized image is:

a) b=M*N*k

b) b=M*N*L

c) b=M*L*k

d) b=L*N*k

Explanation: For a digital image of M rows, N columns and L = 2^k gray levels per pixel, the number, b, of bits required to store the digitized image is b=M*N*k.

48. An image whose gray levels span a significant portion of the gray scale has __________ dynamic range, while an image with a dull, washed-out gray look has __________ dynamic range.

a) Low and High respectively

b) High and Low respectively

c) Both have High dynamic range, irrespective of gray levels span significance on gray scale

d) Both have Low dynamic range, irrespective of gray levels span significance on gray scale

Explanation: An image whose gray levels span a large portion of the gray scale has high dynamic range, while one with a dull, washed-out gray look has low dynamic range.

49. Validate the statement: “When in an image an appreciable number of pixels exhibit high dynamic range, the image will have high contrast.”

a) True

b) False

Explanation: If an appreciable number of pixels in an image exhibit the high dynamic range property, the image will have high contrast.

50. In a digital image of M rows, N columns and L discrete gray levels, calculate the bits required to store the digitized image for M = N = 32 and L = 16.

a) 16384

b) 4096

c) 8192

d) 512

Explanation: For a digital image of M rows, N columns and L = 2^k gray levels per pixel, the number, b, of bits required to store the digitized image is b=M*N*k. For L=16, k=4, so b = 32 * 32 * 4 = 4096.
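The b = M*N*k calculation can be reproduced with a small helper; the function name is our own, not from the source:

```python
from math import log2

def bits_to_store(M, N, L):
    # b = M * N * k, where L = 2**k is the number of gray levels.
    k = int(log2(L))
    return M * N * k

print(bits_to_store(32, 32, 16))   # k = 4, so b = 32 * 32 * 4 = 4096
```

The same helper gives, for example, 2,097,152 bits for a 512 x 512 image with 256 gray levels (k = 8).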

#### Module 05

1. Using gray-level transformation, the basic function linearity deals with which of the following transformation?

a) log and inverse-log transformations

b) negative and identity transformations

c) nth and nth root transformations

d) All of the mentioned

Explanation: For image enhancement, gray-level transformations provide three basic types of function:

Linear, for negative and identity transformations,

Logarithmic, for log and inverse-log transformations, and

Power-law, for nth power and nth root transformations.

2. Using gray-level transformation, the basic function Logarithmic deals with which of the following transformation?

a) Log and inverse-log transformations

b) Negative and identity transformations

c) nth and nth root transformations

d) All of the mentioned

Explanation: For image enhancement, gray-level transformations provide three basic types of function:

Linear, for negative and identity transformations,

Logarithmic, for log and inverse-log transformations, and

Power-law, for nth power and nth root transformations.

3. Using gray-level transformation, the basic function power-law deals with which of the following transformation?

a) log and inverse-log transformations

b) negative and identity transformations

c) nth and nth root transformations

d) all of the mentioned

Explanation: For image enhancement, gray-level transformations provide three basic types of function:

Linear, for negative and identity transformations,

Logarithmic, for log and inverse-log transformations, and

Power-law, for nth power and nth root transformations.

4. If r is the gray level of the image before processing and s after processing, then which expression defines the negative transformation for gray levels in the range [0, L-1]?

a) s = L – 1 – r

b) s = crᵞ, c and ᵞ are positive constants

c) s = c log (1 + r), c is a constant and r ≥ 0

d) none of the mentioned

Explanation: The expression for negative transformation is given as: s = L – 1 – r.

5. If r is the gray level of the image before processing and s after processing, then which expression defines the log transformation for gray levels in the range [0, L-1]?

a) s = L – 1 – r

b) s = crᵞ, c and ᵞ are positive constants

c) s = c log (1 + r), c is a constant and r ≥ 0

d) none of the mentioned

Explanation: The expression for log transformation is given as: s = c log (1 + r), c is a constant and r ≥ 0.

6. If r is the gray level of the image before processing and s after processing, then which expression defines the power-law transformation for gray levels in the range [0, L-1]?

a) s = L – 1 – r

b) s = crᵞ, c and ᵞ are positive constants

c) s = c log (1 + r), c is a constant and r ≥ 0

d) none of the mentioned

Explanation: The expression for power-law transformation is given as: s = crᵞ, c and ᵞ are positive constants.
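The three expressions from questions 4-6 can be sketched together over a full gray scale. The scaling constants chosen here (so that each output also spans [0, L-1]) are one common convention, not mandated by the source:

```python
import numpy as np

L = 256                                   # number of gray levels, r in [0, L-1]
r = np.arange(L, dtype=float)

negative = (L - 1) - r                    # s = L - 1 - r
c = (L - 1) / np.log(L)                   # scale chosen so s spans [0, L-1]
log_t = c * np.log(1 + r)                 # s = c log(1 + r)
gamma = 0.5
power = (L - 1) * (r / (L - 1)) ** gamma  # s = c r^gamma with normalized r

print(negative[0], round(log_t[-1]), power[-1])
```

The negative maps 0 to 255 and 255 to 0; the log and power-law curves both expand the dark end of the scale, the log curve more aggressively.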

7. Which of the following transformations is particularly well suited for enhancing an image with white and gray detail embedded in dark regions, especially when black areas dominate the image?

a) Log transformations

b) Power-law transformations

c) Negative transformations

d) None of the mentioned

Explanation: The negative transformation reverses the intensity levels in the image and produces an equivalent photographic negative, so it is well suited to the given condition.

8. Which of the following transformations expands the value of dark pixels while the higher-level values are being compressed?

a) Log transformations

b) Inverse-log transformations

c) Negative transformations

d) None of the mentioned

Explanation: The log transformation maps a narrow range of low gray-level values in the input image into a wider range of gray levels in the output image, performing the transformation described above. The inverse-log transformation does the opposite.

9. Although power-law transformations are considered more versatile than log transformations for compressing gray levels in an image, how are log transformations advantageous over power-law transformations?

a) The log transformation compresses the dynamic range of images

b) The log transformations reverses the intensity levels in the images

c) All of the mentioned

d) None of the mentioned

Explanation: For compressing gray levels in an image, the power-law transformation is more versatile than the log transformation, but the log transformation has the important characteristic of compressing the dynamic range of images with large variations in pixel values.

10. For a typical Fourier spectrum with values ranging from 0 to 10^6, which of the following transformations is better to apply?

a) Log transformations

b) Power-law transformations

c) Negative transformations

d) None of the mentioned

Explanation: The log transformation compresses the dynamic range of images, so the given range turns into roughly 0 to 6, which is easily displayable on an 8-bit display.
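Assuming a base-10 logarithm with c = 1, the compression can be verified on a few hypothetical spectrum values (the sample magnitudes are our own):

```python
import numpy as np

# Hypothetical Fourier-spectrum magnitudes spanning 0 to 1e6.
spectrum = np.array([0.0, 10.0, 1.0e3, 1.0e6])
compressed = np.log10(1 + spectrum)      # s = c log(1 + r), here with c = 1

print(compressed)                        # roughly [0, 1.04, 3.0, 6.0]
```

Six orders of magnitude collapse into a span of about 6, which a subsequent linear rescale can map onto an 8-bit display.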

11. The power-law transformation is given as s = crᵞ, where c and ᵞ are positive constants, and r is the gray level of the image before processing and s after. For what values of c and ᵞ does the power-law transformation become the identity transformation?

a) c = 1 and ᵞ < 1

b) c = 1 and ᵞ > 1

c) c = -1 and ᵞ = 0

d) c = ᵞ = 1

Explanation: For c = ᵞ = 1 the power-law transformation s = crᵞ becomes s = r, which is the identity transformation.

12. What is gamma correction?

a) A process to remove power-law transformation response phenomena

b) A process to remove log transformation response phenomena

c) A process to correct log transformation response phenomena

d) A process to correct power-law transformation response phenomena

Explanation: The exponent used in the power-law transformation is called gamma, and gamma correction is the process of correcting the power-law response phenomena of devices. Using the value of ᵞ, either ᵞ < 1 or ᵞ > 1, various responses are obtained.

13. Which of the following transformations is used in cathode ray tube (CRT) devices?

a) Log transformations

b) Power-law transformations

c) Negative transformations

d) None of the mentioned

Explanation: CRT devices have a power-function relationship between voltage input and intensity output. In such devices the output appears darker than the input, so gamma correction is a must.

14. Power-law transformation is generally used in which of the following device(s)?

a) Cathode ray tube

b) Scanners and printers

c) All of the mentioned

d) None of the mentioned

Explanation: All the mentioned devices use gamma correction, so power-law transformation is generally used in these cases.

15. The power-law transformation is given as: s = crᵞ, c and ᵞ are positive constants, and r is the gray-level of image before processing and s after processing. What happens if we increase the gamma value from 0.3 to 0.7?

a) The contrast increases and the detail increases

b) The contrast decreases and the detail decreases

c) The contrast increases and the detail decreases

d) The contrast decreases and the detail increases

Explanation: In power-law transformation, as gamma decreases the image detail increases but the contrast reduces; hence increasing gamma from 0.3 to 0.7 increases the contrast and decreases the detail.

16. If h(rk) = nk, where rk is the kth gray level and nk the number of pixels with gray level rk, is a histogram over the gray-level range [0, L – 1], then how can we normalize the histogram?

a) If each value of histogram is added by total number of pixels in image, say n, p(rk)=nk+n

b) If each value of histogram is subtracted by total number of pixels in image, say n, p(rk)=nk-n

c) If each value of histogram is multiplied by total number of pixels in image, say n, p(rk)=nk * n

d) If each value of histogram is divided by total number of pixels in image, say n, p(rk)=nk / n

Explanation: To normalize a histogram, each of its values is divided by the total number of pixels in the image, say n: p(rk) = nk / n.
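The normalization p(rk) = nk / n can be illustrated on a small hypothetical image (the pixel values below are our own):

```python
import numpy as np

# Hypothetical 4x4 image with L = 4 gray levels, values in [0, 3].
img = np.array([[0, 1, 1, 2],
                [1, 2, 2, 3],
                [0, 1, 2, 3],
                [3, 3, 2, 1]])

n = img.size                                 # total number of pixels, n = 16
nk = np.bincount(img.ravel(), minlength=4)   # h(rk) = nk for each gray level rk
p = nk / n                                   # normalized histogram p(rk) = nk / n

print(nk.tolist(), p.sum())                  # counts per level; probabilities sum to 1
```

Because each p(rk) is a count divided by the total, the normalized values behave as probabilities of occurrence and always sum to 1.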

17. What is the sum of all components of a normalized histogram?

a) 1

b) -1

c) 0

d) None of the mentioned

Explanation: For a normalized histogram, p(rk) = nk / n, where n is the total number of pixels in the image, rk is the kth gray level and nk the number of pixels with gray level rk. Here, p(rk) gives the probability of occurrence of rk, so the components sum to 1.

18. A low contrast image will have what kind of histogram when, the histogram, h(rk) = nk, rk the kth gray level and nk total pixels with gray level rk, is plotted nk versus rk?

a) A histogram concentrated on the dark side of the gray scale

b) A histogram whose components are biased toward the high side of the gray scale

c) A histogram that is narrow and centered toward the middle of the gray scale

d) A histogram that covers a wide range of the gray scale with an approximately uniform distribution of pixels

Explanation: The histogram plot is nk versus rk. The histogram of a low contrast image is narrow and centered toward the middle of the gray scale. A dark image has a histogram concentrated on the dark side of the gray scale. A bright image has a histogram whose components are biased toward the high side of the gray scale. A high contrast image has a histogram that covers a wide range of the gray scale with an approximately uniform distribution of pixels.

19. A bright image will have what kind of histogram, when the histogram, h(rk) = nk, rk the kth gray level and nk total pixels with gray level rk, is plotted nk versus rk?

a) A histogram concentrated on the dark side of the gray scale

b) A histogram whose components are biased toward the high side of the gray scale

c) A histogram that is narrow and centered toward the middle of the gray scale

d) A histogram that covers a wide range of the gray scale with an approximately uniform distribution of pixels

Explanation: The histogram plot is nk versus rk. The histogram of a low contrast image is narrow and centered toward the middle of the gray scale. A dark image has a histogram concentrated on the dark side of the gray scale. A bright image has a histogram whose components are biased toward the high side of the gray scale. A high contrast image has a histogram that covers a wide range of the gray scale with an approximately uniform distribution of pixels.

20. When the histogram h(rk) = nk (rk the kth gray level, nk the number of pixels with gray level rk) is plotted as nk versus rk, what kind of histogram do a high contrast image and a dark image have, respectively?

I. A histogram concentrated on the dark side of the gray scale.

II. A histogram whose components are biased toward the high side of the gray scale.

III. A histogram that is narrow and centered toward the middle of the gray scale.

IV. A histogram that covers a wide range of the gray scale with an approximately uniform pixel distribution.

a) I and II, respectively

b) III and II, respectively

c) II and IV, respectively

d) IV and I, respectively

Explanation: The histogram plot is nk versus rk. A high contrast image has a histogram covering a wide range of the gray scale with an approximately uniform pixel distribution (IV), while a dark image has a histogram concentrated on the dark side of the gray scale (I).

21. The transformation s = T(r) produces a gray level s for each pixel value r of the input image. If T(r) is single valued in the interval 0 ≤ r ≤ 1, what does this signify?

a) It guarantees the existence of inverse transformation

b) It is needed to restrict producing of some inverted gray levels in output

c) It guarantees that the output gray level and the input gray level will be in same range

d) All of the mentioned

Explanation: T(r) being single valued in the interval 0 ≤ r ≤ 1 guarantees the existence of the inverse transformation.

22. The transformation s = T(r) produces a gray level s for each pixel value r of the input image. If T(r) is monotonically increasing in the interval 0 ≤ r ≤ 1, what does this signify?

a) It guarantees the existence of inverse transformation

b) It is needed to restrict producing of some inverted gray levels in output

c) It guarantees that the output gray level and the input gray level will be in same range

d) All of the mentioned

Explanation: A T(r) that is not monotonically increasing could produce an output containing at least a section of inverted intensity range. Requiring T(r) to be monotonically increasing in the interval 0 ≤ r ≤ 1 prevents inverted gray levels from appearing in the output.

23. The transformation s = T(r) produces a gray level s for each pixel value r of the input image. If T(r) satisfies 0 ≤ T(r) ≤ 1 in the interval 0 ≤ r ≤ 1, what does this signify?

a) It guarantees the existence of inverse transformation

b) It is needed to restrict producing of some inverted gray levels in output

c) It guarantees that the output gray level and the input gray level will be in same range

d) All of the mentioned

Explanation: If 0 ≤ T(r) ≤ 1 in the interval 0 ≤ r ≤ 1, then the output gray levels lie in the same range as the input gray levels.

24. What is the full form of PDF, a fundamental descriptor of random variables, i.e., gray values in an image?

a) Pixel distribution function

b) Portable document format

c) Pel deriving function

d) Probability density function

Explanation: For a random variable, the probability density function (PDF) is one of the most fundamental descriptors.

25. What is the full form of CDF?

a) Cumulative density function

b) Contour derived function

c) Cumulative distribution function

d) None of the mentioned

Explanation: The CDF of the random variable r (a gray value of the input image) is the cumulative distribution function.
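Tying the PDF and CDF together: histogram equalization maps each gray level through s = (L-1)·CDF(r). A minimal sketch, assuming NumPy and an 8-bit image (the function name `equalize` is illustrative, not a library API):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization: s = (L - 1) * CDF(r), applied per pixel."""
    pdf = np.bincount(img.ravel(), minlength=levels) / img.size
    cdf = np.cumsum(pdf)                  # monotonically increasing, in [0, 1]
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[img]                       # look up the new level for each pixel

# A low-contrast image squeezed into [100, 103] spreads toward [0, 255].
img = np.array([[100, 101], [102, 103]], dtype=np.uint8)
out = equalize(img)
```

Note the CDF is single valued and monotonically increasing, so this transformation satisfies the conditions discussed in questions 21 and 22.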

26.  In Histogram Matching r and z are gray level of input and output image and p stands for PDF, then, what does pz(z) stands for?
a) Specific probability density function
b) Specified pixel distribution function
c) Specific pixel density function
d) Specified probability density function

Explanation: In Histogram Matching, pr(r) is estimated from the input image, while pz(z) is the specified probability density function that the output image is supposed to have.

27.  Which of the following histogram processing techniques is global?
a) Histogram Linearization
b) Histogram Specification
c) Histogram Matching
d) All of the mentioned

Explanation: All of the mentioned methods modify pixel values using transformations based on the gray-level content of the whole image.

28. If the histograms of the same image, captured at different contrasts, are different, what is the relation between the histogram equalized images?

a) They look visually very different from one another

b) They look visually very similar to one another

c) They look visually different from one another just like the input images

d) None of the mentioned

Explanation: This is because the content of the images is the same; only the contrast differs.

Histogram equalization increases the contrast, making the gray-level differences between the equalized outputs visually indistinguishable, so they look very similar to one another.

29. In neighborhood operations, work is done with the values of the image pixels in the neighborhood and the corresponding values of a subimage that has the same dimensions as the neighborhood. The subimage is referred to as _________

a) Filter

b) Mask

c) Template

d) All of the mentioned

Explanation: In neighborhood operations, work is done with the values of a subimage, having the same dimensions as the neighborhood, corresponding to the image pixels under it. The subimage is called a filter, mask, template, kernel, or window.

30. The response for linear spatial filtering is given by the relationship __________

a) The sum of the products of the filter coefficients and the corresponding image pixels under the filter mask

b) The difference of the products of the filter coefficients and the corresponding image pixels under the filter mask

c) The product of the filter coefficients and the corresponding image pixels under the filter mask

d) None of the mentioned

Explanation: In spatial filtering, the mask is moved from point to point, and at each point the response is calculated using a predefined relationship. In linear spatial filtering, the response is the sum of the products of the filter coefficients and the corresponding image pixels in the area under the filter mask.
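For one mask position, that response is just an elementwise multiply-and-sum. A minimal NumPy sketch (names are illustrative):

```python
import numpy as np

def response(region, mask):
    """Linear filter response: sum of coefficient * corresponding pixel."""
    return float(np.sum(mask * region))

# A 3x3 averaging mask over a constant neighborhood of 10s returns 10.
mask = np.full((3, 3), 1 / 9)
region = np.full((3, 3), 10.0)
r = response(region, mask)
```

In a full filtering pass, this computation is repeated with the mask centered on every pixel of the image.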

31. In linear spatial filtering, what is the pixel of the image under mask corresponding to the mask coefficient w (1, -1), assuming a 3*3 mask?

a) f (x, -y)

b) f (x + 1, y)

c) f (x, y – 1)

d) f (x + 1, y – 1)

Explanation: The pixel corresponding to mask coefficient (a 3*3 mask) w (0, 0) is f (x, y), and so for w (1, -1) is f (x + 1, y – 1).

32. Which of the following is/are a nonlinear operation?

a) Computation of variance

b) Computation of median

c) All of the mentioned

d) None of the mentioned

Explanation: Computation of variance as well as median comes under nonlinear operation.

33. Which of the following is/are used as basic function in nonlinear filter for noise reduction?

a) Computation of variance

b) Computation of median

c) All of the mentioned

d) None of the mentioned

Explanation: Computation of median gray-level value in the neighborhood is the basic function of nonlinear filter for noise reduction.

34. In neighborhood operations for spatial filtering with an n*n square mask, if the center of the mask is restricted to be at a distance ≥ (n – 1)/2 pixels from the border of the image, what happens to the resultant image?

a) The resultant image will be of same size as original image

b) The resultant image will be a little larger size than original image

c) The resultant image will be a little smaller size than original image

d) None of the mentioned

Explanation: If the center of the mask must be at a distance ≥ (n – 1)/2 pixels from the border, the border pixels never fall under the mask center and are not processed, so the resultant image is a little smaller than the original.

35. Which of the following method is/are used for padding the image?

a) Adding rows and column of 0 or other constant gray level

b) Simply replicating the rows or columns

c) All of the mentioned

d) None of the mentioned

Explanation: In neighborhood operations for spatial filtering with a square mask, the original image is padded so that the filtered image has the same size as the original. Padding is done by adding rows and columns of 0 (or another constant gray level), or by replicating the border rows and columns of the original image.
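Both padding methods mentioned above are available via NumPy's `np.pad`, shown here on a toy 2x2 array:

```python
import numpy as np

img = np.array([[1, 2],
                [3, 4]])

# Method 1: add rows/columns of a constant gray level (0 here).
zero_pad = np.pad(img, 1, mode="constant", constant_values=0)

# Method 2: replicate the border rows and columns.
edge_pad = np.pad(img, 1, mode="edge")
```

After padding by (n - 1)/2 on each side, an n*n mask can be centered on every original pixel, giving a filtered image of the original size.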

36. In neighborhood operations for spatial filtering using an n*n square mask, which of the following approaches is/are used to obtain a perfectly filtered result, irrespective of the mask size?

a) By padding the image

b) By filtering all the pixels only with the mask section that is fully contained in the image

c) By ensuring that the center of the mask is at a distance ≥ (n – 1)/2 pixels from the border of the image

d) None of the mentioned

Explanation: By ensuring that the center of the mask is at a distance ≥ (n – 1)/2 pixels from the border, the resultant image is smaller, but every output pixel is the result of full filter processing, so the result is fully filtered.

The other approaches are not: padding affects the values near the edges, and this effect grows with mask size, while filtering only with the mask section contained in the image leaves a band of pixels near the border that is processed with only a partial filter mask.

37. Noise reduction is obtained by blurring the image using smoothing filter.

a) True

b) False

Explanation: Noise reduction is obtained by blurring the image with a smoothing filter. Blurring is used in pre-processing steps, such as removal of small details from an image prior to object extraction, and bridging of small gaps in lines or curves.

38. What is the output of a smoothing, linear spatial filter?

a) Median of pixels

b) Maximum of pixels

c) Minimum of pixels

d) Average of pixels

Explanation: The output or response of a smoothing, linear spatial filter is simply the average of the pixels contained in the neighbourhood of the filter mask.

39. Smoothing linear filter is also known as median filter.

a) True

b) False

Explanation: Since the smoothing linear spatial filter performs an average of the pixels, it is called an averaging filter (the median filter, by contrast, is nonlinear).

40. Which of the following in an image can be removed by using smoothing filter?

a) Smooth transitions of gray levels

b) Smooth transitions of brightness levels

c) Sharp transitions of gray levels

d) Sharp transitions of brightness levels

Explanation: A smoothing filter replaces the value of every pixel in an image by the average of the gray levels in its neighborhood, which helps remove sharp transitions in gray level between pixels. This works because random noise typically consists of sharp transitions in gray levels.

41. Which of the following is the disadvantage of using smoothing filter?

a) Blur edges

b) Blur inner pixels

c) Remove sharp transitions

d) Sharp edges

Explanation: Edges, which almost always are desirable features of an image, also are characterized by sharp transitions in gray level. So, averaging filters have an undesirable side effect that they blur these edges.

42. Smoothing spatial filters doesn’t smooth the false contours.

a) True

b) False

Explanation: One of the applications of smoothing spatial filters is that they help in smoothing the false contours that result from using an insufficient number of gray levels.

43. Which of the following shows three basic types of functions used frequently for image enhancement?
a) Linear, logarithmic and inverse law
b) Power law, logarithmic and inverse law
c) Linear, logarithmic and power law
d) Linear, exponential and inverse law

44. The negative of an image with gray levels in the range [0, L-1] is obtained by the negative transformation, which is given by which expression?
a) s=L+1-r
b) s=L+1+r
c) s=L-1-r
d) s=L-1+r

Explanation: The negative of an image with gray levels in the range[0,L-1] is obtained by using the negative transformation, which is given by the expression: s=L-1-r.
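The transformation is a one-liner, and applying it twice restores the original image (a sketch assuming NumPy; names are illustrative):

```python
import numpy as np

def negative(img, L=256):
    """Negative transformation: s = (L - 1) - r."""
    return (L - 1) - img

img = np.array([[0, 64], [191, 255]])
neg = negative(img)        # dark pixels become bright and vice versa
restored = negative(neg)   # the transformation is its own inverse
```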

45. Box filter is a type of smoothing filter.

a) True

b) False

Explanation: A spatial averaging (smoothing) filter in which all the coefficients are equal is also called a box filter.
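A box filter can be sketched directly from the definition: all n*n coefficients equal 1/n², i.e., a plain mean over each window. This sketch uses no padding, so the output shrinks as discussed in question 34 (names are illustrative):

```python
import numpy as np

def box_filter(img, n=3):
    """n*n box filter: every coefficient is 1/n^2, i.e. a plain mean."""
    h, w = img.shape
    out = np.zeros((h - n + 1, w - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + n, j:j + n].mean()
    return out

# A single bright spike of 9.0 is attenuated to 1.0 in every window.
img = np.zeros((5, 5))
img[2, 2] = 9.0
out = box_filter(img)
```

The spike is not removed, only spread out and weakened, which is why averaging blurs edges as well as noise.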

46. In which type of slicing, highlighting a specific range of gray levels in an image often is desired?
a) Gray-level slicing
b) Bit-plane slicing
c) Contrast stretching
d) Byte-level slicing

Explanation: Highlighting a specific range of gray levels in an image often is desired in gray-level slicing. Applications include enhancing features such as masses of water in satellite imagery and enhancing flaws in X-ray images.
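In its simplest form, gray-level slicing maps the band of interest to white and everything else to black. A sketch assuming NumPy (the function name and thresholds are illustrative):

```python
import numpy as np

def gray_level_slice(img, lo, hi, high=255, low=0):
    """Highlight gray levels in [lo, hi]; suppress the rest."""
    return np.where((img >= lo) & (img <= hi), high, low)

img = np.array([100, 150, 200])
out = gray_level_slice(img, 120, 180)  # only 150 lies in the band
```

A variant keeps the levels outside the band unchanged instead of forcing them to `low`.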

47. Which of the following comes under the application of image blurring?

a) Object detection

b) Gross representation

c) Object motion

d) Image segmentation

Explanation: An important application of spatial averaging is to blur an image to obtain a gross representation of the objects of interest, such that the intensity of small objects blends with the background and large objects become easy to detect.

48. Which of the following filters response is based on ranking of pixels?

a) Nonlinear smoothing filters

b) Linear smoothing filters

c) Sharpening filters

d) Geometric mean filter

Explanation: Order-statistic filters are nonlinear smoothing spatial filters whose response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter, and then replacing the value of the center pixel with the value determined by the ranking result.

49. Median filter belongs to which category of filters?

a) Linear spatial filter

b) Frequency domain filter

c) Order-statistic filter

d) Sharpening filter

Explanation: The median filter belongs to the order-statistic filters, which, as the name implies, replace the value of a pixel by the median of the gray levels present in the neighbourhood of that pixel.

50. Median filters are effective in the presence of impulse noise.

a) True

b) False

Explanation: Median filters are used to remove impulse noise, also called salt-and-pepper noise because of its appearance as white and black dots in the image.

51. What is the maximum area of the cluster that can be eliminated by using an n×n median filter?

a) n2

b) n2/2

c) 2*n2

d) n

Explanation: Isolated clusters of pixels that are light or dark with respect to their neighbours, and whose area is less than n2/2, i.e., half the area of the filter, can be eliminated by using an n×n median filter.
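A 3x3 median filter sketch (NumPy, interior pixels only; names are illustrative) shows a one-pixel salt spike, whose area 1 is well below n²/2 = 4.5, vanishing completely rather than being merely attenuated:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter applied to the interior pixels."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

img = np.zeros((5, 5))
img[2, 2] = 255.0          # isolated "salt" pixel
out = median_filter3(img)  # median of eight 0s and one 255 is 0
```

Contrast this with the box filter above, which only spreads the spike out.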

52. Which of the following is the primary objective of sharpening of an image?

a) Blurring the image

b) Highlight fine details in the image

c) Increase the brightness of the image

d) Decrease the brightness of the image

Explanation: The sharpening of image helps in highlighting the fine details that are present in the image or to enhance the details that are blurred due to some reason like adding noise.

53. Image sharpening process is used in electronic printing.

a) True

b) False

Explanation: The applications of image sharpening is present in various fields like electronic printing, autonomous guidance in military systems, medical imaging and industrial inspection.

54. In spatial domain, which of the following operation is done on the pixels in sharpening the image?

a) Integration

b) Average

c) Median

d) Differentiation

Explanation: We know that, in blurring the image, we perform the average of pixels which can be considered as integration. As sharpening is the opposite process of blurring, logically we can tell that we perform differentiation on the pixels to sharpen the image.

55. Image differentiation enhances the edges, discontinuities and deemphasizes the pixels with slow varying gray levels.

a) True

b) False

Explanation: Fundamentally, the strength of the response of the derivative operative is proportional to the degree of discontinuity in the image. So, we can state that image differentiation enhances the edges, discontinuities and deemphasizes the pixels with slow varying gray levels.

56. In which of the following cases, we wouldn’t worry about the behaviour of sharpening filter?

a) Flat segments

b) Step discontinuities

c) Ramp discontinuities

d) Slow varying gray values

Explanation: We are interested in the behaviour of derivatives used in sharpening in the constant gray level areas i.e., flat segments, and at the onset and end of discontinuities, i.e., step and ramp discontinuities.

57. Which of the following is the valid response when we apply a first derivative?

a) Non-zero at flat segments

b) Zero at the onset of gray level step

c) Zero in flat segments

d) Zero along ramps

Explanation: Derivatives of digital functions are defined in terms of differences. The definition we use for the first derivative should be zero in flat segments, nonzero at the onset of a gray-level step or ramp, and nonzero along ramps.

58. Which of the following is not a valid response when we apply a second derivative?

a) Zero response at onset of gray level step

b) Nonzero response at onset of gray level step

c) Zero response at flat segments

d) Nonzero response along the ramps

Explanation: Derivatives of digital functions are defined in terms of differences. The definition we use for the second derivative should be zero in flat segments, nonzero at the onset and end of a gray-level step or ramp, and zero along ramps.

59. If f(x,y) is an image function of two variables, then the first order derivative of a one dimensional function, f(x) is:

a) f(x+1)-f(x)

b) f(x)-f(x+1)

c) f(x-1)-f(x+1)

d) f(x)+f(x-1)

Explanation: The first order derivative of a single dimensional function f(x) is the difference between f(x) and f(x+1).

That is, ∂f/∂x=f(x+1)-f(x).
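These difference definitions are easy to check numerically on a signal containing a ramp, a flat segment, and a step (a sketch assuming NumPy):

```python
import numpy as np

# Ramp (0..3), flat segment (3, 3), then a step up to 9.
f = np.array([0, 1, 2, 3, 3, 3, 9], dtype=float)

first = f[1:] - f[:-1]                  # f(x+1) - f(x)
second = f[2:] - 2 * f[1:-1] + f[:-2]   # f(x+1) - 2 f(x) + f(x-1)
# first is nonzero all along the ramp and at the step, zero on the flat part;
# second is zero along the ramp interior and nonzero only at transitions.
```

This matches questions 57, 58 and 61: the first derivative responds along the whole ramp (thicker edges), while the second derivative responds only at the transitions (finer edges).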

60. Isolated point is also called as noise point.

a) True

b) False

Explanation: A point whose gray level is very high or very low compared to its neighbours is called an isolated point or noise point. A noise point is one pixel in size.

61. What is the thickness of the edges produced by first order derivatives when compared to that of second order derivatives?

a) Finer

b) Equal

c) Thicker

d) Independent

Explanation: We know that, the first order derivative is nonzero along the entire ramp while the second order is zero along the ramp. So, we can conclude that the first order derivatives produce thicker edges and the second order derivatives produce much finer edges.

62. First order derivative can enhance the fine detail in the image compared to that of second order derivative.

a) True

b) False

Explanation: The response at and around the noise point is much stronger for the second order derivative than for the first order derivative. So, we can state that the second order derivative is better to enhance the fine details in the image including noise when compared to that of first order derivative.

63. Which of the following derivatives produce a double response at step changes in gray level?

a) First order derivative

b) Third order derivative

c) Second order derivative

d) First and second order derivatives

Explanation: Second order derivatives produce a double line response for the step changes in the gray level. We also note of second-order derivatives that, for similar changes in gray-level values in an image, their response is stronger to a line than to a step, and to a point than to a line.
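The double response is easy to see numerically: across an ideal step, the second difference produces a positive/negative pair (a sketch assuming NumPy):

```python
import numpy as np

f = np.array([0, 0, 0, 6, 6, 6], dtype=float)   # ideal step in gray level
second = f[2:] - 2 * f[1:-1] + f[:-2]
# second == [0, 6, -6, 0]: one positive and one negative spike
# straddling the step, i.e. the "double response".
```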

#### Module 06

1.Digital functions’ derivatives are defined as

a. differences

b. multiplication

c. addition

d. division

2.For line detection we use mask that is

a. Gaussian

b. laplacian

c. ideal

d. butterworth

3.The gradient magnitude is commonly approximated as

a. |Gx|+|Gy|

b. |Gx|-|Gy|

c. |Gx|/|Gy|

d. |Gx|x|Gy|

4.For finding horizontal lines we use mask of values

a.[-1 -1 -1; 2 2 2; -1 -1 -1]

b.[2 -1 -1; -1 2 -1; -1 -1 2]

c.[-1 2 -1; -1 2 -1; -1 2 -1]

d.[-1 -1 2; -1 2 -1;2 -1 -1]
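As a quick check of the horizontal mask in option (a), assuming NumPy: it responds strongly to a one-pixel-thick horizontal bright line and not at all to the same line rotated vertical.

```python
import numpy as np

horiz = np.array([[-1, -1, -1],
                  [ 2,  2,  2],
                  [-1, -1, -1]])       # horizontal line mask, option (a)

h_line = np.array([[0, 0, 0],
                   [5, 5, 5],
                   [0, 0, 0]])         # bright horizontal line
v_line = h_line.T                      # same line, rotated vertical

r_h = int(np.sum(horiz * h_line))      # strong response: 2*5*3 = 30
r_v = int(np.sum(horiz * v_line))      # (-5) + 10 + (-5) = 0
```

The other masks in the list respond analogously to lines at +45, vertical, and -45 degrees.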

5.If the inner region of the object is textured then approach we use is

a.discontinuity

b.similarity

c.extraction

d.recognition

6.The horizontal gradient pixels are denoted by

a.Gx

b.Gy

c.Gt

d.Gs

7.Taking absolute values in the Laplacian image, to avoid negative values, doubles the

a.thickness of lines

b.thinness of lines

c.thickness of edges

d.thinness of edges

8.First derivative approximation says that values of constant intensities must be

a. 1

b. 0

c. positive

d. negative

9.For finding lines at angle 45 we use mask of values

a.[-1 -1 -1; 2 2 2; -1 -1 -1]

b.[2 -1 -1; -1 2 -1; -1 -1 2]

c.[-1 2 -1; -1 2 -1; -1 2 -1]

d.[-1 -1 2; -1 2 -1;2 -1 -1]

10.Second derivative approximation says that values along the ramp must be

a. nonzero

b. zero

c. positive

d. negative

11.Ri is a connected set, where i is

a. 1,2,3,4

b. 1,2,3…10

c. 1,2,3…50

d. 1,2,3…n

12.Gradient magnitude images are more useful in

a. point detection

b. line detection

c. area detection

d. edge detection

13.Image having gradient pixels is called

a.sharp image

b.blur image

d.binary image

14.In laplacian images light shades of gray level is represented by

a. 0

b. 1

c. positive

d. negative

15.For noise reduction we use

a. image smoothing

b. image contouring

c. image enhancement

d. image recognition

16.Diagonal lines are angles at

a. 0

b. 30

c. 45

d. 90

17.Transition between objects and background shows

a. ramp edges

b. step edges

c. sharp edges

d. Both a and b

18.Horizontal lines are angles at

a. 0

b. 30

c. 45

d. 90

19. Standard deviation is referred to as noiseless if having the value

a. 0.1

b. 0.2

c. 0.3

d. 0.4

20. For edge detection we use

a. first derivative

b. second derivative

c. third derivative

d. Both a and b

21.Step edge transition is between pixels over the distance of

a. 1 pixel

b. 2 pixels

c. 3 pixels

d. 4 pixels

22. Sobel gradient is not that good for detection of

a. horizontal lines

b. vertical lines

c. Diagonal lines

d. edges

23. Smoothing reduces the blocks of

a. pixels

b. constant intensities

c. point pixels

d. edges

24. Second derivative approximation says that it is nonzero only at

a. ramp

b. step

c. onset

d. edges

25. A method in which images are input and attributes are output is called

a. low level processes

b. high level processes

c. mid level processes

d. edge level processes

26. Computation of derivatives in segmentation is also called

a. spatial filtering

b. frequency filtering

c. low pass filtering

d. high pass filtering

27. The model of lines through a region is called

a. ramp edges

b. step edge

c. roof edges

d. thinness of edges

28. Transition of intensity takes place between

a. adjacent pixels

b. near pixels

c. edge pixels

d. line pixels

29. Averaging is analogous to

a. differentiation

b. derivation

d. integration

30. Response of derivative mask is zero at

a. sharp intensities

b. constant intensities

c. low intensities

d. high intensities

31. Subdivision of the image depends upon the

a. problem

b. objects

c. image

d. partition

32. One that is not a method of image segmentation is

a. area

b. line

c. point

d. edge

33. Discontinuity approach of segmentation depends upon

a. low frequencies

b. smooth changes

c. abrupt changes

d. contrast

34. On a ramp and a step, second derivatives produce

a. single edge effect

b. single effect

c. double edge effect

d. double line effect

35. Point detection is done using filter that is

a. Gaussian

b. laplacian

c. ideal

d. butterworth

36. Second derivatives are zero at points on

a. ramp

b. step

c. constant intensity

d. edge

37. Two regions are said to be adjacent if their union forms

a. connected set

b. boundaries

c. region

d. image

38. An 8-bit image has intensity levels of

a. 0

b. 128

c. 255

d. 256

39. Sobel operators were introduced in

a. 1970

b. 1971

c. 1972

d. 1973

40. Blurring attenuates the

a. pixels

b. points