
[MCQ] High Performance Computing

1 : The law that predicts the speedup obtainable from parallel processors
a.Moore's Law
b.Minsky's conjecture
c.Flynn's Law
d.Amdahl's Law
Answer:
Amdahl's Law
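Amdahl's law can be checked numerically. A minimal Python sketch (the function name is mine, not from the question set):

```python
def amdahl_speedup(serial_fraction, p):
    """Amdahl's law: speedup on p processors when a fixed
    fraction of the work is inherently serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# With 10% serial work the speedup saturates well below p:
print(amdahl_speedup(0.1, 4))     # modest gain on 4 processors
print(amdahl_speedup(0.1, 1000))  # approaches the 1/0.1 = 10x ceiling
```

Note how the serial fraction, not the processor count, sets the ceiling.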

2 : Which of the following interrupts is non-maskable
a.INTR
b.RST 7.5
c.RST 6.5
d.TRAP
Answer:
TRAP

3 : Which of the following is desired in an HPC system
a.Adaptivity
b.Transparency
c.Dependency
d.Secretive
Answer
Transparency

4 : When every cache-hierarchy level is a subset of the level further away from the processor, the property is called
a.Synchronous
b.Atomic synchronous
c.Distributors
d.Multilevel inclusion
Answer
Multilevel inclusion

5 : ______________ leads to concurrency
a.Serialization
b.Cloud computing
c.Distribution
d.Parallelism
Answer
Parallelism


6 : The problem where process concurrency becomes an issue is called the ___________
a.Reader-writer problem
b.Banker's problem
c.Bakery problem
d.Philosophers problem
Answer
Reader-writer problem

7 : Interprocess communication takes place via
a.Centralized memory
b.Message passing
c.Shared memory
d.Cache memory
Answer
Shared memory

8 : Speedup can be as low as ____
a.1
b.2
c.0
d.3
Answer
0

9 : The type of parallelism that uses micro-architectural techniques is
a.bit based
b.bit level
c.increasing
d.instructional
Answer
instructional

10 : MPI_Comm_size
a.Returns number of processes
b.Returns number of line
c.Returns size of program
d.Returns value of instruction
Answer
Returns number of processes

11 : In high-performance computing, the computational tasks of the system are done by
a.node clusters
b.network clusters
c.Beowulf clusters
d.compute nodes
Answer
compute nodes

12 : MPI_Comm_rank
a.returns rank
b.returns processes
c.returns value
d.Returns value of instruction
Answer
returns rank

13 : A processor performing fetch or decoding of different instruction during the execution of another instruction is called ______ .
a.Super-scaling
b.Pipe-lining
c.Parallel Computation
d.distributed
Answer
Pipe-lining

14 : Any condition that causes a processor to stall is called as _________
a.page fault
b.system error
c.Hazard
d.execution error
Answer
Hazard

15 : Characteristic of RISC (Reduced Instruction Set Computer) instruction set is
a.one word instruction
b.two word instruction
c.three word instruction
d.four word instruction
Answer
one word instruction


16 : The disadvantage of using a parallel mode of communication is ______
a.Leads to erroneous data transfer
b.It is costly
c.Security of data
d.complexity of network
Answer
It is costly

17 : A microprogram sequencer
a.generates the address of next micro instruction to be executed.
b.generates the control signals to execute a microinstruction.
c.sequentially accesses all microinstructions in the control memory.
d.enables the efficient handling of a micro program subroutine.
Answer
generates the address of next micro instruction to be executed.

18 : The ___ time collectively spent by all the processing elements: Tall = p × TP
a.total
b.Average
c.mean
d.sum
Answer
total
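The quantity in this question can be written out directly. The overhead To = p·TP − TS used below is the usual companion definition from parallel-computing texts, not from the question itself:

```python
def total_time(p, t_p):
    # Tall = p * TP: time collectively spent by all p processing elements
    return p * t_p

def total_overhead(p, t_p, t_s):
    # To = p * TP - TS: collective work beyond what the serial run needed
    return total_time(p, t_p) - t_s

print(total_time(4, 30))          # 4 PEs busy for 30 time units each
print(total_overhead(4, 30, 100)) # overhead relative to a 100-unit serial run
```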

19 : In a distributed computing environment, distributed shared memory is used which is_____________
a.Logical combination of virtual memories on the nodes
b.Logical combination of physical memories on the nodes
c.Logical combination of the secondary memories on all the nodes
d.Logical combination of files
Answer
Logical combination of physical memories on the nodes

20 : The average number of steps taken to execute the set of instructions can be made to be less than one by following _______ .
a.Sequential
b.super-scaling
c.pipe-lining
d.ISA
Answer
super-scaling

21 : The main difference between the VLIW and the other approaches to improve performance is ___________
a.increase in performance
b.Lack of complex hardware design
c.Cost effectiveness
d.latency
Answer
Lack of complex hardware design

22 : CISC stands for
a.Complete Instruction Sequential Compilation
b.Complete Instruction Sequential Compiler
c.Complete Instruction Serial Compilation
d.Complex Instruction set computer
Answer
Complex Instruction set computer

23 : Speedup, in theory, should be ______ bounded by p
a.lower
b.upper
c.left
d.right
Answer
upper

24 : Virtualization that creates a single address-space architecture is called
a.Loosely coupled
b.Space based
c.Tightly coupled
d.peer-to-peer
Answer
Space based

25 : MPI_Init
a.Close MPI environment
b.Initialize MPI environment
c.start programing
d.Call processes
Answer
Initialize MPI environment

26 : Content of the program counter is added to the address part of the instruction in order to obtain the effective address is called
a.relative address mode
b.index addressing mode
c.register mode
d.implied mode
Answer
relative address mode

27 : The straight-forward model used for the memory consistency, is called
a.Sequential consistency
b.Random consistency
c.Remote node
d.Host node
Answer
Sequential consistency

28 : Which MIMD systems are best scalable with respect to the number of processors
a.Distributed memory
b.ccNUMA
c.nccNUMA
d.Symmetric multiprocessor
Answer
Distributed memory

29 : Memory management on a multiprocessor must deal with all the issues found on a
a.Uniprocessor Computer
b.Computer
c.Processor
d.System
Answer
Uniprocessor Computer

30 : The ___ time collectively spent by all the processing elements: Tall = p × TP
a.total
b.sum
c.average
d.product
Answer
total


31 : Hazards are eliminated through renaming by renaming all
a.Source register
b.Memory
c.Data
d.Destination register
Answer
Destination register

32 : The situation wherein the data of operands are not available is called ______
a.Stall
b.Deadlock
c.data hazard
d.structural hazard
Answer
data hazard

33 : Which of the following is a type of HPC application
a.Mass Media
b.Business
c.Management
d.Science
Answer
Science

34 : A distributed operating system must provide a mechanism for
a.intraprocessor communication
b.intraprocess and intraprocessor communication
c.interprocess and interprocessor communication
d.interprocessor communication
Answer
interprocess and interprocessor communication


35 : This is computation not performed by the serial version
a.Serial computation
b.Excess computation
c.serial computation
d.parallel computing
Answer
Excess computation

36 : The important feature of the VLIW is ______
a.ILP
b.Performance
c.Cost effectiveness
d.delay
Answer
ILP

37 : The tightly coupled set of threads executing on a single task is called
a.Multithreading
b.Parallel processing
c.Recurrence
d.Serial processing
Answer
Multithreading

38 : Which of the following is a parallel algorithm model
a.Data parallel model
b.Bit model
c.Data model
d.network model
Answer
Data parallel model

39 : MPI_Recv is used to
a.reverse message
b.receive message
c.forward message
d.Collect message
Answer
receive message

40 : Status bit is also called
a.Binary bit
b.Flag bit
c.Signed bit
d.Unsigned bit
Answer
Flag bit


41 : The misses that arise from interprocessor communication are called
a.hit rate
b.coherence misses
c.commit misses
d.parallel processing
Answer
coherence misses

42 : The interconnection topologies are implemented using _________ as a node.
a.control unit
b.microprocessor
c.processing unit
d.microprocessor or processing unit
Answer
————-

43 : _________ gives the theoretical speedup in latency of the execution of a task at fixed execution time
a.Amdahl's law
b.Moore's law
c.Metcalfe's law
d.Gustafson's law
Answer
Gustafson’s law
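Gustafson's scaled speedup is easy to contrast with Amdahl's fixed-size formula. A minimal sketch (function name is mine):

```python
def gustafson_speedup(serial_fraction, p):
    """Gustafson's law: scaled speedup at fixed execution time,
    S = p - alpha * (p - 1), where alpha is the serial fraction."""
    return p - serial_fraction * (p - 1)

print(gustafson_speedup(0.1, 64))  # grows almost linearly with p
```

Because the problem size grows with the machine, the speedup keeps climbing with p instead of saturating at 1/alpha.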

44 : The number and size of tasks into which a problem is decomposed determine the
a.fine granularity
b.coarse granularity
c.subtask
d.granularity
Answer
granularity

45 : MPI_Finalize is used to
a.Stop the MPI environment
b.Initialise the program
c.Include header files
d.Start the program
Answer
Stop the MPI environment
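Questions 10, 12, 25, and 45 together describe the standard MPI program lifecycle: initialize, query size and rank, finalize. MPI itself is a C/Fortran library, so the sketch below is only a pure-Python mock of those call semantics (the class, its single-process simulation, and all names are mine, not real MPI):

```python
class MockMPI:
    """Toy stand-in mirroring the semantics of the MPI calls above."""

    def __init__(self, n_processes):
        self._size = n_processes
        self.initialized = False

    def Init(self):             # MPI_Init: initialize the MPI environment
        self.initialized = True

    def Comm_size(self):        # MPI_Comm_size: number of processes
        return self._size

    def Comm_rank(self, pid):   # MPI_Comm_rank: rank of the calling process
        return pid              # ranks run 0 .. size-1

    def Finalize(self):         # MPI_Finalize: stop the MPI environment
        self.initialized = False

mpi = MockMPI(n_processes=4)
mpi.Init()
print(mpi.Comm_size())   # 4
print(mpi.Comm_rank(2))  # 2
mpi.Finalize()
```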


46 : Private data is data used by a ________, whereas shared data is used by multiple processors
a.Single processor
b.Multi processor
c.Single tasking
d.Multi tasking
Answer
Single processor

47 : The time lost due to the branch instruction is often referred to as ____________
a.Delay
b.Branch penalty
c.Latency
d.control hazard
Answer
Branch penalty

48 : NUMA architecture uses _______ in design
a.cache
b.shared memory
c.message passing
d.distributed memory
Answer
distributed memory

49 : The Divide and Conquer approach is known for
a.Sequential algorithm development
b.Parallel algorithm development
c.Task defined algorithm
d.Non defined Algorithm
Answer
Sequential algorithm development

50 : The parallelism across branches requires which scheduling
a.Global scheduling
b.Local Scheduling
c.post scheduling
d.pre scheduling
Answer
Global scheduling

51 : Parallel processing may occur
a.In the data stream
b.In instruction stream
c.In network
d.In transferring
Answer
In the data stream

52 : Pipe-lining is a unique feature of _______.
a.CISC
b.RISC
c.ISA
d.IANA
Answer
RISC

53 : In MPI programming, MPI_CHAR is the datatype for
a.Unsigned char
b.Signed character
c.Long char
d.Unsigned long char
Answer
Signed character

54 : To increase the speed of memory access in pipelining, we make use of _______
a.Special memory locations
b.Special purpose registers
c.Cache
d.Buffer
Answer
Cache

55 : If the value V(x) of the target operand is contained in the address field itself, the addressing mode is
a.Immediate
b.Direct
c.Indirect
d.Implied
Answer
Immediate


56 : In a multi-processor configuration two coprocessors are connected to host 8086 processor. The instruction sets of the two coprocessors
a.must be same
b.may overlap
c.must be disjoint
d.must be the same as that of host
Answer
————-

57 : A feature of a task-dependency graph that determines the average degree of concurrency for a given granularity is its ___________ path
a.critical
b.easy
c.difficult
d.ambiguous
Answer
critical

58 : MPI_send used for
a.collect message
b.transfer message
c.send message
d.receive message
Answer
send message

59 : What is usually regarded as the von Neumann Bottleneck
a.Instruction set
b.Arithmetic logical unit
c.Processor/memory interface
d.Control unit
Answer
Processor/memory interface

60 : An interface between the user or an application program, and the system resources is
a.Microprocessor
b.Microcontroller
c.Multimicroprocessor
d.operating system
Answer
operating system

61 : The computer architecture aimed at reducing the time of execution of instructions is ________.
a.CISC
b.RISC
c.SPARC
d.ISA
Answer
RISC

62 : A parallel computer is capable of
a.Decentralized computing
b.Parallel computing
c.Distributed computing
d.centralized computing
Answer
Parallel computing

63 : Design of _______ processor is complex
a.parallel
b.pipeline
c.serial
d.distributed
Answer
pipeline

64 : The instructions which copy information from one location to another either in the processor’s internal register set or in the external main memory are called
a.Data transfer instructions
b.Program control instructions
c.Input-output instructions
d.Logical instructions
Answer
Data transfer instructions

65 : The pattern of___________ among tasks is captured by what is known as a task-interaction graph
a.interaction
b.communication
c.optimization
d.flow
Answer
interaction


66 : In vector processor a single instruction, can ask for ____________ data operations
a.multiple
b.single
c.two
d.four
Answer
multiple

67 : The cost of parallel processing is primarily determined by
a.switching complexity
b.circuit complexity
c.Time Complexity
d.space complexity
Answer
Time Complexity

68 : Interaction overheads can be minimized by ____
a.Maximize Data Locality
b.Maximize Volume of data exchange
c.Increase Bandwidth
d.Minimize social media contents
Answer
Maximize Data Locality

69 : This is computation not performed by the serial version
a.Excess Computation
b.serial computation
c.Parallel Computing
d.cluster computation
Answer
Excess Computation

70 : The cost of dynamic networks is often determined by the number of ____________ nodes in the network.
a.Packet
b.Ring
c.Static
d.Switching
Answer
Switching

71 : The contention for the usage of a hardware device is called ______
a.data hazard
b.Stall
c.Deadlock
d.structural hazard
Answer
structural hazard

72 : Which Algorithm is better choice for pipelining
a.Small Algorithm
b.Hash Algorithm
c.Merge-Sort Algorithm
d.Quick-Sort Algorithm
Answer
Merge-Sort Algorithm

73 : In MPI programming, MPI_Reduce is the instruction for
a.Full operation
b.Limited operation
c.reduction operation
d.selected operation
Answer
reduction operation

74 : The stalling of the processor due to the unavailability of the instructions is called as ___________
a.Input hazard
b.data hazard
c.structural hazard
d.control hazard
Answer
control hazard

75 : _____processors rely on compile time analysis to identify and bundle together instructions that can be executed concurrently
a.VILW
b.LVIW
c.VLIW
d.VLWI
Answer
VLIW

76 : The type of parallelism that is naturally expressed by independent tasks in a task-dependency graph is called _______ parallelism.
a.Task
b.Instruction
c.Data
d.Program
Answer
Task

77 : NSM has launched its first supercomputer at
a.BHU
b.IITB
c.IITKG
d.IITM
Answer
BHU

78 : Writing parallel programs is referred to as
a.Parallel computation
b.Parallel development
c.Parallel programming
d.Parallel processing
Answer
Parallel programming

79 : A processor performing fetch or decoding of different instruction during the execution of another instruction is called ______ .
a.Super-scaling
b.Pipe-lining
c.Parallel computation
d.serial computation
Answer
Pipe-lining

80 : Zero address instruction format is used for
a.RISC architecture
b.CISC architecture
c.Von-Neuman architecture
d.Stack-organized architecture
Answer
Stack-organized architecture

81 : An interface between the user or an application program, and the system resources are
a.microprocessor
b.microcontroller
c.multi-microprocessor
d.operating system
Answer
operating system

82 : The main objective in building the multi-microprocessor is
a.greater throughput
b.enhanced fault tolerance
c.greater throughput and enhanced fault tolerance
d.zero throughput
Answer
greater throughput and enhanced fault tolerance

83 : UMA architecture uses _______ in design
a.cache
b.shared memory
c.message passing
d.distributed memory
Answer
shared memory

84 : To which class of systems does the von Neumann computer belong
a.SIMD
b.MIMD
c.MISD
d.SISD
Answer
SISD

85 : A characteristic of CISC (Complex Instruction Set Computer) is
a.Variable format instruction
b.Fixed format instructions
c.Instruction are executed by hardware
d.unsign long char
Answer
Variable format instruction


86 : A _________ computation performs one multiply-add on a single pair of vector elements
a.dot product
b.cross product
c.multiply
d.add
Answer
dot product

87 : Data parallelism is parallelism inherent in
a.program loops
b.Serial program
c.parallel program
d.long programs
Answer
program loops

88 : What is the execution time per stage of a pipeline that has 5 equal stages and a mean overhead of 12 cycles
a.2 cycles
b.3 cycles
c.5 cycles
d.4 cycles
Answer
3 cycles

89 : An algorithm is called greedy because
a.the greedy algorithm never considers the same solution again
b.the greedy algorithm always give same solution again
c.the greedy algorithm never considers the optimal solution
d.the greedy algorithm never considers whole program
Answer
the greedy algorithm never considers the same solution again

90 : If n is a power of two, we can perform this operation in ____ steps by propagating partial sums up a logical binary tree of processors.
a.log n
b.n log n
c.n
d.n^2
Answer
log n
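The tree-based partial-sum idea can be sketched in a few lines. On parallel hardware every pair within a step would be added simultaneously; this serial sketch only counts the steps (function name is mine):

```python
def tree_sum(values):
    """Pairwise (binary-tree) summation. For n a power of two,
    partial sums propagate up the tree in log2(n) steps."""
    steps = 0
    while len(values) > 1:
        # combine adjacent pairs; one tree level per iteration
        values = [values[i] + values[i + 1] for i in range(0, len(values), 2)]
        steps += 1
    return values[0], steps

total, steps = tree_sum(list(range(8)))
print(total, steps)  # 28 in 3 steps (log2 8 = 3)
```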

91 : A multiprocessor machine which is capable of executing multiple instructions on multiple data sets
a.SISD
b.SIMD
c.MIMD
d.MISD
Answer
MIMD

92 : Tree networks suffer from a communication bottleneck at higher levels of the tree. The network that relieves this by widening the upper links is called a _________ tree.
a.fat
b.binary
c.order static
d.heap
Answer
fat

93 : Multiple applications running independently are typically called
a.Multiprogramming
b.Multithreading
c.Multitasking
d.Synchronization
Answer
Multiprogramming

94 : Each clock cycle of the pipelined execution becomes a
a.Previous stage
b.stall
c.previous cycle
d.pipe stage
Answer
pipe stage

95 : The main objective in building the multimicroprocessor is
a.greater throughput
b.enhanced fault tolerance
c.greater throughput and enhanced fault tolerance
d.none of the mentioned
Answer
greater throughput and enhanced fault tolerance


96 : Waiting until there are no data hazards, then
a.stall
b.write operand
c.Read operand
d.Branching
Answer
Read operand

97 : In message passing, messages are sent and received between
a.Tasks or processes
b.Task and Execution
c.Processor and Instruction
d.Instruction and decode
Answer
Tasks or processes

98 : We denote the serial runtime by TS and the parallel ____ by TP
a.runtime
b.clock time
c.processor time
d.clock frequency
Answer
runtime

99 : Computing on uniprocessor devices is called __________.
a.Grid computing
b.Centralized computing
c.Parallel computing
d.Distributed computing
Answer
Centralized computing

100 : The tightly coupled set of threads executing on a single task is called
a.Serial processing
b.parallel processing
c.Multithreading
d.Recurrent
Answer
Multithreading

101 : What is WAR
a.Write before read
b.Write after write
c.Write after read
d.Write with read
Answer
Write after read

102 : Partitioning refers to decomposing the computational activity into
a.Small tasks
b.Large tasks
c.Full program
d.Group of programs
Answer
Small tasks

103 : Speedup is defined as the ratio
a.S = Ts/Tp
b.S = Tp/Ts
c.Ts = S/Tp
d.Tp = S/Ts
Answer
S = Ts/Tp
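The ratio S = Ts/Tp is trivial to compute; the efficiency helper below (E = S/p) is the usual companion metric from parallel-computing texts, not something asked above:

```python
def speedup(t_s, t_p):
    # S = TS / TP: serial runtime over parallel runtime
    return t_s / t_p

def efficiency(t_s, t_p, p):
    # E = S / p: fraction of ideal p-fold speedup actually achieved
    return speedup(t_s, t_p) / p

print(speedup(100.0, 25.0))        # 4.0
print(efficiency(100.0, 25.0, 8))  # 0.5 -- half the ideal on 8 PEs
```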

104 : A processor that continuously tries to acquire a lock, spinning in a loop until it succeeds, is said to use
a.Spin locks
b.Store locks
c.Link locks
d.Store operational
Answer
Spin locks


105 : The pipelining strategy is implemented through
a.Instruction execution
b.Instruction prefetch
c.Instruction manipulation
d.instruction decoding
Answer
Instruction prefetch

106 : Parallel computing means to divide the job into several __________
a.Bit
b.Data
c.Instruction
d.Task
Answer
Task

107 : If a piece of data is repeatedly used, the effective latency of the memory system can be reduced by the __________.
a.RAM
b.ROM
c.Cache
d.HDD
Answer
Cache

108 : Processing of multiple tasks simultaneously on multiple processors is called
a.Parallel processing
b.Distributed processing
c.Uni-processing
d.Multi-processing
Answer
Parallel processing

109 : The buffer in the instruction-execution sequence that holds instruction results is known as the
a.Data buffer
b.control buffer
c.reorder buffer
d.ordered buffer
Answer
reorder buffer

110 : A multiprocessor operating system must take care of
a.authorized data access and data protection
b.unauthorized data access and data protection
c.authorized data access
d.data protection
Answer
————-

111 : The expression ‘delayed load’ is used in context of
a.prefetching
b.pipelining
c.processor-printer communication
d.memory-monitor communication
Answer
pipelining

112 : _________ is a method for inducing concurrency in problems that can be solved using the divide-and-conquer strategy.
a.exploratory decomposition
b.speculative decomposition
c.data-decomposition
d.Recursive decomposition
Answer
Recursive decomposition

113 : If no node has a copy of a cache block, the block's state is known as
a.Uniform memory access
b.Cached
c.Un-cached
d.Commit
Answer
Un-cached

Last Moment Tuitions
