Neural Networks and Fuzzy Logic Viva Questions 6
Introduction to ANN
1. Define ANN.
Ans:
a) ANN refers to Artificial Neural Network.
b) An ANN is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates.
2. ANNs mimic the biological neuron. Are the mappings in our neurons linear?
Ans:
a) Modeling and training are achieved by finding a relationship between the input features and the output vector; in an artificial neuron this relationship is a weighted sum, which is a linear mapping.
b) Thus, without activation functions an ANN is just a series of forward linear mappings, whereas biological neurons do not respond linearly.
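A minimal NumPy sketch (the layer sizes and random weights below are illustrative assumptions, not part of the question) showing why a stack of purely linear mappings collapses into a single linear mapping:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=3)         # input feature vector
W1 = rng.normal(size=(4, 3))   # first linear layer
W2 = rng.normal(size=(2, 4))   # second linear layer

# Two forward linear mappings applied in sequence ...
two_layer = W2 @ (W1 @ x)

# ... are equivalent to a single linear mapping with weights W2 @ W1.
one_layer = (W2 @ W1) @ x

print(np.allclose(two_layer, one_layer))  # True
```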
3. Why are activation functions introduced?
Ans:
a) Real-life problems presented to ANNs rarely involve a direct linear relationship between the ground-truth input features and the labels.
b) Thus, activation functions are associated with neurons to introduce non-linearity into the network.
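Continuing the sketch above, inserting a non-linear activation (tanh here, chosen purely for illustration) between the two linear layers prevents the network from collapsing into one linear mapping:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

# With a non-linearity between the layers, the network can no longer be
# rewritten as a single linear mapping W2 @ W1.
hidden = np.tanh(W1 @ x)   # non-linear activation on the hidden layer
output = W2 @ hidden
print(output)
```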
4. What is the gradient of the binary step function?
Ans:
a) The binary step function has zero gradient everywhere except at the threshold, where it is undefined.
b) Gradients are used by the error back-propagation training algorithm; if a neuron's activation function has zero or constant gradient, back-propagation provides no useful weight updates and learning will not happen.
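A small sketch (the central-difference helper below is a hypothetical illustration, not a library routine) confirming that the binary step function has zero gradient away from the threshold, so back-propagation receives no learning signal:

```python
import numpy as np

def binary_step(x, theta=0.0):
    # Binary step activation: 1 if x > theta, else 0.
    return np.where(x > theta, 1.0, 0.0)

def numerical_gradient(f, x, eps=1e-4):
    # Central-difference estimate of df/dx.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

xs = np.array([-2.0, -0.5, 0.5, 2.0])       # points away from the threshold
print(numerical_gradient(binary_step, xs))  # [0. 0. 0. 0.] -> nothing to back-propagate
```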
5. Define the activation function of the perceptron.
Ans:
y = f(y_in)
f(y_in) =  1  if y_in > θ
           0  if -θ <= y_in <= θ
          -1  if y_in < -θ
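A minimal Python sketch of the perceptron activation defined above (the threshold θ = 0.2 is an arbitrary illustrative value):

```python
def perceptron_activation(y_in, theta=0.2):
    # Tri-valued perceptron activation around the threshold theta.
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

# Net inputs above, inside and below the dead zone [-theta, theta]
print([perceptron_activation(v) for v in (0.5, 0.1, -0.1, -0.5)])  # [1, 0, 0, -1]
```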
6. Explain in brief the different architectures mentioned.
Ans:
• Single-Layer Feedforward Networks- Here, the number of nodes in the output layer depends on the number of outputs we desire from the network (a minimal forward pass is sketched after this list).
• Multi-Layer Feedforward Networks- This network is characterized by one or more hidden layers, called hidden because they have no direct contact with the visible inputs and outputs. The number of nodes in a hidden layer depends on the required functionality and the expected accuracy.
• Single Node with its Own Feedback- This is sometimes also known as a single-neuron recurrent network. It helps in situations where we want the neuron to learn from its own feedback.
• Single-Layer Recurrent Networks- Recurrent networks remember the past. Simple feedforward networks also retain information from the past, but only through the weights fixed during training; if we want the network to remember context across inputs, we use a recurrent network.
• Multi-Layer Recurrent Networks- Here, hidden layers are present in a recurrent network, so the same input set can produce a different output set depending on the context.
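A minimal sketch of the single-layer feedforward case from the first bullet (the layer sizes, random weights and step activation are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs, n_outputs = 4, 3   # output layer size = number of desired outputs
W = rng.normal(size=(n_outputs, n_inputs))
b = np.zeros(n_outputs)

def single_layer_forward(x):
    # One forward pass: weighted sum per output node, then a step activation.
    y_in = W @ x + b
    return np.where(y_in > 0, 1, 0)

print(single_layer_forward(rng.normal(size=n_inputs)))
```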
7. Explain ReLU.
Ans:
a) The rectified linear activation function, or ReLU for short, is a piecewise linear function that outputs the input directly if it is positive and outputs zero otherwise.
b) The rectified linear activation function overcomes the vanishing gradient problem, allowing models to learn faster and perform better.
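A short sketch of ReLU and its gradient (the sub-gradient at x = 0 is taken as 0, a common convention):

```python
import numpy as np

def relu(x):
    # ReLU: outputs the input directly if positive, otherwise zero.
    return np.maximum(0.0, x)

def relu_grad(x):
    # Gradient of ReLU: 1 for positive inputs, 0 otherwise (non-vanishing for x > 0).
    return (x > 0).astype(float)

xs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(xs))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(xs))  # [0. 0. 0. 1. 1.]
```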
8. Justify/contradict: An artificial neural network is fault tolerant.
Ans: Artificial neural networks are not inherently fault tolerant; rather, they must be purposely designed to behave that way, and fault tolerance can be introduced into an ANN through specific techniques. Biological neural networks, in contrast, are fault tolerant: if some paths are disconnected, they recover the information through other parts of the network.