a) ANN refers to Artificial Neural Networks.
b) ANN is a series of algorithms that endeavors to recognize underlying relationships in a
set of data through a process that mimics the way the human brain operates.
a) Modeling and training of the data are achieved by finding the relationship between the input features and the output vector.
b) Without activation functions, an ANN would thus be nothing more than a series of forward linear mappings.
a) Real-life problems presented to ANNs do not involve a direct linear relation between the ground-truth input features and the labels.
b) Thus, to introduce some kind of non-linearity, activation functions are associated with the neurons.
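A minimal NumPy sketch (illustrative, not part of the original notes; the layer sizes and random weights are arbitrary assumptions) showing why this matters: two stacked linear layers collapse into a single linear map, while putting a non-linear activation between them breaks that collapse.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))           # 4 samples, 3 input features
W1 = rng.normal(size=(3, 5))          # first linear layer
W2 = rng.normal(size=(5, 2))          # second linear layer

# Two stacked linear layers equal one linear layer with weights W1 @ W2.
two_linear = x @ W1 @ W2
one_linear = x @ (W1 @ W2)
print(np.allclose(two_linear, one_linear))   # True

# With a non-linearity (tanh) in between, the composition is no longer linear.
nonlinear = np.tanh(x @ W1) @ W2
print(np.allclose(nonlinear, one_linear))    # False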
a) Gradients are used in certain training algorithms, most notably the error back-propagation algorithm.
b) If the neuron's activation function has a zero or constant gradient, there is no use in back-propagating, as learning through the gradients will not happen.
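A small numerical check (illustrative only; the sample points and step size are assumptions) contrasting a hard step activation, whose gradient is zero wherever it is defined, with the sigmoid, whose non-zero gradient gives back-propagation a useful learning signal:

import numpy as np

def step(z):
    # Hard threshold: its gradient is 0 everywhere it is defined.
    return np.where(z > 0, 1.0, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, -0.5, 0.5, 2.0])
eps = 1e-4
# Numerical derivatives: the step function gives all zeros (no learning signal),
# the sigmoid gives non-zero values that back-propagation can use.
print((step(z + eps) - step(z - eps)) / (2 * eps))        # all zeros
print((sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps))  # non-zero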
A common example is the bipolar step (threshold) activation function with threshold θ, applied to the net input y_in:
y = f(y_in), where
f(y_in) = +1, if y_in > θ
f(y_in) =  0, if −θ ≤ y_in ≤ θ
f(y_in) = −1, if y_in < −θ
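A short Python sketch of this threshold function (the value θ = 0.2 is only an illustrative choice of threshold):

import numpy as np

def bipolar_step(y_in, theta=0.2):
    # +1 above θ, −1 below −θ, 0 in the dead zone between them.
    return np.where(y_in > theta, 1.0,
           np.where(y_in < -theta, -1.0, 0.0))

print(bipolar_step(np.array([-0.5, 0.0, 0.1, 0.5])))  # [-1.  0.  0.  1.]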
• Single Layer Feedforward Networks- Here, the number of nodes in the output layer depends upon the number of outputs we desire from the network.
• Multi-Layer Feedforward Networks- This network is characterized by one or more hidden layers, called hidden because they have no direct contact with the visible output layer. Here, the number of nodes in the hidden layer depends upon the desired functionality and expected accuracy (a minimal forward-pass sketch follows this list).
• Single Node with its Own Feedback- This is sometimes also known as a single-neuron Recurrent Network. It helps in situations where we want the neuron to learn from its own feedback.
• Single Layer Recurrent Networks- Recurrent Networks remember the past. Simple feedforward networks also remember, but only through the weights adjusted during training; if we want the network to retain some context from the past across inputs, we use Recurrent Networks.
• Multi-Layer Recurrent Networks- Here, hidden layers are present in a Recurrent Network, so the same input set can produce a different output set depending on the context.
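A minimal forward-pass sketch of a multi-layer feedforward network in NumPy (the layer sizes, random weights, and sigmoid activation are illustrative assumptions, not a prescribed design):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
W_hidden = rng.normal(size=(3, 4))    # input -> hidden weights (3 inputs, 4 hidden nodes)
b_hidden = np.zeros(4)
W_out = rng.normal(size=(4, 2))       # hidden -> output weights (2 outputs)
b_out = np.zeros(2)

x = np.array([0.5, -1.0, 2.0])        # one input sample
hidden = sigmoid(x @ W_hidden + b_hidden)   # hidden layer, not directly visible
output = sigmoid(hidden @ W_out + b_out)    # output layer
print(output)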
a) The rectified linear activation function, or ReLU for short, is a piecewise linear function that outputs the input directly if it is positive and outputs zero otherwise.
b) The rectified linear activation function overcomes the vanishing gradient problem, allowing models to learn faster and perform better.
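A small sketch of ReLU and its gradient (the sample inputs are illustrative; assumes NumPy):

import numpy as np

def relu(z):
    # Passes positive inputs through unchanged, clamps negative inputs to zero.
    return np.maximum(0.0, z)

def relu_grad(z):
    # Gradient is 1 for positive inputs and 0 otherwise, so it does not
    # saturate on the positive side, which is why it eases vanishing gradients.
    return (z > 0).astype(float)

z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(z))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(z))  # [0. 0. 0. 1. 1.]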
Artificial Neural Networks are not inherently fault tolerant; they have to be purposely designed to behave that way, and fault tolerance can be introduced into ANNs by certain techniques. Biological Neural Networks, by contrast, are fault tolerant: they recover information through other parts of the network if some paths are disconnected.