Neurons and Activation Functions
Neurons process inputs through weighted sums and activation functions like Sigmoid and Rectified Linear Units (ReLU).
A neuron takes one or more weighted inputs and produces an output that depends on those inputs. The output is computed by taking the weighted sum of the neuron's inputs and feeding that sum into the activation function.
The Sigmoid function is the most common choice of activation function, but other non-linear functions, piecewise linear functions, or step functions are also used.
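As a sketch of the computation just described, the following minimal example (the function names are illustrative, not from any particular library) forms the weighted sum of a neuron's inputs and passes it through a sigmoid activation:

```python
import math

def sigmoid(z):
    """Logistic sigmoid activation: maps any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum of the inputs, fed into the activation function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Example: two inputs with different weights.
# z = 1.0*0.5 + 2.0*(-0.25) = 0.0, and sigmoid(0) = 0.5
y = neuron_output([1.0, 2.0], [0.5, -0.25])
```

Swapping `sigmoid` for any other activation function changes only the final step; the weighted-sum stage is the same for every neuron.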
The Rectified Linear Units (ReLU) function (NNET_ACTIVATIONS_RELU) is a commonly used activation function that addresses the vanishing gradient problem in larger neural networks.
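The vanishing gradient problem can be illustrated with a short, assumed calculation: the sigmoid's derivative is at most 0.25, so gradients shrink geometrically as they pass back through stacked sigmoid layers, while ReLU's derivative is exactly 1 for active (positive) inputs:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_grad(z):
    # sigma'(z) = sigma(z) * (1 - sigma(z)); its maximum is 0.25 at z = 0
    s = sigmoid(z)
    return s * (1.0 - s)

def relu_grad(z):
    # ReLU derivative: 1 for positive inputs, 0 otherwise
    return 1.0 if z > 0 else 0.0

# Best case for 10 stacked sigmoid layers: the gradient factor is 0.25
# per layer, so the product is 0.25**10, which is already below 1e-6.
sigmoid_chain = 0.25 ** 10
# For active ReLU units the factor is 1 per layer, so nothing vanishes.
relu_chain = 1.0 ** 10
```

This is why deep networks trained with sigmoid activations update their early layers very slowly, and why ReLU is preferred as depth grows.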
The following are some examples of activation functions:
- Logistic Sigmoid function
- Linear function
- Tanh function
- Arctan function
- Bipolar sigmoid function
- Rectified Linear Units (ReLU)
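The activation functions listed above can each be written as a one-line function; the sketch below uses their standard mathematical definitions:

```python
import math

def logistic_sigmoid(z):
    # Maps any real value into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def linear(z):
    # Identity: passes the weighted sum through unchanged
    return z

def tanh(z):
    # Hyperbolic tangent: output in (-1, 1)
    return math.tanh(z)

def arctan(z):
    # Inverse tangent: output in (-pi/2, pi/2)
    return math.atan(z)

def bipolar_sigmoid(z):
    # Logistic sigmoid rescaled to the range (-1, 1)
    return 2.0 / (1.0 + math.exp(-z)) - 1.0

def relu(z):
    # Zero for negative inputs, identity for positive inputs
    return max(0.0, z)
```

Note that the bipolar sigmoid equals `tanh(z / 2)`, so the two differ only in how steeply they saturate.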