Deep Learning · Intermediate · Also: Transfer Function, Neuron Activation

Activation Function

Definition

A mathematical function applied to each neuron's output that introduces non-linearity, enabling neural networks to learn complex, non-linear patterns rather than just linear combinations of inputs.

In Depth

Without activation functions, a neural network with any number of layers would be mathematically equivalent to a single-layer linear model — no matter how deep the stack, the overall transformation would remain linear. Activation functions introduce non-linearity at each neuron, making it possible for networks to learn the complex, non-linear patterns that appear in real data: curved decision boundaries, hierarchical abstractions, and intricate feature interactions.
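The collapse described above is easy to verify directly: composing two linear layers (the shapes and weights below are arbitrary, for illustration only) produces exactly the same mapping as one suitably chosen linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with no activation between them: weights and biases.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = rng.normal(size=3)

# Forward pass through both linear layers, one after the other.
deep = W2 @ (W1 @ x + b1) + b2

# The same transformation collapsed into a single linear layer.
W_combined = W2 @ W1
b_combined = W2 @ b1 + b2
shallow = W_combined @ x + b_combined

print(np.allclose(deep, shallow))  # True: the stack is a single linear map
```

Inserting any non-linear function between the two layers breaks this equivalence, which is precisely what gives depth its value.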

The most widely used activation today is ReLU (Rectified Linear Unit): max(0, x). It outputs the input if positive, zero otherwise. Despite its simplicity, ReLU trains faster and performs better than earlier alternatives like Sigmoid (which squashes outputs between 0 and 1) and Tanh (which squashes to -1 to 1) — partly because it avoids the 'vanishing gradient' problem that plagued deep networks with earlier activations. ReLU has its own failure mode, however: 'dying' neurons that get stuck outputting zero, since ReLU's gradient is zero for all negative inputs. Leaky ReLU and GELU are common variants that keep a small (or smooth) response for negative inputs to mitigate this.
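For concreteness, here is a sketch of the activations named above in NumPy. The GELU shown uses the tanh approximation commonly found in transformer implementations; the 0.01 slope for Leaky ReLU is just a conventional default.

```python
import numpy as np

def relu(x):
    """max(0, x): passes positives through, zeroes out negatives."""
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    """Like ReLU, but keeps a small slope alpha for negative inputs."""
    return np.where(x > 0, x, alpha * x)

def sigmoid(x):
    """Squashes any real input into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def gelu(x):
    """Tanh approximation of GELU, as used in many transformer codebases."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))        # negatives become 0, positives pass through
print(leaky_relu(x))  # negatives shrink by alpha instead of vanishing
print(sigmoid(x))     # all values land strictly between 0 and 1
print(np.tanh(x))     # all values land strictly between -1 and 1
```

Plotting these on the same axis makes the trade-offs visible: Sigmoid and Tanh saturate at the extremes (flat gradient), while ReLU and its variants stay unbounded on the positive side.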

The choice of activation function is a hyperparameter that affects training dynamics significantly. Sigmoid and Softmax are still used in output layers — Sigmoid for binary classification (outputting a probability between 0 and 1), Softmax for multi-class classification (outputting a probability distribution across classes). Understanding which activation to use where is fundamental to designing functional neural architectures.
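A minimal sketch of the multi-class case: Softmax turns a vector of raw scores (logits) into a probability distribution that sums to 1. The logits below are made up for illustration; the max-subtraction is a standard numerical-stability trick that leaves the result unchanged.

```python
import numpy as np

def softmax(z):
    # Subtracting the max avoids overflow in exp(); mathematically a no-op.
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # raw scores for 3 classes
probs = softmax(logits)

print(probs)        # non-negative values favoring the highest logit
print(probs.sum())  # 1.0: a valid probability distribution
```

In a 1000-class image model, the same function runs over a 1000-dimensional logit vector in the final layer, and the class with the largest probability is the prediction.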

Key Takeaway

Activation functions are the source of a neural network's expressive power — without them, even the deepest network would be no more powerful than a simple linear regression.

Real-World Applications

01 Hidden layers in image classifiers: ReLU activations enabling the network to learn non-linear visual features across millions of images.
02 Binary classification outputs: Sigmoid activation producing calibrated probabilities for spam detection or medical diagnosis.
03 Multi-class prediction: Softmax activation in the final layer of a 1000-class image recognition model.
04 Transformer architectures: GELU (Gaussian Error Linear Unit) activation used in BERT, GPT, and other modern LLMs.
05 Recurrent networks: Tanh activation historically used in LSTM gates to bound neuron outputs and aid gradient flow.