Deep Learning · Intermediate · Also known as: ANN, Neural Network, Feedforward Network

Artificial Neural Network (ANN)

Definition

A computational model inspired by the human brain, composed of interconnected layers of nodes (neurons) that process information and learn complex mappings from inputs to outputs.

In Depth

An Artificial Neural Network is loosely inspired by the biological neural networks in animal brains, though the resemblance is more metaphorical than mechanistic. It consists of layers of computational units called neurons, each of which receives numeric inputs, multiplies them by learned weights, sums the results, and passes that sum through a nonlinear activation function. The first layer receives raw data; intermediate "hidden" layers learn progressively more abstract representations; the final layer produces the model's prediction.
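The computation of a single neuron described above can be sketched in a few lines. This is a minimal illustration, not a production implementation; the specific weights, bias, and sigmoid activation are arbitrary choices for the example.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum: each input is multiplied by its learned weight,
    # then a bias term is added.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# Example: three inputs, three weights, one bias (all illustrative values).
out = neuron([0.5, -1.2, 3.0], weights=[0.4, 0.1, -0.2], bias=0.1)
```

A full layer is simply many such neurons applied to the same inputs, each with its own weights and bias.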

The network learns by adjusting its weights through backpropagation and gradient descent. For each training example, the network makes a prediction, computes the error (loss), and propagates that error backward through the network, calculating how much each weight contributed to the mistake. The optimizer then adjusts each weight slightly in the direction that reduces the error — a process repeated millions or billions of times until the network's predictions are accurate.

Modern neural networks bear little resemblance to early perceptrons. Today's architectures — Convolutional Neural Networks, Recurrent Neural Networks, Transformers — are specialized variants optimized for specific data types and tasks. But all share the core principle of learned, layered representations. The 'depth' in Deep Learning simply refers to having many such layers — deep stacks of interconnected neurons that build complex understanding from simple components.
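The "deep stack" idea can be shown directly: a deep feedforward network is just repeated application of the same layer operation, with each layer's output feeding the next. The weights, biases, and tanh activation below are illustrative placeholders.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums followed by tanh activation."""
    return [math.tanh(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# "Depth" is simply composition: output of one layer is input to the next.
x = [1.0, -0.5]
h1 = layer(x,  [[0.3, -0.2], [0.1, 0.4]], [0.0, 0.1])  # hidden layer
y  = layer(h1, [[0.5, 0.5]], [0.0])                    # output layer
```

CNNs, RNNs, and Transformers replace this fully connected layer with structured variants (convolutions, recurrence, attention), but the layered-composition principle is the same.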

Key Takeaway

An Artificial Neural Network learns by repeatedly adjusting millions of weights to minimize prediction errors — the same way a student improves by practicing and correcting mistakes, scaled to billions of examples.

Real-World Applications

01 Tabular data prediction: feedforward ANNs applied to structured datasets for tasks like loan default or churn prediction.
02 Function approximation: ANNs modeling complex, nonlinear relationships in physics simulations or financial models.
03 Pattern recognition: recognizing handwritten digits (MNIST) — the classic benchmark that demonstrated ANN viability.
04 Game playing: ANNs as the value and policy networks in AlphaGo and similar RL systems.
05 Recommendation systems: neural collaborative filtering that learns user-item embeddings for personalization.