A computational model inspired by the human brain, composed of interconnected layers of nodes (neurons) that process information and learn complex mappings from inputs to outputs.
In Depth
An Artificial Neural Network is loosely inspired by the biological neural networks in animal brains, though the resemblance is more metaphorical than mechanistic. It consists of layers of computational units called neurons, each of which receives numeric inputs, multiplies them by learned weights, sums the results, and passes the output through an activation function. The first layer receives raw data; intermediate 'hidden' layers learn progressively abstract representations; the final layer produces the model's prediction.
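The neuron computation described above can be sketched in a few lines of plain Python. This is a minimal illustration, not a production implementation: the weights, biases, and sigmoid activation are arbitrary choices made for the example.

```python
import math

def neuron(inputs, weights, bias):
    # A neuron: weighted sum of inputs plus a bias,
    # passed through a sigmoid activation function.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

def forward(x):
    # A tiny two-layer network: a hidden layer of two neurons
    # feeding a single output neuron. Weights are illustrative only.
    h1 = neuron(x, [0.5, -0.2], 0.1)
    h2 = neuron(x, [0.3, 0.8], -0.4)
    return neuron([h1, h2], [1.2, -0.7], 0.0)

print(forward([1.0, 2.0]))  # a single prediction in (0, 1)
```

In a real network the hidden layer would contain hundreds or thousands of neurons, and the weights would be learned rather than fixed, but the per-neuron arithmetic is exactly this.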
The network learns by adjusting its weights through backpropagation and gradient descent. For each training example, the network makes a prediction, computes the error (loss), and propagates that error backward through the network, calculating how much each weight contributed to the mistake. The optimizer then adjusts each weight slightly in the direction that reduces the error — a process repeated millions or billions of times until the loss stops improving and the network's predictions are reliably accurate.
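The training loop above can be made concrete with a single sigmoid neuron learning the logical OR function. This toy sketch computes the gradient by hand (the chain-rule step that backpropagation automates for deep networks); the dataset, learning rate, and squared-error loss are choices made for the example.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy training set: the logical OR function.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5  # learning rate: how far each gradient step moves the weights

for epoch in range(2000):
    for x, target in data:
        # Forward pass: make a prediction.
        z = w[0] * x[0] + w[1] * x[1] + b
        p = sigmoid(z)
        # Backward pass: gradient of the squared loss (p - target)^2
        # with respect to z, using the sigmoid derivative p * (1 - p).
        grad_z = 2 * (p - target) * p * (1 - p)
        # Gradient descent: nudge each weight against its gradient.
        w[0] -= lr * grad_z * x[0]
        w[1] -= lr * grad_z * x[1]
        b -= lr * grad_z

for x, target in data:
    p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    print(x, round(p, 2), "target:", target)
```

After training, the predictions land near 0 for the (0, 0) input and near 1 for the others. Deep learning frameworks replace the hand-written `grad_z` line with automatic differentiation, but the loop structure (predict, compute loss, compute gradients, step) is the same.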
Modern neural networks bear little resemblance to early perceptrons. Today's architectures — Convolutional Neural Networks, Recurrent Neural Networks, Transformers — are specialized variants optimized for specific data types and tasks. But all share the core principle of learned, layered representations. The 'depth' in Deep Learning simply refers to having many such layers — deep stacks of interconnected neurons that build complex understanding from simple components.
An Artificial Neural Network learns by repeatedly adjusting millions of weights to minimize prediction errors — the same way a student improves by practicing and correcting mistakes, scaled to billions of examples.

