Machine Learning · Intermediate · Also known as: Overtraining, Memorization

Overfitting

Definition

A modeling failure where a machine learning model learns the training data too closely — memorizing noise and edge cases — and subsequently performs poorly on new, unseen data.

In Depth

Overfitting is one of the most fundamental failure modes in Machine Learning. A model that overfits has learned the training data so precisely that it has essentially memorized it — including random noise, outliers, and idiosyncrasies specific to that particular dataset. When confronted with new data, the model performs poorly because its 'knowledge' doesn't generalize beyond the examples it has seen.

The classic analogy is a student who memorizes every practice exam verbatim but can't solve any problem phrased differently. The model achieves high accuracy on training data — sometimes near 100% — while its accuracy on a held-out test set is significantly lower. This gap between training and test performance is the signature of overfitting. Its counterpart, underfitting, occurs when a model is too simple to capture even the underlying patterns in the training data.

The antidote to overfitting is regularization — a family of techniques that constrain model complexity to force generalization. L1/L2 regularization penalizes large parameter values. Dropout randomly disables neurons during training. Early stopping halts training when validation performance begins to degrade. Data augmentation artificially expands the training data. Cross-validation helps detect overfitting before deployment. All of these address the same root cause: too much model complexity relative to the data available.
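
As a rough sketch of how three of these techniques look in practice, the PyTorch snippet below combines L2 weight decay, dropout, and early stopping on a small synthetic regression task. The architecture, dataset, and hyperparameters are illustrative assumptions, not tuned recommendations.

```python
# A sketch combining three regularization techniques: L2 weight decay,
# dropout, and early stopping. All sizes and settings are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny synthetic regression task: 200 noisy samples, 20 features.
X = torch.randn(200, 20)
y = X[:, :3].sum(dim=1, keepdim=True) + 0.1 * torch.randn(200, 1)
X_train, y_train, X_val, y_val = X[:150], y[:150], X[150:], y[150:]

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # dropout: randomly disables neurons during training
    nn.Linear(64, 1),
)
# weight_decay applies an L2-style penalty that shrinks parameter values.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.MSELoss()

best_val, patience, bad_epochs = float("inf"), 10, 0
for epoch in range(500):
    model.train()
    optimizer.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    optimizer.step()

    model.eval()  # evaluation mode turns dropout off
    with torch.no_grad():
        val_loss = loss_fn(model(X_val), y_val).item()

    # Early stopping: halt once validation loss stops improving.
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print(f"stopped at epoch {epoch}; best validation loss {best_val:.4f}")
            break
```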

Key Takeaway

Overfitting is the model learning the map instead of the territory — a powerful but brittle system that fails whenever reality looks slightly different from its training examples.

Real-World Applications

01 Fraud detection models that achieve perfect training accuracy but fail on new fraud patterns because they memorized historical attack signatures.
02 Medical models trained on homogeneous hospital data that generalize poorly to patients from different demographics or clinics.
03 Stock price prediction models that fit historical market data with extreme precision but fail entirely in live trading.
04 NLP models that perfectly classify training documents but perform poorly on text from a different domain or time period.
05 Early-stage deep learning models with millions of parameters trained on small datasets — a classic overfitting recipe.

Frequently Asked Questions

How can you tell if a model is overfitting?

The classic sign is a large gap between training accuracy and test/validation accuracy. If your model achieves 99% accuracy on training data but only 70% on new data, it's overfitting. Monitoring learning curves — plotting training and validation loss over time — is the standard way to detect overfitting. When validation loss starts increasing while training loss keeps decreasing, overfitting has begun.
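
To make this diagnostic concrete, the scikit-learn sketch below grows a decision tree's capacity and watches the train/validation gap widen. The synthetic dataset and model choice are arbitrary assumptions; the widening gap is the point.

```python
# A sketch of the train/validation gap diagnostic, using a decision tree
# whose capacity grows with max_depth.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for depth in (2, 4, 8, 16, None):  # None lets the tree grow until leaves are pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    train_acc, val_acc = tree.score(X_tr, y_tr), tree.score(X_val, y_val)
    # A large, growing train/validation gap is the signature of overfitting.
    print(f"depth={depth}: train={train_acc:.2f}, val={val_acc:.2f}, "
          f"gap={train_acc - val_acc:.2f}")
```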

What causes overfitting?

The main causes are: too little training data relative to model complexity, training for too many epochs, a model with too many parameters (too much capacity), noisy or unrepresentative training data, and insufficient regularization. In essence, overfitting happens when the model has more freedom to memorize than the data has signal to teach.
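
A toy demonstration of that imbalance: the NumPy sketch below fits polynomials of increasing degree to just ten noisy samples of a sine wave. The degrees, noise level, and sample counts are arbitrary assumptions chosen to make the effect visible.

```python
# Capacity outstripping data: a degree-9 polynomial has roughly one
# parameter per training point, so it can memorize the noise.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)  # only 10 training points
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(10)

for degree in (3, 9):
    coeffs = np.polyfit(x, y, degree)
    train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    # Fresh inputs the fit has never seen, compared against the true function.
    x_new = rng.uniform(0, 1, 200)
    test_mse = np.mean((np.polyval(coeffs, x_new) - np.sin(2 * np.pi * x_new)) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```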

What are the best techniques to prevent overfitting?

Key techniques include: regularization (L1/L2 penalties on weights), dropout (randomly deactivating neurons during training), early stopping (halting training when validation performance degrades), data augmentation (artificially expanding the training set), cross-validation (evaluating on multiple data splits), and reducing model complexity. Collecting more diverse training data is often the most effective solution.
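
As a minimal sketch of cross-validation catching overfitting, the scikit-learn snippet below scores an unregularized linear model against an L2-regularized one on a deliberately small, high-dimensional dataset. The dataset shape and the alpha value are illustrative assumptions.

```python
# Cross-validation as an overfitting check: each model is fit and scored
# on five different train/validation splits.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

# Few samples, many features: a classic overfitting recipe.
X, y = make_regression(n_samples=60, n_features=50, noise=10.0, random_state=0)

for name, model in [("unregularized", LinearRegression()),
                    ("ridge (L2)", Ridge(alpha=10.0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.2f} (+/- {scores.std():.2f})")
```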